Discrete Event Simulation Modeling and Analysis of Key Leader Engagements
2012-06-01
to offer. GreenPlayer agents require four parameters, pC, pKLK, pTK, and pRK, which give probabilities for being corrupt, having key leader ... HandleMessageRequest component. The same parameter constraints apply to these four parameters. The parameter pRK is the same parameter from the CreatePlayers component ... whether the local Green player has resource critical knowledge by using the parameter pRK. It schedules an EndResourceKnowledgeRequest event, passing
Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements
NASA Technical Reports Server (NTRS)
Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.
2016-01-01
The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements
NASA Technical Reports Server (NTRS)
Jansen, Ralph; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.
2015-01-01
The purpose of this presentation is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment
NASA Astrophysics Data System (ADS)
Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin
2017-10-01
Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms place high demands on hardware and are not suitable for mobile terminals with limited computing resources. In addition, these public-key encryption algorithms cannot resist quantum computing attacks. This paper studies the public-key encryption algorithm NTRU in the context of quantum computing by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve this probability: first, increase the value of the parameter q; second, add an authentication condition during the signature phase that checks the reasonable-signature requirements. Experimental results show that the proposed signature scheme achieves zero leakage of the private-key information in the signature value and increases the probability of generating a reasonable signature value. It also improves the signing rate and avoids the propagation of invalid signatures in the network, although the scheme imposes certain restrictions on parameter selection.
Mapping of multiple parameter m-health scenarios to mobile WiMAX QoS variables.
Alinejad, Ali; Philip, N; Istepanian, R S H
2011-01-01
Multiparameter m-health scenarios with demanding bandwidth requirements will be among the key applications in future 4G mobile communication systems. These applications will potentially require specific spectrum allocations with higher quality-of-service (QoS) requirements. Furthermore, one of the key 4G technologies targeting m-health will be medical applications based on WiMAX systems. Hence, it is timely to evaluate such multiparameter m-health scenarios over mobile WiMAX networks. In this paper, we present a preliminary performance analysis of a mobile WiMAX network for multiparameter telemedical scenarios. In particular, we map the medical QoS requirements to typical WiMAX QoS parameters to optimise the performance of these parameters in a typical m-health scenario. Preliminary performance analyses of the proposed multiparameter scenarios are evaluated to provide essential information for future medical QoS requirements and constraints in these telemedical network environments.
Key Performance Parameter Driven Technology Goals for Electric Machines and Power Systems
NASA Technical Reports Server (NTRS)
Bowman, Cheryl; Jansen, Ralph; Brown, Gerald; Duffy, Kirsten; Trudell, Jeffrey
2015-01-01
Transitioning aviation to low-carbon propulsion is one of the crucial strategic research thrusts and is a driver in the search for alternative propulsion systems for advanced aircraft configurations. This work requires multidisciplinary skills from multiple entities. The feasibility of scaling up various electric drive system technologies to meet the requirements of a large commercial transport is discussed in terms of key parameters. Functional requirements are identified that impact the power system design. A break-even analysis is presented to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
Performance of device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Cao, Zhu; Zhao, Qi; Ma, Xiongfeng
2016-07-01
Quantum key distribution provides information-theoretically secure communication. In practice, device imperfections may jeopardise the system security. Device-independent quantum key distribution solves this problem by providing secure keys even when the quantum devices are untrusted and uncharacterized. Following a recent security proof of device-independent quantum key distribution, we improve the key rate by tightening the parameter choice in the security proof. In practice, where the system is lossy, we further improve the key rate by taking into account the loss position information. Our numerical simulation shows that our method can outperform existing results. Meanwhile, we outline clear experimental requirements for implementing device-independent quantum key distribution. The maximal tolerable error rate is 1.6%, the minimal required transmittance is 97.3%, and the minimal required visibility is 96.8%.
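As a rough illustration of how a device-independent key rate depends on the error rate, the sketch below uses the well-known CHSH-based asymptotic bound of Acín et al., r = 1 - h(Q) - h((1 + sqrt((S/2)^2 - 1))/2), assuming a depolarized singlet so that S = 2*sqrt(2)*(1 - 2Q). This is not the tightened proof the abstract refers to (whose 1.6% threshold comes from a stricter analysis); it is a minimal sketch of the standard formula only.

```python
import math

def h(p):
    """Binary entropy in bits; h(0) = h(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def diqkd_key_rate(qber):
    """Asymptotic CHSH-based DIQKD rate (Acin et al. style bound),
    assuming a depolarized singlet: S = 2*sqrt(2)*(1 - 2*Q)."""
    s = 2 * math.sqrt(2) * (1 - 2 * qber)
    if s <= 2.0:  # no Bell violation -> no device-independent key
        return 0.0
    eve_term = h((1 + math.sqrt((s / 2) ** 2 - 1)) / 2)
    return max(0.0, 1 - h(qber) - eve_term)
```

With this particular bound the rate vanishes near Q of about 7.1%; the tighter proof in the abstract trades a lower error threshold for device independence under weaker assumptions.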
Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production
NASA Astrophysics Data System (ADS)
Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne
2018-05-01
A fast simulation method is introduced that dramatically reduces the time required for the impact parameter calculation, a key observable in physics analyses of high-energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes, using the Bethe-Heitler formula, the Tsai formula, and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time using this fast simulation method is 10⁴ times shorter than that of the full GEANT4 simulation.
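A toy version of the "simple geometric model" idea can be sketched as follows: a photon converts at a detector-layer radius, the electron inherits a typical opening angle of order m_e/E, and the impact parameter is roughly the conversion radius times that angle. The layer radii, the exponential angle distribution, and all numbers here are invented for illustration; this is not the Bethe-Heitler/Tsai treatment of the paper.

```python
import math
import random

M_E = 0.511e-3  # electron mass [GeV]

def impact_parameter_mc(e_gamma_gev, r_layers_cm, n=10000, seed=1):
    """Toy geometric Monte Carlo: pick a conversion-layer radius,
    draw an opening angle with mean m_e/E_gamma, and approximate the
    transverse impact parameter as r_conv * sin(theta). Returns the
    mean impact parameter in cm."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        r_conv = rng.choice(r_layers_cm)            # conversion radius [cm]
        theta = rng.expovariate(e_gamma_gev / M_E)  # mean m_e/E [rad]
        total += r_conv * math.sin(theta)
    return total / n
```

The qualitative behaviour matches intuition: higher photon energy gives smaller opening angles and hence smaller impact parameters.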
Evaluation of groundwater resources requires the knowledge of the capacity of aquifers to store and transmit ground water. This requires estimates of key hydraulic parameters, such as the transmissivity, among others. The transmissivity T (m2/sec) is a hydrauli...
Requirements Document for Development of a Livermore Tomography Tools Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seetho, I. M.
In this document, we outline an exercise performed at LLNL to evaluate the user-interface deficits of an LLNL-developed CT reconstruction software package, Livermore Tomography Tools (LTT). We observe that a difficult-to-use command-line interface and a lack of support functions combine to create a bottleneck in the CT reconstruction process when input parameters to key functions are not well known. Through the exercise of systems engineering best practices, we generate key performance parameters for an LTT interface refresh, and specify a combination of back-end (“test-mode” functions) and front-end (graphical user interface visualization and command scripting tools) solutions to LTT’s poor user interface that aim to mitigate issues and lower costs associated with CT reconstruction using LTT. Key functional and non-functional requirements and risk mitigation strategies for the solution are outlined and discussed.
Autonomous Parameter Adjustment for SSVEP-Based BCIs with a Novel BCI Wizard.
Gembler, Felix; Stawicki, Piotr; Volosyak, Ivan
2015-01-01
Brain-Computer Interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g., stimulation frequencies). Calibration usually requires experienced/trained personnel to select predefined parameters, though in real-life scenarios an interface that allows people with no programming experience to set up the BCI would be desirable. Another problem regarding BCI performance is BCI illiteracy (also called BCI deficiency): many articles have reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass those problems we developed an SSVEP-BCI wizard, a system that automatically determines user-dependent key parameters to customize SSVEP-based BCI systems. This wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase "RHINE WAAL UNIVERSITY" with a spelling application after the key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached (all subjects reached an accuracy above 85%, and 25 subjects even reached 100% accuracy).
Management of physical health in patients with schizophrenia: practical recommendations.
Heald, A; Montejo, A L; Millar, H; De Hert, M; McCrae, J; Correll, C U
2010-06-01
Improved physical health care is a pressing need for patients with schizophrenia. It can be achieved by means of a multidisciplinary team led by the psychiatrist. Key priorities should include: selection of antipsychotic therapy with a low risk of weight gain and metabolic adverse effects; routine assessment, recording and longitudinal tracking of key physical health parameters, ideally by electronic spreadsheets; and intervention to control CVD risk following the same principles as for the general population. A few simple tools to assess and record key physical parameters, combined with lifestyle intervention and pharmacological treatment as indicated, could significantly improve physical outcomes. Effective implementation of strategies to optimise physical health parameters in patients with severe enduring mental illness requires engagement and communication between psychiatrists and primary care in most health settings. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.
High-performance radial AMTEC cell design for ultra-high-power solar AMTEC systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, T.J.; Huang, C.
1999-07-01
Alkali Metal Thermal to Electric Conversion (AMTEC) technology is rapidly maturing for potential application in ultra-high-power solar AMTEC systems required by potential future US Air Force (USAF) spacecraft missions in medium-earth and geosynchronous orbits (MEO and GEO). Solar thermal AMTEC power systems potentially have several important advantages over current solar photovoltaic power systems in ultra-high-power spacecraft applications for USAF MEO and GEO missions. This work presents key aspects of radial AMTEC cell design to achieve high cell performance in solar AMTEC systems delivering more than 50 kW(e) to support high-power USAF missions. These missions typically require AMTEC cell conversion efficiency greater than 25%. A sophisticated design-parameter methodology is described and demonstrated that establishes optimum design parameters in any radial cell design to satisfy high-power mission requirements. Specific relationships, which are distinct functions of cell temperatures and pressures, define critical dependencies between key cell design parameters, particularly the impact of parasitic thermal losses on Beta Alumina Solid Electrolyte (BASE) area requirements, voltage, number of BASE tubes, and system power production for both maximum power-per-BASE-area and optimum-efficiency conditions. Finally, some high-level system tradeoffs are demonstrated using the design-parameter methodology to establish high-power radial cell design requirements and philosophy. The discussion highlights how to incorporate this methodology with sophisticated SINDA/FLUINT AMTEC cell modeling capabilities to determine optimum radial AMTEC cell designs.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the atmosphere. (ii) Car-seal or lock-and-key valve closures. Secure any bypass line valve in the closed position with a car-seal or a lock-and-key type configuration. You must visually inspect the seal... sensor. (vii) At least monthly, inspect components for integrity and electrical connections for...
Code of Federal Regulations, 2014 CFR
2014-07-01
... the atmosphere. (ii) Car-seal or lock-and-key valve closures. Secure any bypass line valve in the closed position with a car-seal or a lock-and-key type configuration. You must visually inspect the seal... sensor. (vii) At least monthly, inspect components for integrity and electrical connections for...
NASA Astrophysics Data System (ADS)
Reynerson, Charles Martin
This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.
NASA Astrophysics Data System (ADS)
Zhang, Xiao-bo; Wang, Zhi-xue; Li, Jian-xin; Ma, Jian-hui; Li, Yang; Li, Yan-qiang
To enable effective realization of Bluetooth functions and tracking of information during production, vehicle Bluetooth hands-free devices need to download key parameters such as the Bluetooth address, CVC license, and base plate number. The aim is therefore to find a simple and effective method for downloading the parameters to each vehicle Bluetooth hands-free device and for controlling and recording the use of the parameters. In this paper, a Bluetooth Serial Peripheral Interface (SPI) programmer device is used to switch the parallel port to SPI. The first step in downloading parameters is to simulate SPI with the parallel port: to perform the SPI function, the parallel port is operated in accordance with the SPI timing. The next step is to implement the SPI data transceiver functions according to the selected programming parameters. With this new method, parameter downloading is fast and accurate, and fully meets the production requirements for vehicle Bluetooth hands-free devices; it has played a large role in the production line.
BIOSURFACES AND BIOAVAILABILITY: A NANOSCALE OVERVIEW
Environmentally, contaminant bioavailability is a key parameter in determining exposure assessment and ultimately risk assessment/risk management. Defining bioavailability requires knowledge of the contaminant spatial/temporal disposition and transportability and the thermodyna...
Present and future free-space quantum key distribution
NASA Astrophysics Data System (ADS)
Nordholt, Jane E.; Hughes, Richard J.; Morgan, George L.; Peterson, C. Glen; Wipf, Christopher C.
2002-04-01
Free-space quantum key distribution (QKD), more popularly known as quantum cryptography, uses single-photon free-space optical communications to distribute the secret keys required for secure communications. At Los Alamos National Laboratory we have demonstrated a fully automated system that is capable of operation at any time of day over a horizontal range of several kilometers. This has proven the technology is capable of operation from a spacecraft to the ground, opening up the possibility of QKD between any group of users anywhere on Earth. This system, the prototyping of a new system for use on a spacecraft, and the techniques required for worldwide quantum key distribution will be described. The operational parameters and performance of a system designed to operate between low earth orbit (LEO) and the ground will also be discussed.
Sensitivity of Austempering Heat Treatment of Ductile Irons to Changes in Process Parameters
NASA Astrophysics Data System (ADS)
Boccardo, A. D.; Dardati, P. M.; Godoy, L. A.; Celentano, D. J.
2018-06-01
Austempered ductile iron (ADI) is frequently obtained by means of a three-step austempering heat treatment. The parameters of this process play a crucial role on the microstructure of the final product. This paper considers the influence of some process parameters ( i.e., the initial microstructure of ductile iron and the thermal cycle) on key features of the heat treatment (such as minimum required time for austenitization and austempering and microstructure of the final product). A computational simulation of the austempering heat treatment is reported in this work, which accounts for a coupled thermo-metallurgical behavior in terms of the evolution of temperature at the scale of the part being investigated (the macroscale) and the evolution of phases at the scale of microconstituents (the microscale). The paper focuses on the sensitivity of the process by looking at a sensitivity index and scatter plots. The sensitivity indices are determined by using a technique based on the variance of the output. The results of this study indicate that both the initial microstructure and the thermal cycle parameters play a key role in the production of ADI. This work also provides a guideline to help selecting values of the appropriate process parameters to obtain parts with a required microstructural characteristic.
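The variance-based sensitivity indices mentioned above can be illustrated with the first-order estimator S_i = Var(E[Y|X_i]) / Var(Y), approximated by binning samples of one input. The "response" below is a made-up linear stand-in (an assumed dominant austempering temperature and a weak nodule-count effect), not the paper's coupled thermo-metallurgical model.

```python
import random
import statistics as st

def first_order_index(xs, ys, bins=20):
    """Estimate the first-order sensitivity index
    S_i = Var(E[Y|X_i]) / Var(Y) by binning the input samples and
    taking the variance of the per-bin conditional means."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins or 1.0
    groups = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        i = min(int((x - lo) / width), bins - 1)
        groups[i].append(y)
    cond_means = [st.mean(g) for g in groups if g]
    return st.pvariance(cond_means) / st.pvariance(ys)

# Illustrative inputs and response (coefficients are invented):
rng = random.Random(0)
temp = [rng.uniform(260, 400) for _ in range(20000)]  # austempering T [C]
nod = [rng.uniform(100, 300) for _ in range(20000)]   # nodule count [1/mm^2]
y = [3.0 * t + 0.1 * n + rng.gauss(0, 5) for t, n in zip(temp, nod)]

s_temp = first_order_index(temp, y)  # dominant input -> index near 1
s_nod = first_order_index(nod, y)    # weak input -> index near 0
```

The estimator correctly ranks the dominant input, which is the kind of conclusion the paper draws from its sensitivity indices and scatter plots.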
Air Force Handbook. 109th Congress
2009-01-01
FY06 Combat Survivor Evader Locator (CSEL) Acquisition Status, Capabilities/Profile, Functions/Performance Parameters • Air Force's primary source for... Broadcast Service (GBS) Capabilities/Profile, Acquisition Status, Functions/Performance Parameters • Purchase Requirements (Phase 2): • 3 primary... Operations (AF CONOPS) that support the CSAF and joint vision of combat operations. • AF CONOPS describe key Air Force mission and/or functional areas
Key Parameters for Operator Diagnosis of BWR Plant Condition during a Severe Accident
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clayton, Dwight A.; Poore, III, Willis P.
2015-01-01
The objective of this research is to examine the key information needed from nuclear power plant instrumentation to guide severe accident management and mitigation for boiling water reactor (BWR) designs (specifically, a BWR/4-Mark I), estimate the environmental conditions that the instrumentation will experience during a severe accident, and identify potential gaps in existing instrumentation that may require further research and development. This report notes the key parameters that instrumentation needs to measure to help operators respond to severe accidents. A follow-up report will assess severe accident environmental conditions, as estimated by severe accident simulation model analysis, for a specific US BWR/4-Mark I plant for those instrumentation systems considered most important for accident management purposes.
NASA Technical Reports Server (NTRS)
Brunn, D. L.; Wu, S. C.; Thom, E. H.; Mclaughlin, F. D.; Sweetser, B. M.
1980-01-01
An overview of the design of the ORION mobile system is presented. System capability and performance characteristics are outlined. Functional requirements and key performance parameters are stated for each of the nine subsystems. A master design and implementation schedule is given.
Polarization variations in installed fibers and their influence on quantum key distribution systems.
Ding, Yu-Yang; Chen, Hua; Wang, Shuang; He, De-Yong; Yin, Zhen-Qiang; Chen, Wei; Zhou, Zheng; Guo, Guang-Can; Han, Zheng-Fu
2017-10-30
Polarization variations in installed fibers are complex and volatile, and can severely affect the performance of polarization-sensitive quantum key distribution (QKD) systems. Based on recorded data about polarization variations in different installed fibers, we establish an analytical methodology to quantitatively evaluate the influence of polarization variations on polarization-sensitive QKD systems. Using the increase in quantum bit error rate induced by polarization variations as a key criterion, we propose two parameters - polarization drift time and required tracking speed - to characterize polarization variations. For field-buried and aerial fibers of different lengths, we quantitatively evaluate the influence of polarization variations, and also provide requirements and suggestions for the polarization basis alignment modules of QKD systems deployed in different kinds of fibers.
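The link between the two proposed parameters can be sketched with an idealized model: in polarization-coded QKD a misalignment angle theta between transmitted and analyzed polarizations contributes sin^2(theta) to the QBER, so a tolerated extra QBER maps to a maximum angle, and dividing by an assumed constant drift rate gives a drift time. Real fiber drift is neither constant-rate nor one-dimensional, so this is a simplification, not the paper's methodology.

```python
import math

def qber_from_misalignment(theta_rad):
    """Polarization-coded QKD: misalignment angle theta between the
    transmitted and analyzed polarization adds sin^2(theta) QBER."""
    return math.sin(theta_rad) ** 2

def polarization_drift_time(max_extra_qber, drift_rate_deg_per_min):
    """Minutes until drift alone adds max_extra_qber, assuming a
    constant angular drift rate (an idealization of real fibers)."""
    theta_max = math.degrees(math.asin(math.sqrt(max_extra_qber)))
    return theta_max / drift_rate_deg_per_min
```

For example, tolerating 1% extra QBER allows about 5.7 degrees of misalignment, so a fiber drifting at 1 degree per minute would need realignment roughly every 5.7 minutes.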
Overview of Characterization Techniques for High Speed Crystal Growth
NASA Technical Reports Server (NTRS)
Ravi, K. V.
1984-01-01
Features of characterization requirements for crystals, devices and completed products are discussed. Key parameters of interest in semiconductor processing are presented. Characterization as it applies to process control, diagnostics and research needs is discussed with appropriate examples.
Satellite-instrument system engineering best practices and lessons
NASA Astrophysics Data System (ADS)
Schueler, Carl F.
2009-08-01
This paper focuses on system engineering development issues driving satellite remote sensing instrumentation cost and schedule. A key best practice is early assessment of mission and instrumentation requirement priorities driving performance trades among the major instrumentation measurements: radiometry, spatial field of view and image quality, and spectral performance. Key lessons include attention to technology availability and applicability to prioritized requirements, care in applying heritage, approaching fixed-price and cost-plus contracts with appropriate attention to risk, and assessing design options with attention to customer preference as well as design performance, development cost, and schedule. A key element of success, either in contract competition or in execution, is team experience. Perhaps the most crucial aspect of success, however, is thorough requirements analysis and flowdown to specifications driving design performance, with sufficient parameter margin to allow for mistakes or oversights - the province of system engineering from design inception through development, test, and delivery.
Nuclear thermal propulsion engine system design analysis code development
NASA Astrophysics Data System (ADS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.
1992-01-01
A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output; this model was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.
Estimation of end point foot clearance points from inertial sensor data.
Santhiranayagam, Braveena K; Lai, Daniel T H; Begg, Rezaul K; Palaniswami, Marimuthu
2011-01-01
Foot clearance parameters provide useful insight into tripping risks during walking. This paper proposes a technique for estimating key foot clearance parameters using inertial sensor (accelerometer and gyroscope) data. Fifteen features were extracted from raw inertial sensor measurements, and a regression model was used to estimate two key foot clearance parameters: the first maximum vertical clearance (mx1) after toe-off and the minimum toe clearance (MTC) of the swing foot. Comparisons are made against measurements obtained using an optoelectronic motion capture system (Optotrak) at 4 different walking speeds. General Regression Neural Networks (GRNN) were used to estimate the desired parameters from the sensor features. Eight subjects' foot clearance data were examined, and a leave-one-subject-out (LOSO) method was used to select the best model. The best average root mean square errors (RMSE) across all subjects, obtained using all sensor features at the maximum speed, were 5.32 mm for mx1 and 4.04 mm for MTC. Further application of a hill-climbing feature selection technique resulted in a 0.54-21.93% improvement in RMSE and required fewer input features. The results demonstrated that raw inertial sensor data with regression models and feature selection can accurately estimate key foot clearance parameters.
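A GRNN of the kind used in the study is essentially Nadaraya-Watson kernel regression: the prediction is a Gaussian-weighted average of training targets. The sketch below shows the core computation; the feature values and smoothing parameter are arbitrary illustrations, not the paper's fifteen sensor features.

```python
import math

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """General Regression Neural Network (Nadaraya-Watson kernel
    regression): Gaussian-weighted average of training targets.
    x_train and x_query are lists of feature values per sample."""
    num = den = 0.0
    for xi, yi in zip(x_train, y_train):
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x_query))
        w = math.exp(-d2 / (2 * sigma ** 2))
        num += w * yi
        den += w
    return num / den
```

In practice sigma is the only tunable parameter of a GRNN, which is one reason it suits LOSO model selection: each held-out subject only requires re-evaluating the weighted average, not retraining.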
TORABIPOUR, Amin; ZERAATI, Hojjat; ARAB, Mohammad; RASHIDIAN, Arash; AKBARI SARI, Ali; SARZAIEM, Mahmuod Reza
2016-01-01
Background: To determine the required hospital beds using a stochastic simulation approach in cardiac surgery departments. Methods: This study was performed from Mar 2011 to Jul 2012 in three phases: first, collection of data from 649 patients in the cardiac surgery departments of two large teaching hospitals (in Tehran, Iran); second, statistical analysis and formulation of a multivariate linear regression model to determine factors that affect patients' length of stay; third, development of a stochastic simulation system (from admission to discharge) based on key parameters to estimate required bed capacity. Results: The current cardiac surgery department with 33 beds can admit patients on only 90.7% of days (4535 d), and more than 33 beds will be required on only 9.3% of days (efficient cut-off point). According to the simulation method, the studied cardiac surgery department will require 41–52 beds to admit all patients over the next 12 years. Finally, a one-day reduction in length of stay leads to a decrease of two hospital beds needed annually. Conclusion: Variation in length of stay and its affecting factors can affect required beds. Statistical and stochastic simulation models are applicable and useful methods to estimate and manage hospital beds based on key hospital parameters. PMID:27957466
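The admission-to-discharge simulation idea can be sketched as a minimal Monte Carlo bed-demand model: Poisson admissions each day, a sampled length of stay per patient, and a bed count chosen to cover a target fraction of days. The arrival rate, LOS distribution, and coverage target below are invented for illustration; the study's model is fitted to its own patient data.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's multiplicative method for Poisson-distributed samples."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def required_beds(daily_arrivals, los_sampler, horizon_days=365,
                  coverage=0.95, seed=42):
    """Simulate daily bed occupancy over the horizon and return the
    bed count sufficient on `coverage` fraction of simulated days."""
    rng = random.Random(seed)
    occupancy = [0] * horizon_days
    for day in range(horizon_days):
        for _ in range(poisson(rng, daily_arrivals)):
            los = max(1, round(los_sampler(rng)))  # length of stay [days]
            for d in range(day, min(day + los, horizon_days)):
                occupancy[d] += 1
    return sorted(occupancy)[int(coverage * (horizon_days - 1))]
```

With, say, 2 admissions/day and a mean LOS of 8 days, mean occupancy is around 16 beds, and the 95%-coverage bed count sits above it, illustrating why variation in LOS (not just its mean) drives required capacity.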
Pressurization and expulsion of cryogenic liquids: Generic requirements for a low gravity experiment
NASA Technical Reports Server (NTRS)
Vandresar, Neil T.; Stochl, Robert J.
1991-01-01
Requirements are presented for an experiment designed to obtain data for the pressurization and expulsion of a cryogenic supply tank in a low gravity environment. These requirements are of a generic nature and applicable to any cryogenic fluid of interest, condensible or non-condensible pressurants, and various low gravity test platforms such as the Space Shuttle or a free-flyer. Background information, the thermophysical process, preliminary analytical modeling, and experimental requirements are discussed. Key parameters, measurements, hardware requirements, procedures, a test matrix, and data analysis are outlined.
Optical components damage parameters database system
NASA Astrophysics Data System (ADS)
Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong
2012-10-01
Optical components are key elements of large-scale laser devices, and their load (damage) capacity is directly related to the device's output capability; this load capacity depends on many factors. By digitizing the factors affecting load capacity in an optical-component damage-parameter database, the system provides scientific data support for assessing the load capacity of optical components. Using business-process and model-driven approaches, a component damage-parameter information model and a database system are established. Application results show that the system meets the business-process and data-management requirements of optical-component damage testing; parameter configuration is flexible, and the system is simple and easy to use, improving the efficiency of optical-component damage testing.
Partially Turboelectric Aircraft Drive Key Performance Parameters
NASA Technical Reports Server (NTRS)
Jansen, Ralph H.; Duffy, Kirsten P.; Brown, Gerald V.
2017-01-01
The purpose of this paper is to propose electric drive specific power, electric drive efficiency, and electrical propulsion fraction as the key performance parameters for a partially turboelectric aircraft power system and to investigate their impact on the overall aircraft performance. Breguet range equations for a base conventional turbofan aircraft and a partially turboelectric aircraft are found. The benefits and costs that may result from the partially turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency, for a given electrical propulsion fraction, that can preserve the range, fuel weight, operating empty weight, and payload weight of the conventional aircraft. Current and future power system performance is compared to the required performance to determine the potential benefit.
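The break-even idea can be sketched with the jet-aircraft Breguet range equation; all numbers below (speed, TSFC, L/D, weights, drive power) are invented for illustration and are not the paper's values.

```python
import math

def breguet_range_km(v_kmh, tsfc_per_h, l_over_d, w_initial, w_final):
    """Jet-aircraft Breguet range: R = (V / TSFC) * (L/D) * ln(Wi / Wf)."""
    return (v_kmh / tsfc_per_h) * l_over_d * math.log(w_initial / w_final)

# Base conventional aircraft (all numbers invented for illustration).
base_range = breguet_range_km(v_kmh=850, tsfc_per_h=0.55, l_over_d=17,
                              w_initial=80000, w_final=60000)

def turboelectric_range_km(drive_power_kw, specific_power_kw_per_kg, eta):
    """Turboelectric variant at the same takeoff weight: the electric
    drive's mass adds to the final weight, and its efficiency eta inflates
    the effective TSFC; the layout's aero/propulsive benefits, which the
    paper weighs against these costs, are omitted in this sketch."""
    drive_mass_kg = drive_power_kw / specific_power_kw_per_kg
    return breguet_range_km(850, 0.55 / eta, 17, 80000, 60000 + drive_mass_kg)

# Improving specific power (kW/kg) and efficiency closes the range gap:
r_low = turboelectric_range_km(10000, 3.0, 0.90)
r_high = turboelectric_range_km(10000, 13.0, 0.99)
```

Sweeping specific power and efficiency until the turboelectric range matches the base range is the break-even calculation described in the abstract.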
Galileo Station Keeping Strategy
NASA Technical Reports Server (NTRS)
Perez-Cambriles, Antonio; Bejar-Romero, Juan Antonio; Aguilar-Taboada, Daniel; Perez-Lopez, Fernando; Navarro, Daniel
2007-01-01
This paper presents analyses done for the design and implementation of the Maneuver Planning software of the Galileo Flight Dynamics Facility. The station keeping requirements of the constellation have been analyzed in order to identify the key parameters to be taken into account in the design and implementation of the software.
Means and Method for Measurement of Drilling Fluid Properties
NASA Astrophysics Data System (ADS)
Lysyannikov, A.; Kondrashov, P.; Pavlova, P.
2016-06-01
The paper addresses the problem of creating a new design of device for determining the rheological parameters of drilling fluids, and the basic requirements this device must meet. The key quantitative parameters that define the developed device are provided. The algorithm for determining the yield-point coefficient from the rheological Shvedov–Bingham model, at relative rotation speeds of the measuring cylinders in the investigated drilling fluid of 300 and 600 rpm, is presented.
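For a conventional two-speed rotational viscometer, the Bingham-plastic field equations from the 600 rpm and 300 rpm dial readings are standard practice; a sketch (the dial readings are made up, and the paper's Shvedov-Bingham algorithm may differ in detail):

```python
def bingham_parameters(theta_600, theta_300):
    """Standard field equations for a two-speed rotational viscometer:
    plastic viscosity (cP) and Bingham yield point (lb/100 ft^2) from
    the 600 rpm and 300 rpm dial readings."""
    pv = theta_600 - theta_300   # plastic viscosity, cP
    yp = theta_300 - pv          # Bingham yield point, lb/100 ft^2
    return pv, yp

pv, yp = bingham_parameters(theta_600=64, theta_300=40)
# pv = 24 cP, yp = 16 lb/100 ft^2
```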
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavgorodnya, Oleksandra; Shamshina, Julia L.; Bonner, Jonathan R.
Here, we report the correlation between key solution properties and spinnability of chitin from the ionic liquid (IL) 1-ethyl-3-methylimidazolium acetate ([C2mim][OAc]), and the similarities and differences to electrospinning solutions of non-ionic polymers in volatile organic compounds (VOCs). We found that when electrospinning is conducted from ILs, conductivity and surface tension are not the key parameters regulating spinnability, while solution viscosity and polymer concentration are. In contrast, for electrospinning of polymers from VOCs, solution conductivity and viscosity have been reported to be among the most important factors controlling fiber formation. For chitin electrospun from [C2mim][OAc], we found both a critical chitin concentration required for continuous fiber formation (> 0.20 wt%) and a required viscosity for the spinning solution (between ca. 450 – 1500 cP). The high viscosities of the biopolymer-IL solutions made it possible to electrospin solutions with low, less than 1 wt%, polymer concentrations and produce thin fibers without the need to adjust the electrospinning parameters. These results suggest new prospects for the control of fiber architecture in non-woven mats, which is crucial for materials performance.
2017-04-27
Practical quantum key distribution protocol without monitoring signal disturbance.
Sasaki, Toshihiko; Yamamoto, Yoshihisa; Koashi, Masato
2014-05-22
Quantum cryptography exploits the fundamental laws of quantum mechanics to provide a secure way to exchange private information. Such an exchange requires a common random bit sequence, called a key, to be shared secretly between the sender and the receiver. The basic idea behind quantum key distribution (QKD) has widely been understood as the property that any attempt to distinguish encoded quantum states causes a disturbance in the signal. As a result, implementation of a QKD protocol involves an estimation of the experimental parameters influenced by the eavesdropper's intervention, which is achieved by randomly sampling the signal. If the estimation of many parameters with high precision is required, the portion of the signal that is sacrificed increases, thus decreasing the efficiency of the protocol. Here we propose a QKD protocol based on an entirely different principle. The sender encodes a bit sequence onto non-orthogonal quantum states and the receiver randomly dictates how a single bit should be calculated from the sequence. The eavesdropper, who is unable to learn the whole of the sequence, cannot guess the bit value correctly. An achievable rate of secure key distribution is calculated by considering complementary choices between quantum measurements of two conjugate observables. We found that a practical implementation using a laser pulse train achieves a key rate comparable to a decoy-state QKD protocol, an often-used technique for lasers. It also has a better tolerance of bit errors and of finite-sized-key effects. We anticipate that this finding will give new insight into how the probabilistic nature of quantum mechanics can be related to secure communication, and will facilitate the simple and efficient use of conventional lasers for QKD.
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reductions of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
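A bare-bones, single-process sketch of the scatter-search idea (reference set, pairwise combination, greedy replacement) applied to a toy one-parameter kinetic model; it omits the paper's cooperation, parallelism, and self-tuning mechanisms entirely, and all numbers are invented.

```python
import random

def simulate(k, y0=1.0, dt=0.1, steps=50):
    """Euler integration of dy/dt = -k*y, a toy stand-in for a kinetic model."""
    y, traj = y0, []
    for _ in range(steps):
        y += dt * (-k * y)
        traj.append(y)
    return traj

def sse(k, data):
    """Sum-of-squares mismatch between simulation and observed trajectory."""
    return sum((a - b) ** 2 for a, b in zip(simulate(k), data))

def scatter_search(data, bounds=(0.0, 5.0), pop=10, iters=100, seed=0):
    """Keep a small reference set sorted by fit; repeatedly combine two of
    the best members (midpoint plus noise) and greedily keep the best pop."""
    rng = random.Random(seed)
    lo, hi = bounds
    refset = sorted((rng.uniform(lo, hi) for _ in range(pop)),
                    key=lambda k: sse(k, data))
    for _ in range(iters):
        a, b = rng.sample(refset[:5], 2)
        child = min(max((a + b) / 2 + rng.gauss(0, 0.1), lo), hi)
        refset.append(child)
        refset = sorted(refset, key=lambda k: sse(k, data))[:pop]
    return refset[0]

data = simulate(0.7)          # synthetic "measurements" with true k = 0.7
k_hat = scatter_search(data)  # should land near the true rate constant
```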
Systems Analysis of the Hydrogen Transition with HyTrans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leiby, Paul Newsome; Greene, David L; Bowman, David Charles
2007-01-01
The U.S. Federal government is carefully considering the merits and long-term prospects of hydrogen-fueled vehicles. NAS (1) has called for the careful application of systems analysis tools to structure the complex assessment required. Others, raising cautionary notes, question whether a consistent and plausible transition to hydrogen light-duty vehicles can be identified (2) and whether that transition would, on balance, be environmentally preferred. Modeling the market transition to hydrogen-powered vehicles is an inherently complex process, encompassing hydrogen production, delivery and retailing, vehicle manufacturing, and vehicle choice and use. We describe the integration of key technological and market factors in a dynamic transition model, HyTrans. The usefulness of HyTrans and its predictions depends on three key factors: (1) the validity of the economic theories that underpin the model, (2) the authenticity with which the key processes are represented, and (3) the accuracy of specific parameter values used in the process representations. This paper summarizes the theoretical basis of HyTrans, and highlights the implications of key parameter specifications with sensitivity analysis.
NASA Technical Reports Server (NTRS)
Parsons, C. L. (Editor)
1989-01-01
The Multimode Airborne Radar Altimeter (MARA), a flexible airborne radar remote sensing facility developed by NASA's Goddard Space Flight Center, is discussed. This volume describes the scientific justification for the development of the instrument and the translation of these scientific requirements into instrument design goals. Values for key instrument parameters are derived to accommodate these goals, and simulations and analytical models are used to estimate the developed system's performance.
Butterfly Encryption Scheme for Resource-Constrained Wireless Networks †
Sampangi, Raghav V.; Sampalli, Srinivas
2015-01-01
Resource-constrained wireless networks are emerging networks such as Radio Frequency Identification (RFID) and Wireless Body Area Networks (WBAN) that might have restrictions on the available resources and the computations that can be performed. These emerging technologies are increasing in popularity, particularly in defence, anti-counterfeiting, logistics and medical applications, and in consumer applications with growing popularity of the Internet of Things. With communication over wireless channels, it is essential to focus attention on securing data. In this paper, we present an encryption scheme called Butterfly encryption scheme. We first discuss a seed update mechanism for pseudorandom number generators (PRNG), and employ this technique to generate keys and authentication parameters for resource-constrained wireless networks. Our scheme is lightweight, as it requires fewer resources when implemented, and offers high security through increased unpredictability, owing to continuously changing parameters. Our work focuses on accomplishing high security through simplicity and reuse. We evaluate our encryption scheme using simulation, key similarity assessment, key sequence randomness assessment, protocol analysis and security analysis. PMID:26389899
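The seed-update idea can be illustrated with a hash-chain sketch. This is an assumption-laden stand-in, not the paper's actual Butterfly construction; the function names and labels are invented.

```python
import hashlib

def update_seed(seed: bytes, counter: int) -> bytes:
    """Hash-based seed update (illustrative stand-in for the paper's PRNG
    seed-update mechanism): each step derives a fresh seed, so the state
    keeps changing and past seeds cannot be read off the current one."""
    return hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()

def derive_key(seed: bytes, label: bytes = b"key") -> bytes:
    """Derive a session key from the current seed without exposing the seed."""
    return hashlib.sha256(label + seed).digest()

seed = b"initial-shared-secret"   # assumed pre-shared between the two devices
keys = []
for i in range(3):
    seed = update_seed(seed, i)   # continuously changing parameters
    keys.append(derive_key(seed))
# Each derived key differs, and only hashing is needed on the device,
# which suits resource-constrained RFID/WBAN nodes.
```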
Butterfly Encryption Scheme for Resource-Constrained Wireless Networks.
Sampangi, Raghav V; Sampalli, Srinivas
2015-09-15
Resource-constrained wireless networks are emerging networks such as Radio Frequency Identification (RFID) and Wireless Body Area Networks (WBAN) that might have restrictions on the available resources and the computations that can be performed. These emerging technologies are increasing in popularity, particularly in defence, anti-counterfeiting, logistics and medical applications, and in consumer applications with growing popularity of the Internet of Things. With communication over wireless channels, it is essential to focus attention on securing data. In this paper, we present an encryption scheme called Butterfly encryption scheme. We first discuss a seed update mechanism for pseudorandom number generators (PRNG), and employ this technique to generate keys and authentication parameters for resource-constrained wireless networks. Our scheme is lightweight, as it requires fewer resources when implemented, and offers high security through increased unpredictability, owing to continuously changing parameters. Our work focuses on accomplishing high security through simplicity and reuse. We evaluate our encryption scheme using simulation, key similarity assessment, key sequence randomness assessment, protocol analysis and security analysis.
Wang, Baosheng; Tao, Jing
2018-01-01
Revocation functionality and hierarchical key delegation are two necessary and crucial requirements for identity-based cryptosystems. Revocable hierarchical identity-based encryption (RHIBE) has attracted a lot of attention in recent years; many RHIBE schemes have been proposed but shown to be either insecure or bounded, in that they must fix the maximum hierarchical depth of the RHIBE at setup. In this paper, we propose a new unbounded RHIBE scheme with decryption key exposure resilience and short public system parameters, and prove our RHIBE scheme to be adaptively secure. Our system model is inherently scalable, accommodating more levels of users adaptively without added workload or restarting the system. By carefully designing the hybrid games, we overcome the subtle obstacle in applying the dual system encryption methodology to unbounded and revocable HIBE. To the best of our knowledge, this is the first construction of an adaptively secure unbounded RHIBE scheme. PMID:29649326
NASA Astrophysics Data System (ADS)
Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz
2017-08-01
Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized, and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system, taking into consideration the inherent variability of the process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space) based on the developed surrogate model that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space.
The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii) optimize process parameters under competing quality requirements such as maximizing the dimple height while minimizing the dimple lower surface area.
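The fallout-rate estimation in step (ii) can be sketched by propagating input variation through a surrogate model. The linear surrogate and every coefficient below are invented toys standing in for the paper's fitted multivariate adaptive regression splines model.

```python
import random

def surrogate_dimple_height(power, speed):
    """Toy stand-in for the fitted surrogate mapping key control
    characteristics to a key performance indicator (dimple height, mm);
    the coefficients are invented, not fitted to any experiment."""
    return 0.05 * power - 0.02 * speed + 0.3

def fallout_rate(power, speed, spec=(0.25, 0.45), n=20000, seed=7):
    """Process-capability sketch: perturb the nominal inputs with assumed
    Gaussian process variation and count the fraction of dimples whose
    predicted height violates the specification limits."""
    rng = random.Random(seed)
    lo, hi = spec
    bad = 0
    for _ in range(n):
        h = surrogate_dimple_height(rng.gauss(power, 0.2),
                                    rng.gauss(speed, 0.5))
        if not (lo <= h <= hi):
            bad += 1
    return bad / n

rate = fallout_rate(power=2.0, speed=5.0)  # nominal point well inside spec
```

Sweeping the nominal (power, speed) point and keeping those with an acceptable rate traces out a capability region analogous to the paper's Cp-Space.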
A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.
Faya, Paul; Stamey, James D; Seaman, John W
2017-01-01
For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known DT, z, and F0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance.
An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
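The F0 value referenced above has a standard closed form: it accumulates equivalent sterilization minutes at 121.1 °C, conventionally with a z-value of 10 °C. A sketch (the temperature profiles are illustrative):

```python
def f0_value(temps_c, dt_min=1.0, z=10.0, t_ref=121.1):
    """Accumulated lethality F0 (equivalent minutes at 121.1 C) over a
    temperature profile sampled every dt_min minutes:
    F0 = sum over samples of dt * 10**((T - T_ref) / z)."""
    return sum(dt_min * 10 ** ((t - t_ref) / z) for t in temps_c)

f0_hold = f0_value([121.1] * 15)   # holding exactly 121.1 C for 15 min
f0_cool = f0_value([111.1] * 10)   # 10 C below T_ref contributes 10x less
```

The Bayesian treatment in the paper replaces the point-estimate resistance parameters feeding this calculation (D and z) with posterior distributions, so F0 itself acquires a distribution.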
USDA-ARS?s Scientific Manuscript database
Extreme hydrological processes are often very dynamic and destructive.A better understanding of these processes requires an accurate mapping of key variables that control them. In this regard, soil moisture is perhaps the most important parameter that impacts the magnitude of flooding events as it c...
Deep ocean corrosion research in support of Oman India gas pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, F.W.; McKeehan, D.S.
1995-12-01
The increasing interest in deepwater exploration and production has motivated the development of technologies required to accomplish tasks heretofore possible only onshore and in shallow water. The tremendous expense of technology development and the cost of specialized equipment have created concerns that the design life of these facilities may be compromised by corrosion. Developing and proving design parameters to meet these demands will require an ongoing program of environmental testing and materials evaluation and development. This paper describes a two-fold corrosion testing program involving: (1) the installation of two corrosion test devices in-situ, and (2) a laboratory test conducted in simulated site-specific seawater. These tests are expected to qualify key parameters necessary to design a cathodic protection system to protect the Oman-to-India pipeline.
Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio;
2016-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
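For a toy one-parameter linear model, the optimal-input idea above reduces to maximizing the input energy seen by the parameter, equivalently minimizing the Cramér-Rao variance bound. Everything below (model, noise level, candidate frequencies) is an invented illustration, not the LISA Pathfinder design problem.

```python
import math

def param_variance(freq, times, sigma=0.1):
    """For a toy model y(t) = g * sin(2*pi*freq*t) + noise, the Cramér-Rao
    bound on the gain g is sigma^2 / sum_t u(t)^2: the variance of the
    estimate shrinks as the injected signal energy grows."""
    energy = sum(math.sin(2 * math.pi * freq * t) ** 2 for t in times)
    return sigma ** 2 / energy

times = [i * 0.1 for i in range(1, 101)]  # assumed 10 s record, 10 Hz sampling
# Optimal design: pick the injection frequency minimizing parameter variance
# over a candidate grid (0.1 to 4.9 Hz).
best = min((f / 10 for f in range(1, 50)),
           key=lambda f: param_variance(f, times))
```

The paper's framework does this jointly for many coupled parameters, where the scalar variance becomes a Fisher information matrix and a scalar criterion on it is optimized.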
Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.
2014-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
Ma, Jian; Bai, Bing; Wang, Liu-Jun; Tong, Cun-Zhu; Jin, Ge; Zhang, Jun; Pan, Jian-Wei
2016-09-20
InGaAs/InP single-photon avalanche diodes (SPADs) are widely used in practical applications requiring near-infrared photon counting, such as quantum key distribution (QKD). Photon detection efficiency and dark count rate are intrinsic parameters of InGaAs/InP SPADs, since their performance cannot be improved using different quenching electronics under the same operating conditions. After modeling these parameters and developing a simulation platform for InGaAs/InP SPADs, we investigate the semiconductor structure design and optimization. Photon detection efficiency and dark count rate depend strongly on the absorption layer thickness, multiplication layer thickness, excess bias voltage, and temperature. By evaluating the decoy-state QKD performance, the variables for SPAD design and operation can be globally optimized. Such optimization from the perspective of specific applications can provide an effective approach to design high-performance InGaAs/InP SPADs.
Parameter Estimation for Viscoplastic Material Modeling
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Gendy, Atef S.; Wilt, Thomas E.
1997-01-01
A key ingredient in the design of engineering components and structures under general thermomechanical loading is the use of mathematical constitutive models (e.g. in finite element analysis) capable of accurate representation of short and long term stress/deformation responses. In addition to the ever-increasing complexity of recent viscoplastic models of this type, they often also require a large number of material constants to describe a host of (anticipated) physical phenomena and complicated deformation mechanisms. In turn, the experimental characterization of these material parameters constitutes the major factor in the successful and effective utilization of any given constitutive model; i.e., the problem of constitutive parameter estimation from experimental measurements.
The art and science of missile defense sensor design
NASA Astrophysics Data System (ADS)
McComas, Brian K.
2014-06-01
A Missile Defense Sensor is a complex optical system, which sits idle for long periods of time, must work with little or no on-board calibration, be used to find and discriminate targets, and guide the kinetic warhead to the target within minutes of launch. A short overview of the Missile Defense problem is given here, along with the top-level performance drivers, like Noise Equivalent Irradiance (NEI), Acquisition Range, and Dynamic Range. These top-level parameters influence the choice of optical system, mechanical system, focal plane array (FPA), Read Out Integrated Circuit (ROIC), and cryogenic system. This paper discusses not only the physics behind the performance of the sensor, but also the "art" of optimizing the performance of the sensor given the top-level performance parameters. Balancing the sensor sub-systems is key to the sensor's performance in these highly stressful missions. Top-level performance requirements impact the choice of lower-level hardware and requirements. The flow down of requirements to the lower-level hardware will be discussed. This flow down directly impacts the FPA, where careful selection of the detector is required. It also influences the ROIC and cooling requirements. The key physics behind the detector and cryogenic system interactions will be discussed, along with the balancing of subsystem performance. Finally, the overall system balance and optimization will be discussed in the context of missile defense sensors and the expected performance of the overall kinetic warhead.
Improved key-rate bounds for practical decoy-state quantum-key-distribution systems
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng
2017-01-01
The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.
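The gap described above can be made concrete for a single binomial count. A sketch, with invented block size and failure probability, using the textbook multiplicative Chernoff bound against a crude Gaussian tail quantile:

```python
import math

def chernoff_upper_dev(mu, eps):
    """Multiplicative Chernoff bound: P(X >= (1+d)*mu) <= exp(-d^2*mu/3).
    Setting the right side to eps gives d = sqrt(3*ln(1/eps)/mu), so the
    count deviates upward by at most d*mu except with probability eps."""
    return math.sqrt(3 * math.log(1 / eps) / mu) * mu

def gaussian_dev(mu, eps):
    """Gaussian heuristic: deviation ~ z*sqrt(mu), with z taken from the
    crude one-sided tail bound exp(-z^2/2) = eps."""
    return math.sqrt(2 * math.log(1 / eps)) * math.sqrt(mu)

mu, eps = 1e6 * 0.01, 1e-10   # expected detections and failure probability
dev_chernoff = chernoff_upper_dev(mu, eps)
dev_gaussian = gaussian_dev(mu, eps)
# The rigorous Chernoff interval is wider (sqrt(3) vs sqrt(2) prefactor);
# that width gap is what the tighter analysis in this work narrows.
```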
Boom Minimization Framework for Supersonic Aircraft Using CFD Analysis
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Rallabhandi, Sriram K.
2010-01-01
A new framework is presented for shape optimization using analytical shape functions and high-fidelity computational fluid dynamics (CFD) via Cart3D. The focus of the paper is the system-level integration of several key enabling analysis tools and automation methods to perform shape optimization and reduce sonic boom footprint. A boom mitigation case study subject to performance, stability and geometrical requirements is presented to demonstrate a subset of the capabilities of the framework. Lastly, a design space exploration is carried out to assess the key parameters and constraints driving the design.
Assaying Mitochondrial Respiration as an Indicator of Cellular Metabolism and Fitness.
Smolina, Natalia; Bruton, Joseph; Kostareva, Anna; Sejersen, Thomas
2017-01-01
Mitochondrial respiration is the most important generator of cellular energy under most circumstances. It is a process of energy conversion of substrates into ATP. The Seahorse equipment allows measuring the oxygen consumption rate (OCR) in living cells and estimates key parameters of mitochondrial respiration in real-time mode. Through the use of mitochondrial inhibitors, four key mitochondrial respiration parameters can be measured: basal, ATP production-linked, maximal, and proton leak-linked OCR. This approach requires the application of mitochondrial inhibitors: oligomycin, to block ATP synthase; FCCP, to make the inner mitochondrial membrane permeable to protons and allow maximum electron flux through the electron transport chain; and rotenone and antimycin A, to inhibit complexes I and III, respectively. This chapter describes the protocol of OCR assessment in cultures of primary myotubes obtained upon satellite cell fusion.
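The arithmetic behind the four parameters is simple once the OCR plateaus around each injection are known; a sketch with invented measurement values:

```python
def mito_respiration_params(basal, post_oligomycin, post_fccp, post_rot_aa):
    """Derive the four key respiration parameters from mean OCR plateaus
    measured before/after each inhibitor injection (all in pmol O2/min).
    The rotenone + antimycin A plateau gives the non-mitochondrial floor
    that is subtracted out."""
    non_mito = post_rot_aa
    return {
        "basal": basal - non_mito,                 # basal respiration
        "atp_linked": basal - post_oligomycin,     # ATP production-linked
        "maximal": post_fccp - non_mito,           # maximal (uncoupled)
        "proton_leak": post_oligomycin - non_mito, # proton leak-linked
    }

params = mito_respiration_params(basal=200, post_oligomycin=80,
                                 post_fccp=350, post_rot_aa=30)
# -> basal 170, ATP-linked 120, maximal 320, proton leak 50
```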
Asymmetric cryptography based on wavefront sensing.
Peng, Xiang; Wei, Hengzheng; Zhang, Peng
2006-12-15
A system of asymmetric cryptography based on wavefront sensing (ACWS) is proposed for the first time to our knowledge. One of the most significant features of the asymmetric cryptography is that a trapdoor one-way function is required and constructed by analogy to wavefront sensing, in which the public key may be derived from optical parameters, such as the wavelength or the focal length, while the private key may be obtained from a kind of regular point array. The ciphertext is generated by the encoded wavefront and represented with an irregular array. In such an ACWS system, the encryption key is not identical to the decryption key, which is another important feature of an asymmetric cryptographic system. The processes of asymmetric encryption and decryption are formulized mathematically and demonstrated with a set of numerical experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barron, Robert W.; McJeon, Haewon C.
2015-05-01
This paper considers the effect of several key parameters of low carbon energy technologies on the cost of abatement. A methodology for determining the minimum level of performance required for a parameter to have a statistically significant impact on CO2 abatement cost is developed and used to evaluate the impact of eight key parameters of low carbon energy supply technologies on the cost of CO2 abatement. The capital cost of nuclear technology is found to have the greatest impact of the parameters studied. The costs of biomass and CCS technologies also have impacts, while their efficiencies have little, if any. Sensitivity analysis of the results with respect to population, GDP, and the CO2 emission constraint shows that the minimum performance level and impact of nuclear technologies are consistent across the socioeconomic scenarios studied, while the other technology parameters show different performance under higher-population, lower-GDP scenarios. Solar technology was found to have a small impact, and then only at very low costs. These results indicate that the cost of nuclear is the single most important driver of abatement cost, and that trading efficiency for cost may make biomass and CCS technologies more competitive.
NASA Astrophysics Data System (ADS)
Liu, Weiqi; Huang, Peng; Peng, Jinye; Fan, Jianping; Zeng, Guihua
2018-02-01
For practical quantum key distribution (QKD), it is critical to stabilize the physical parameters of signals, e.g., the intensity, phase, and polarization of the laser signals, so that such QKD systems can achieve better performance and practical security. In this paper, an approach is developed that integrates a support vector regression (SVR) model to optimize the performance and practical security of the QKD system. First, an SVR model is learned to precisely predict the temporal evolution of the physical parameters of signals. Second, the predicted evolutions are employed as feedback to control the QKD system so that it achieves optimal performance and practical security. Finally, our proposed approach is exemplified using the intensity evolution of the laser light and local oscillator pulse in a Gaussian-modulated coherent state QKD system. Our experimental results demonstrate three significant benefits of the SVR-based approach: (1) it allows the QKD system to achieve optimal performance and practical security, (2) it does not require any additional resources or a real-time monitoring module to support automatic prediction of the temporal evolution of the physical parameters of signals, and (3) it is applicable to any measurable physical parameter of signals in a practical QKD system.
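As a concrete illustration of the feedback idea described above, the following sketch trains an SVR model on the recent history of a slowly drifting signal parameter and forecasts it one step ahead; the data, lag length, and SVR hyperparameters are all hypothetical choices, not those of the paper.

```python
# Hypothetical sketch (not the paper's implementation): forecast a drifting
# signal parameter one step ahead with support vector regression.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 400)
intensity = 1.0 + 0.05 * np.sin(0.8 * t) + 0.002 * rng.standard_normal(t.size)

# Autoregressive features: predict the next sample from the previous 5.
lag = 5
X = np.stack([intensity[i:i + lag] for i in range(t.size - lag)])
y = intensity[lag:]

split = 300                                     # train on the past only
model = SVR(kernel="rbf", C=10.0, epsilon=1e-3).fit(X[:split], y[:split])
pred = model.predict(X[split:])                 # one-step-ahead forecasts

rms = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
print(f"one-step RMS forecast error: {rms:.4f}")
```

In a feedback setting, each forecast would drive a correction (e.g., an attenuator setting) before the next pulse is sent.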
What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.
2012-12-01
A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.
Ethnographic field work in requirements engineering
NASA Astrophysics Data System (ADS)
Reddivari, Sandeep; Asaithambi, Asai; Niu, Nan; Wang, Wentao; Xu, Li Da; Cheng, Jing-Ru C.
2017-01-01
Requirements engineering (RE) processes have become key to developing and deploying enterprise information systems (EIS) for organisations and corporations in various fields and industrial sectors. Ethnography is a contextual method allowing scientific description of the stakeholders, their needs and their organisational customs. Despite the recognition in the RE literature that ethnography could be helpful, actual leverage of the method has been limited and ad hoc. To overcome these problems, we report in this paper a systematic mapping study in which the relevant literature is examined. Building on the literature review, we further identify key parameters, their variations and their connections. The improved understanding of the role of ethnography in EIS RE is then presented in a consolidated model, and guidelines on how to apply ethnography are organised by the key factors uncovered. Our study can direct researchers towards a thorough understanding of the role that ethnography plays in EIS RE and, more importantly, help practitioners better integrate contextually rich and ecologically valid methods into their daily practices.
Improving security of the ping-pong protocol
NASA Astrophysics Data System (ADS)
Zawadzki, Piotr
2013-01-01
A security layer for the asymptotically secure ping-pong protocol is proposed and analyzed in the paper. The operation of the improvement exploits inevitable errors introduced by the eavesdropping in the control and message modes. Its role is similar to the privacy amplification algorithms known from the quantum key distribution schemes. Messages are processed in blocks which guarantees that an eavesdropper is faced with a computationally infeasible problem as long as the system parameters are within reasonable limits. The introduced additional information preprocessing does not require quantum memory registers and confidential communication is possible without prior key agreement or some shared secret.
Design of Diaphragm and Coil for Stable Performance of an Eddy Current Type Pressure Sensor.
Lee, Hyo Ryeol; Lee, Gil Seung; Kim, Hwa Young; Ahn, Jung Hwan
2016-07-01
The aim of this work was to develop an eddy current type pressure sensor and investigate its fundamental characteristics as affected by the mechanical and electrical design parameters of the sensor. The sensor has two key components, i.e., the diaphragm and the coil. Given that the outer diameter of the sensor is 10 mm, both key parts must be designed to maintain good linearity and sensitivity. Experiments showed that aluminum is the best target material for eddy current detection. A round-grooved diaphragm is suggested in order to measure its deflection under applied pressures more precisely. The design parameters of a round-grooved diaphragm can be selected depending on the measuring requirements. A developed pressure sensor with a diaphragm of t = 0.2 mm and w = 1.05 mm was verified to measure pressures up to 10 MPa with very good linearity and errors of less than 0.16%.
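For intuition on how diaphragm geometry drives deflection and hence sensitivity, the following sketch evaluates the classical small-deflection formula for a flat, edge-clamped circular diaphragm under uniform pressure; this is textbook plate theory, not the round-grooved geometry the paper develops, and the material values are illustrative.

```python
# Classical small-deflection result for a flat, edge-clamped circular
# diaphragm under uniform pressure -- illustrative values, not the
# paper's round-grooved design.
def center_deflection(p, a, t, E, nu):
    """p: pressure [Pa], a: radius [m], t: thickness [m],
    E: Young's modulus [Pa], nu: Poisson's ratio."""
    D = E * t**3 / (12.0 * (1.0 - nu**2))     # flexural rigidity [N*m]
    return p * a**4 / (64.0 * D)              # center deflection [m]

# Aluminum-like diaphragm: 2 mm radius, 0.2 mm thick, 10 MPa applied.
w0 = center_deflection(p=10e6, a=2e-3, t=0.2e-3, E=70e9, nu=0.33)
print(f"center deflection: {w0 * 1e6:.1f} um")
```

The strong a⁴/t³ dependence is why small changes to radius or thickness dominate the sensor's linearity/sensitivity trade-off.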
Will Quantitative Proteomics Redefine Some of the Key Concepts in Skeletal Muscle Physiology?
Gizak, Agnieszka; Rakus, Dariusz
2016-01-11
Molecular and cellular biology methodology is traditionally based on the reasoning called "the mechanistic explanation". In practice, this means identifying and selecting correlations between biological processes which result from our manipulation of a biological system. In theory, a successful application of this approach requires precise knowledge about all parameters of the studied system. However, in practice, due to the systems' complexity, this requirement is rarely, if ever, met. Typically, it is limited to quantitative or semi-quantitative measurements of selected parameters (e.g., concentrations of some metabolites) and a qualitative or semi-quantitative description of changes in the expression/post-translational modifications of selected proteins. A quantitative proteomics approach offers the possibility of quantitatively characterizing the entire proteome of a biological system, in terms of both protein titer and post-translational modifications. This enables not only more accurate testing of novel hypotheses but also provides tools that can be used to verify some of the most fundamental dogmas of modern biology. In this short review, we discuss some of the consequences of using quantitative proteomics to verify several key concepts in skeletal muscle physiology.
NASA Astrophysics Data System (ADS)
Jindal, Sumit Kumar; Mahajan, Ankush; Raghuwanshi, Sanjeev Kumar
2017-10-01
An analytical model and numerical simulation of the performance of MEMS capacitive pressure sensors in both normal and touch modes are required to predict the expected behavior of the sensor prior to fabrication. Obtaining such information should be based on a complete analysis of performance parameters such as the deflection of the diaphragm, the change of capacitance when the diaphragm deflects, and the sensitivity of the sensor. In the literature, limited work has been carried out on this issue; moreover, due to the approximation factors of the polynomials used, tolerance errors cannot be ignored. Reliable pre-fabrication forecasting requires exact mathematical calculation of the parameters involved. A second-order polynomial equation is derived mathematically for the key performance parameters of both modes. This eliminates the approximation factor, and an exact result can be studied while maintaining high accuracy. The elimination of approximation factors and the approach of exact results are based on a new design parameter (δ) that we propose. The design parameter gives designers an initial hint of how the sensor will behave once it is fabricated. The complete work is supported by extensive mathematical detailing of all the parameters involved. Next, we verified our claims using MATLAB® simulation. Since MATLAB® effectively provides the simulation for this design approach, the more complicated finite element method is not needed.
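A minimal numerical sketch of the normal-mode behavior discussed above: the capacitance of a clamped circular diaphragm is integrated over the deflected gap, using the classical plate deflection profile. The dimensions, profile, and discretization are assumptions for illustration, not the paper's exact model or its proposed design parameter δ.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(w0, a, d, n=20000):
    """Normal-mode capacitance of a deflected, clamped circular diaphragm.
    w0: center deflection, a: radius, d: zero-pressure gap (all metres).
    Assumes the classical clamped-plate profile w(r) = w0*(1 - (r/a)^2)^2."""
    dr = a / n
    r = (np.arange(n) + 0.5) * dr                 # midpoint radii
    w = w0 * (1.0 - (r / a) ** 2) ** 2
    # Sum capacitance of annular rings of width dr over the deflected gap.
    return float(np.sum(2.0 * np.pi * r * EPS0 / (d - w)) * dr)

C0 = capacitance(0.0, 500e-6, 2e-6)               # undeflected reference
C1 = capacitance(0.5e-6, 500e-6, 2e-6)            # quarter-gap deflection
print(f"C0 = {C0*1e12:.3f} pF, dC = {(C1 - C0)*1e12:.3f} pF")
```

In touch mode the center of the diaphragm rests on an insulated floor, so the integrand changes in the contact region, but the same ring-integration approach applies.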
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
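The point-density effect described above can be sketched on synthetic data: compute a height metric from a simulated point cloud, then recompute it after random thinning. The synthetic cloud, the 0.3 m height threshold, and the metrics are hypothetical stand-ins for the paper's LiDAR processing.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "corn canopy" returns: ground near 0 m, canopy near 2 m.
n = 7320                                     # ~7.32 pts/m2 over 1000 m2
z = np.where(rng.random(n) < 0.4,
             rng.normal(0.0, 0.05, n),       # ground returns
             rng.normal(2.0, 0.25, n))       # canopy returns

def canopy_metrics(z, height_threshold=0.3):
    """95th height percentile and cover fraction above a height threshold."""
    veg = z[z > height_threshold]
    return float(np.percentile(veg, 95)), veg.size / z.size

h95_full, cover_full = canopy_metrics(z)
thinned = rng.choice(z, size=n // 4, replace=False)   # ~1.8 pts/m2
h95_thin, cover_thin = canopy_metrics(thinned)
print(f"h95 full {h95_full:.2f} m vs thinned {h95_thin:.2f} m")
```

As the abstract suggests, a moderate thinning changes the percentile-based height metric only slightly, while the height threshold choice directly controls which returns count as vegetation.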
[Investigation of Elekta linac characteristics for VMAT].
Luo, Guangwen; Zhang, Kunyi
2012-01-01
The aim of this study was to investigate the characteristics of the Elekta delivery system for volumetric modulated arc therapy (VMAT). Five VMAT plans were delivered in service mode, and the dose rates and the speeds of the gantry and MLC leaves were analyzed from log files. Results showed that the dose rate varied among six discrete dose-rate levels, and that the gantry and MLC leaf speeds varied dynamically during delivery. The VMAT technique requires the linac to dynamically control more parameters, and these key dynamic variables during VMAT delivery can be checked from log files. Quality assurance procedures should be carried out for these VMAT-related parameters.
On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases
NASA Astrophysics Data System (ADS)
Khodabandeh, A.; Teunissen, P. J. G.
2018-06-01
Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.
On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases
NASA Astrophysics Data System (ADS)
Khodabandeh, A.; Teunissen, P. J. G.
2017-11-01
Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.
Self-referenced continuous-variable quantum key distribution protocol
Soh, Daniel Beom Soo; Sarovar, Mohan; Brif, Constantin; ...
2015-10-21
We introduce a new continuous-variable quantum key distribution (CV-QKD) protocol, self-referenced CV-QKD, that eliminates the need for transmission of a high-power local oscillator between the communicating parties. In this protocol, each signal pulse is accompanied by a reference pulse (or a pair of twin reference pulses), used to align Alice’s and Bob’s measurement bases. The method of phase estimation and compensation based on the reference pulse measurement can be viewed as a quantum analog of intradyne detection used in classical coherent communication, which extracts the phase information from the modulated signal. We present a proof-of-principle, fiber-based experimental demonstration of the protocol and quantify the expected secret key rates by expressing them in terms of experimental parameters. Our analysis of the secret key rate fully takes into account the inherent uncertainty associated with the quantum nature of the reference pulse(s) and quantifies the limit at which the theoretical key rate approaches that of the respective conventional protocol that requires local oscillator transmission. The self-referenced protocol greatly simplifies the hardware required for CV-QKD, especially for potential integrated photonics implementations of transmitters and receivers, with minimum sacrifice of performance. Furthermore, it provides a pathway towards scalable integrated CV-QKD transceivers, a vital step towards large-scale QKD networks.
Self-Referenced Continuous-Variable Quantum Key Distribution Protocol
NASA Astrophysics Data System (ADS)
Soh, Daniel B. S.; Brif, Constantin; Coles, Patrick J.; Lütkenhaus, Norbert; Camacho, Ryan M.; Urayama, Junji; Sarovar, Mohan
2015-10-01
We introduce a new continuous-variable quantum key distribution (CV-QKD) protocol, self-referenced CV-QKD, that eliminates the need for transmission of a high-power local oscillator between the communicating parties. In this protocol, each signal pulse is accompanied by a reference pulse (or a pair of twin reference pulses), used to align Alice's and Bob's measurement bases. The method of phase estimation and compensation based on the reference pulse measurement can be viewed as a quantum analog of intradyne detection used in classical coherent communication, which extracts the phase information from the modulated signal. We present a proof-of-principle, fiber-based experimental demonstration of the protocol and quantify the expected secret key rates by expressing them in terms of experimental parameters. Our analysis of the secret key rate fully takes into account the inherent uncertainty associated with the quantum nature of the reference pulse(s) and quantifies the limit at which the theoretical key rate approaches that of the respective conventional protocol that requires local oscillator transmission. The self-referenced protocol greatly simplifies the hardware required for CV-QKD, especially for potential integrated photonics implementations of transmitters and receivers, with minimum sacrifice of performance. As such, it provides a pathway towards scalable integrated CV-QKD transceivers, a vital step towards large-scale QKD networks.
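The phase estimation and compensation step described in the abstract can be sketched numerically: the reference pulse's measured quadratures give the unknown channel phase, which is then undone on the signal quadratures. All amplitudes and noise levels below are illustrative, not experimental values.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_phase(i_ref, q_ref):
    """Phase offset estimated from measured reference-pulse quadratures."""
    return np.arctan2(q_ref, i_ref)

def compensate(i_sig, q_sig, theta):
    """Rotate signal quadratures back by the estimated phase offset."""
    c, s = np.cos(theta), np.sin(theta)
    return c * i_sig + s * q_sig, -s * i_sig + c * q_sig

# Alice's intended quadratures, rotated by an unknown channel phase.
theta_true = 0.7
x, p = 1.2, -0.4
i_sig = x * np.cos(theta_true) - p * np.sin(theta_true)
q_sig = x * np.sin(theta_true) + p * np.cos(theta_true)

# Brighter reference pulse measured with (illustrative) shot noise.
amp = 20.0
i_ref = amp * np.cos(theta_true) + rng.normal(0.0, 0.5)
q_ref = amp * np.sin(theta_true) + rng.normal(0.0, 0.5)

theta_est = estimate_phase(i_ref, q_ref)
x_hat, p_hat = compensate(i_sig, q_sig, theta_est)
print(f"recovered (x, p) = ({x_hat:.2f}, {p_hat:.2f})")
```

The residual phase error scales like the reference pulse's noise-to-amplitude ratio, which is why the paper's key-rate analysis must account for the quantum uncertainty of the reference pulse(s).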
Formability Analysis of Bamboo Fabric Reinforced Poly (Lactic) Acid Composites
M. R., Nurul Fazita; Jayaraman, Krishnan; Bhattacharyya, Debes
2016-01-01
Poly (lactic) acid (PLA) composites have made their way into various applications that may require thermoforming to produce 3D shapes. Wrinkles are common in many forming processes, and identifying the forming parameters that prevent them in the useful part of the mechanical component is a key consideration. Better prediction of such defects helps to significantly reduce the time required for the tooling design process. The purpose of the experiment discussed here is to investigate the effects of different test parameters on the occurrence of deformations during sheet forming of double curvature shapes with bamboo fabric reinforced PLA composites. The results demonstrated that the domes formed using hot tooling conditions were better in quality than those formed using cold tooling conditions. Wrinkles were more pronounced in the warp direction of the composite domes than in the weft direction. Grid Strain Analysis (GSA) identifies the regions of severe deformation and provides useful information regarding the optimisation of processing parameters. PMID:28773662
Trade Spaces in Crewed Spacecraft Atmosphere Revitalization System Development
NASA Technical Reports Server (NTRS)
Perry, Jay L.; Bagdigian, Robert M.; Carrasquillo, Robyn L.
2010-01-01
Developing the technological response to realizing an efficient atmosphere revitalization system for future crewed spacecraft and space habitats requires identifying and describing functional trade spaces. Mission concepts and requirements dictate the necessary functions; however, the combination and sequence of those functions possess significant flexibility. Using a closed loop environmental control and life support (ECLS) system architecture as a starting basis, a functional unit operations approach is developed to identify trade spaces. Generalized technological responses to each trade space are discussed. Key performance parameters that apply to functional areas are described.
Surface Nuclear Power for Human Mars Missions
NASA Technical Reports Server (NTRS)
Mason, Lee S.
1999-01-01
The Design Reference Mission for NASA's human mission to Mars indicates the desire for in-situ propellant production and bio-regenerative life support systems to ease Earth launch requirements. These operations, combined with crew habitation and science, result in surface power requirements approaching 160 kilowatts. The power system, delivered on an early cargo mission, must be deployed and operational prior to crew departure from Earth. The most mass-efficient means of satisfying these requirements is through the use of nuclear power. Studies have been performed to identify a potential system concept using a mobile cart to transport the power system away from the Mars lander and provide adequate separation between the reactor and crew. The studies included an assessment of reactor and power conversion technology options, selection of system and component redundancy, determination of optimum separation distance, and system performance sensitivity to some key operating parameters. The resulting system satisfies the key mission requirements, including autonomous deployment, high reliability, and cost effectiveness, at an overall system mass of 12 tonnes and a stowed volume of about 63 cu m.
ESR paper on structured reporting in radiology.
2018-02-01
Structured reporting is emerging as a key element of optimising radiology's contribution to patient outcomes and ensuring the value of radiologists' work. It is being developed and supported by many national and international radiology societies, based on the recognised need to use uniform language and structure to accurately describe radiology findings. Standardisation of report structures ensures that all relevant areas are addressed. Standardisation of terminology prevents ambiguity in reports and facilitates comparability of reports. The use of key data elements and quantified parameters in structured reports ("radiomics") permits automatic functions (e.g. TNM staging), potential integration with other clinical parameters (e.g. laboratory results), data sharing (e.g. registries, biobanks) and data mining for research, teaching and other purposes. This article outlines the requirements for a successful structured reporting strategy (definition of content and structure, standard terminologies, tools and protocols). A potential implementation strategy is outlined. Moving from conventional prose reports to structured reporting is endorsed as a positive development, and must be an international effort, with international design and adoption of structured reporting templates that can be translated and adapted in local environments as needed. Industry involvement is key to success, based on international data standards and guidelines. • Standardisation of radiology report structure ensures completeness and comparability of reports. • Use of standardised language in reports minimises ambiguity. • Structured reporting facilitates automatic functions, integration with other clinical parameters and data sharing. • International and inter-society cooperation is key to developing successful structured report templates. • Integration with industry providers of radiology-reporting software is also crucial.
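A toy example of the "automatic functions" that key data elements enable: a structured finding serialized as machine-readable data with a derived staging field. The staging thresholds below are invented purely for illustration and are NOT clinical TNM criteria.

```python
# Toy structured-report sketch -- field names and staging rules are
# invented for illustration, not a clinical standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class LesionFinding:
    organ: str
    size_mm: float
    invades_adjacent: bool

def t_category(f: LesionFinding) -> str:
    """Illustrative T-category rule (NOT real TNM criteria)."""
    if f.invades_adjacent:
        return "T4"
    if f.size_mm <= 20:
        return "T1"
    if f.size_mm <= 50:
        return "T2"
    return "T3"

finding = LesionFinding(organ="lung", size_mm=34.0, invades_adjacent=False)
report = {"finding": asdict(finding), "t_category": t_category(finding)}
print(json.dumps(report, indent=2))
```

Because the size is a quantified data element rather than free text, the derived category can be recomputed automatically, compared across reports, and exported to registries.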
Laser diode technology for coherent communications
NASA Technical Reports Server (NTRS)
Channin, D. J.; Palfrey, S. L.; Toda, M.
1989-01-01
The effect of diode laser characteristics on the overall performance capabilities of coherent communication systems is discussed. In particular, attention is given to optical performance issues for diode lasers in coherent systems, measurements of key performance parameters, and optical requirements for coherent single-channel and multichannel communication systems. The discussion also covers limitations imposed by diode laser optical performance on multichannel system capabilities and implications for future developments.
NASA Astrophysics Data System (ADS)
Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano
2018-05-01
We present a rigorous security analysis of continuous-variable measurement-device-independent quantum key distribution (CV MDI QKD) in a finite-size scenario. The security proof is obtained in two steps: by first assessing the security against collective Gaussian attacks, and then extending to the most general class of coherent attacks via the Gaussian de Finetti reduction. Our result combines recent state-of-the-art security proofs for CV QKD with findings about min-entropy calculus and parameter estimation. In doing so, we improve the finite-size estimate of the secret key rate. Our conclusions confirm that CV MDI protocols allow for high rates on the metropolitan scale, and may achieve a nonzero secret key rate against the most general class of coherent attacks after 10⁷–10⁹ quantum signal transmissions, depending on loss and noise, and on the required level of security.
Extending the performance of KrF laser for microlithography by using novel F2 control technology
NASA Astrophysics Data System (ADS)
Zambon, Paolo; Gong, Mengxiong; Carlesi, Jason; Padmabandu, Gunasiri G.; Binder, Mike; Swanson, Ken; Das, Palash P.
2000-07-01
Exposure tools for 248 nm lithography have reached a level of maturity comparable to those based on i-line. With this increase in maturity comes a concomitant requirement from process engineers for greater flexibility from the laser. Usually, these requirements pertain to energy, spectral width and repetition rate. By utilizing a combination of laser parameters, process engineers are often able to optimize throughput, reduce cost of operation or achieve greater process margin. Hitherto, such flexibility of laser operation was possible only via significant changes to various laser modules. During our investigation, we found that the key property of the laser that impacts the aforementioned parameters is its F2 concentration. By monitoring and controlling the laser's slope efficiency, its F2 concentration may be precisely controlled. Thus a laser may be tuned to operate under specifications as diverse as 7 mJ with Δλ_FWHM < 0.3 pm, or 10 mJ with Δλ_FWHM < 0.6 pm, and still meet the host of requirements necessary for lithography. We discuss this new F2 control technique and highlight some laser performance parameters.
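A hedged sketch of the control idea: if slope efficiency responds roughly linearly to F2 concentration near the operating point, a simple proportional feedback loop can hold it at a setpoint. The plant model, gain, and limits below are invented for illustration and are not the actual control algorithm.

```python
# Illustrative proportional feedback on relative F2 concentration,
# driven by the measured slope efficiency. All numbers are assumptions.
def f2_feedback(slope_measured, slope_setpoint, f2_now,
                gain=0.5, f2_min=0.8, f2_max=1.2):
    """Proportional update of relative F2 concentration, clamped to limits."""
    error = slope_setpoint - slope_measured
    return min(max(f2_now + gain * error, f2_min), f2_max)

# Toy plant: slope efficiency rises roughly linearly with F2 near nominal.
def plant(f2):
    return 0.9 + 0.8 * (f2 - 1.0)

f2 = 0.9                          # start below the nominal concentration
for _ in range(20):
    f2 = f2_feedback(plant(f2), slope_setpoint=0.9, f2_now=f2)
print(f"converged relative F2: {f2:.3f}")
```

With these toy numbers the loop contracts the error by a fixed factor each iteration, settling at the nominal concentration; a real controller would also handle injection quantization and gas aging.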
Requirements Flowdown for Prognostics and Health Management
NASA Technical Reports Server (NTRS)
Goebel, Kai; Saxena, Abhinav; Roychoudhury, Indranil; Celaya, Jose R.; Saha, Bhaskar; Saha, Sankalita
2012-01-01
Prognostics and Health Management (PHM) principles hold considerable promise for changing the lifecycle-cost equation of engineering systems at high safety levels by providing reliable estimates of future system states. Such estimates are key for planning and decision making in an operational setting. While technology solutions have made considerable advances, the tie-in to the systems engineering process is lagging behind, which delays the fielding of PHM-enabled systems. The derivation of specifications from high-level requirements for algorithm performance to ensure quality predictions is not well developed. From an engineering perspective, some key parameters driving the requirements for prognostics performance include: (1) the maximum allowable Probability of Failure (PoF) of the prognostic system, to bound the risk of losing an asset; (2) tolerable limits on proactive maintenance, to minimize missed opportunity of asset usage; (3) lead time, to specify the amount of advance warning needed for actionable decisions; and (4) required confidence, to specify when a prognosis is sufficiently good to be used. This paper takes a systems engineering view of the requirements specification process and presents a method for the flowdown process. A case study based on an electric Unmanned Aerial Vehicle (e-UAV) scenario demonstrates how top-level requirements for performance, cost, and safety flow down to the health management level and specify quantitative requirements for prognostic algorithm performance.
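Requirement (1) above, the maximum allowable PoF within the required lead time, can be expressed as a simple check on a probabilistic RUL estimate. The sketch below assumes Monte Carlo RUL samples and illustrative numbers; it is not the paper's flowdown method.

```python
import numpy as np

def meets_prognostic_requirement(rul_samples, lead_time, max_pof=0.01):
    """Check a prognostic estimate against a flowed-down requirement:
    the probability that the asset fails within the required lead-time
    window must not exceed the allowable PoF. Illustrative only."""
    pof = float(np.mean(rul_samples <= lead_time))
    return pof <= max_pof, pof

rng = np.random.default_rng(7)
# Hypothetical RUL prediction: roughly normal, mean 120 h, sigma 10 h.
rul = rng.normal(120.0, 10.0, 100_000)
ok, pof = meets_prognostic_requirement(rul, lead_time=90.0)
print(f"requirement met: {ok}, PoF within lead time: {pof:.5f}")
```

The same pattern extends to the other flowed-down quantities, e.g. requiring that a stated confidence interval on RUL be narrow enough to support the maintenance decision.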
Conversion and matched filter approximations for serial minimum-shift keyed modulation
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Ryan, C. R.; Stilwell, J. H.
1982-01-01
Serial minimum-shift keyed (MSK) modulation, a technique for generating and detecting MSK using serial filtering, is ideally suited for high data rate applications provided the required conversion and matched filters can be closely approximated. Low-pass implementations of these filters as parallel inphase- and quadrature-mixer structures are characterized in this paper in terms of signal-to-noise ratio (SNR) degradation from ideal and envelope deviation. Several hardware implementation techniques utilizing microwave devices or lumped elements are presented. Optimization of parameter values results in realizations whose SNR degradation is less than 0.5 dB at error probabilities of 10⁻⁶.
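The SNR degradation caused by approximating a matched filter can be computed from the normalized correlation ρ between the approximate and ideal impulse responses (degradation = -20·log10|ρ|). The half-sinusoid "ideal" pulse and Gaussian approximation below are illustrative choices, not the paper's actual filter designs.

```python
import numpy as np

def mismatch_loss_db(h_approx, h_ideal):
    """SNR degradation (dB) of an approximate receive filter relative to
    the ideal matched filter, from their normalized correlation."""
    rho = np.dot(h_approx, h_ideal) / (
        np.linalg.norm(h_approx) * np.linalg.norm(h_ideal))
    return -20.0 * np.log10(abs(rho))

t = np.linspace(0.0, 1.0, 1000)
h_ideal = np.sin(np.pi * t)                        # half-sinusoid pulse shape
h_gauss = np.exp(-0.5 * ((t - 0.5) / 0.25) ** 2)   # smooth approximation
loss = mismatch_loss_db(h_gauss, h_ideal)
print(f"SNR degradation: {loss:.3f} dB")
```

A perfectly matched filter gives ρ = 1 and zero loss; optimizing the approximation's parameters amounts to maximizing ρ, which is how sub-0.5 dB degradations become achievable.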
NASA Technical Reports Server (NTRS)
Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.
2007-01-01
In the companion paper, [Appl. Opt. 46, 5853 (2007)] a highly accurate white light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument and applying them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility parameters via a nonlinear least-squares procedure that exploits the structure of the model. The pixel level parameters are then combined to obtain the required 'global' moment and dispersion parameters. The process is applied to both simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.
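The pixel-level step described above, fitting wavenumber, phase, intensity, and visibility by nonlinear least squares, can be sketched on synthetic fringe data. The model form, noise level, and starting guess below are illustrative; the paper's estimator exploits additional structure of the model.

```python
# Illustrative pixel-level fringe fit (assumed model and numbers, not the
# MAM calibration pipeline itself).
import numpy as np
from scipy.optimize import curve_fit

def fringe(x, amp, vis, k, phi):
    """Monochromatic pixel model: intensity, visibility, wavenumber, phase."""
    return amp * (1.0 + vis * np.cos(k * x + phi))

rng = np.random.default_rng(3)
x = np.linspace(0.0, 4.0, 300)                 # delay-line position (a.u.)
truth = (100.0, 0.8, 5.0, 0.4)
data = fringe(x, *truth) + rng.normal(0.0, 1.0, x.size)

p0 = (90.0, 0.7, 5.2, 0.0)                     # coarse initial guess
popt, _ = curve_fit(fringe, x, data, p0=p0)
print("fitted (amp, vis, k, phi):", np.round(popt, 3))
```

The per-pixel estimates of (amp, vis, k, phi) would then be combined across pixels to form the "global" moment and dispersion parameters the calibration requires.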
User's Guide for the Agricultural Non-Point Source (AGNPS) Pollution Model Data Generator
Finn, Michael P.; Scheidt, Douglas J.; Jaromack, Gregory M.
2003-01-01
BACKGROUND Throughout this user guide, we refer to datasets that we used in developing this software to support cartographic research and to produce datasets for conducting research. However, this software can be used with these datasets or with more 'generic' versions of data of the appropriate type. For example, throughout the guide we refer to national land cover data (NLCD) and digital elevation model (DEM) data from the U.S. Geological Survey (USGS) at a 30-m resolution, but any digital terrain model or land cover data at any appropriate resolution will produce results. Another key point to keep in mind is to use a consistent data resolution for all the datasets in each model run. The U.S. Department of Agriculture (USDA) developed the Agricultural Nonpoint Source (AGNPS) pollution model of watershed hydrology in response to the complex problem of managing nonpoint sources of pollution. AGNPS simulates the behavior of runoff, sediment, and nutrient transport from watersheds that have agriculture as their prime use. The model operates on a cell basis and is a distributed-parameter, event-based model requiring 22 input parameters. Output parameters are grouped primarily into hydrology, sediment, and chemical output (Young and others, 1995). Elevation, land cover, and soil are the base data from which the 22 input parameters required by AGNPS are extracted. For automatic parameter extraction, follow the general process described in this guide: extraction from the geospatial data through the AGNPS Data Generator, which generates the input parameters required by the pollution model (Finn and others, 2002).
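As a taste of automatic parameter extraction from the base data, the sketch below derives one terrain input, per-cell land slope, from a DEM grid. The function name and the synthetic 30-m DEM are illustrative and are not part of the AGNPS Data Generator itself.

```python
import numpy as np

def cell_slope_percent(dem, cell_size):
    """Per-cell land slope (%) from a DEM grid -- the kind of terrain
    input a cell-based model like AGNPS needs (hypothetical helper)."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)    # finite differences
    return 100.0 * np.hypot(dz_dx, dz_dy)

# Tiny synthetic 30-m DEM: a plane dropping 1.5 m per cell eastward.
cols = np.arange(6, dtype=float)
dem = 100.0 - 1.5 * np.tile(cols, (4, 1))         # elevation in metres
slope = cell_slope_percent(dem, cell_size=30.0)
print(slope[0, :3])                               # 5% slope everywhere
```

Real extraction chains the same idea across all 22 parameters, pulling cover and soil factors from the land cover and soil grids at the same cell resolution.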
CBM Resources/reserves classification and evaluation based on PRMS rules
NASA Astrophysics Data System (ADS)
Fa, Guifang; Yuan, Ruie; Wang, Zuoqian; Lan, Jun; Zhao, Jian; Xia, Mingjun; Cai, Dechao; Yi, Yanjing
2018-02-01
This paper introduces a set of definitions and classification requirements for coalbed methane (CBM) resources/reserves, based on the Petroleum Resources Management System (PRMS). The basic CBM classification criteria for 1P, 2P, and 3P reserves and contingent resources are put forward from the following aspects: ownership, project maturity, drilling requirements, testing requirements, economic requirements, infrastructure and market, timing of production and development, and so on. The volumetric method is used to evaluate the original gas in place (OGIP), with a focus on the analysis of key parameters and the principles of parameter selection, such as net thickness, ash and water content, coal rank and composition, coal density, cleat volume and saturation, and adsorbed gas content. A dynamic method is used to assess the reserves and recovery efficiency. Because differences in rock and fluid properties, displacement mechanisms, completion and operating practices, and wellbore type result in different production curve characteristics, the factors affecting production behavior, the dewatering period, pressure build-up, and interference effects were analyzed. The conclusions and results of the paper can serve as important references for the reasonable assessment of CBM resources/reserves.
Jennings, T A; Scheer, A; Emodi, A; Puderbach, L; King, S; Norton, T
1996-01-01
The principal objectives of this paper are (a) to develop the rationale for conducting an installation qualification (IQ) and operational qualification (OQ) of a vacuum freeze-dryer; (b) to identify the key elements that require verification for completion of the IQ; and (c) to establish the environmental and operational parameters necessary for the OQ of the vacuum freeze-dryer.
Capsule implosion optimization during the indirect-drive National Ignition Campaign
NASA Astrophysics Data System (ADS)
Landen, O. L.; Edwards, J.; Haan, S. W.; Robey, H. F.; Milovich, J.; Spears, B. K.; Weber, S. V.; Clark, D. S.; Lindl, J. D.; MacGowan, B. J.; Moses, E. I.; Atherton, J.; Amendt, P. A.; Boehly, T. R.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Frenje, J. A.; Glenzer, S. H.; Hamza, A.; Hammel, B. A.; Hicks, D. G.; Hoffman, N.; Izumi, N.; Jones, O. S.; Kilkenny, J. D.; Kirkwood, R. K.; Kline, J. L.; Kyrala, G. A.; Marinak, M. M.; Meezan, N.; Meyerhofer, D. D.; Michel, P.; Munro, D. H.; Olson, R. E.; Nikroo, A.; Regan, S. P.; Suter, L. J.; Thomas, C. A.; Wilson, D. C.
2011-05-01
Capsule performance optimization campaigns will be conducted at the National Ignition Facility [G. H. Miller, E. I. Moses, and C. R. Wuest, Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition. The campaigns will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models using a variety of ignition capsule surrogates before proceeding to cryogenic-layered implosions and ignition experiments. The quantitative goals and technique options and down selections for the tuning campaigns are first explained. The computationally derived sensitivities to key laser and target parameters are compared to simple analytic models to gain further insight into the physics of the tuning techniques. The results of the validation of the tuning techniques at the OMEGA facility [J. M. Soures et al., Phys. Plasmas 3, 2108 (1996)] under scaled hohlraum and capsule conditions relevant to the ignition design are shown to meet the required sensitivity and accuracy. A roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget. Finally, we show how the tuning precision will be improved after a number of shots and iterations to meet an acceptable level of residual uncertainty.
Achieving mask order processing automation, interoperability and standardization based on P10
NASA Astrophysics Data System (ADS)
Rodriguez, B.; Filies, O.; Sadran, D.; Tissier, Michel; Albin, D.; Stavroulakis, S.; Voyiatzis, E.
2007-02-01
Last year the MUSCLE (Masks through User's Supply Chain: Leadership by Excellence) project was presented; here we report on its progress. A key process in mask supply chain management is the exchange of technical information for ordering masks. This process is large, complex, company specific, and error prone, and it leads to longer cycle times and higher costs due to missing or wrong inputs. Its automation and standardization could produce significant benefits. We need to agree on the standard for mandatory and optional parameters, and also on a common way to describe parameters when ordering. A system was created to improve performance in terms of Key Performance Indicators (KPIs) such as cycle time and cost of production. This tool allows us to evaluate and measure the effect of individual factors, as well as the effect of implementing the improvements of the complete project. Next, a benchmark study and a gap analysis were performed. These studies show the feasibility of standardization, as there is a large overlap in requirements. We find that the SEMI P10 standard needs enhancements. A format supporting the standard is required, and XML offers the ability to describe P10 in a flexible way. Beyond using XML for P10, the semantics of the mask order should also be addressed. A system design and requirements for a reference implementation of a P10-based management system are presented, covering a mechanism for evolution and version management and a design for P10 editing and data validation.
Ma, Athen; Mondragón, Raúl J.
2015-01-01
A core comprises a group of central and densely connected nodes which governs the overall behaviour of a network. It is recognised as one of the key meso-scale structures in complex networks. Profiling this meso-scale structure currently relies on a limited number of methods which are often complex and parameter dependent or require a null model. As a result, scalability issues are likely to arise when dealing with very large networks, together with the need for subjective adjustment of parameters. The notion of a rich-club describes nodes which are essentially the hub of a network, as they play a dominating role in structural and functional properties. The definition of a rich-club naturally emphasises high degree nodes and divides a network into two subgroups. Here, we develop a method to characterise a rich-core in networks by theoretically coupling the underlying principle of a rich-club with the escape time of a random walker. The method is fast, scalable to large networks and completely parameter free. In particular, we show that the evolution of the core in the World Trade and C. elegans networks corresponds to responses to historical events and key stages in their physical development, respectively.
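As an illustrative aside, the rank-based idea behind a rich-core can be sketched in a few lines: rank nodes by degree and place the core boundary where the number of links to better-ranked nodes peaks. This is a minimal sketch of that principle only; the paper's full method additionally uses the escape time of a random walker, which is not reproduced here, and the function name is our own.

```python
def rich_core(adj):
    """Sketch of a rank-based core boundary (after the rich-core idea).

    adj: dict mapping node -> set of neighbour nodes (undirected graph).
    Returns the set of core nodes.
    """
    # Rank nodes by degree, highest first.
    ranked = sorted(adj, key=lambda n: len(adj[n]), reverse=True)
    rank = {n: i for i, n in enumerate(ranked)}
    # For each node, count links to better-ranked (higher-degree) nodes.
    k_plus = [sum(1 for m in adj[n] if rank[m] < rank[n]) for n in ranked]
    # The core boundary sits where that count is maximal.
    r_star = k_plus.index(max(k_plus))
    return set(ranked[:r_star + 1])
```

On a small hub-and-spoke graph with a triangle at the centre, this returns the triangle as the core. Note the method is parameter free, which is the property the abstract emphasises.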
Analysis of navigation and guidance requirements for commercial VTOL operations
NASA Technical Reports Server (NTRS)
Hoffman, W. C.; Zvara, J.; Hollister, W. M.
1975-01-01
The paper presents some results of a program undertaken to define navigation and guidance requirements for commercial VTOL operations in the takeoff, cruise, terminal and landing phases of flight in weather conditions up to and including Category III. Quantitative navigation requirements are given for the parameters range, coverage, operation near obstacles, horizontal accuracy, multiple landing aircraft, multiple pad requirements, inertial/radio-inertial requirements, reliability/redundancy, update rate, and data link requirements in all flight phases. A multi-configuration straw-man navigation and guidance system for commercial VTOL operations is presented. Operation of the system is keyed to a fully automatic approach for navigation, guidance and control, with pilot as monitor-manager. The system is a hybrid navigator using a relatively low-cost inertial sensor with DME updates and MLS in the approach/departure phases.
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time-history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency-function parameters characterizing an unsteady effect. For estimation of unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
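The harmonic-analysis step mentioned above can be sketched as an ordinary least-squares fit of in-phase and out-of-phase components of a forced-oscillation response. This is an illustrative sketch under our own assumptions, not the authors' code: the function name and the interpretation of the cosine/sine terms as in-phase and damping contributions are ours.

```python
import numpy as np

def harmonic_components(t, y, omega):
    """Least-squares fit of y(t) ~ a0 + a*cos(omega*t) + b*sin(omega*t).

    Returns (a0, a, b). In a roll-oscillation test the cosine term is in
    phase with the imposed motion and the sine term is the out-of-phase
    (damping-like) contribution.
    """
    X = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # a0, a, b
```

Standard errors of the coefficients (the quantity the study compares) follow from the usual linear-regression covariance of `X`.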
NASA Astrophysics Data System (ADS)
Alligné, S.; Decaix, J.; Müller, A.; Nicolet, C.; Avellan, F.; Münch, C.
2017-04-01
Due to the massive penetration of alternative renewable energies, hydropower is a key energy conversion technology for stabilizing the electrical power network by using hydraulic machines at off design operating conditions. At full load, the axisymmetric cavitation vortex rope developing in Francis turbines acts as an internal source of energy, leading to an instability commonly referred to as self-excited surge. 1-D models are developed to predict this phenomenon and to define the range of safe operating points for a hydropower plant. These models require a calibration of several parameters. The present work aims at identifying these parameters by using CFD results as objective functions for an optimization process. A 2-D Venturi and 3-D Francis turbine are considered.
Earthquake ground motion: Chapter 3
Luco, Nicolas; Kircher, Charles A.; Crouse, C. B.; Charney, Finley; Haselton, Curt B.; Baker, Jack W.; Zimmerman, Reid; Hooper, John D.; McVitty, William; Taylor, Andy
2016-01-01
Most of the effort in seismic design of buildings and other structures is focused on structural design. This chapter addresses another key aspect of the design process—characterization of earthquake ground motion into parameters for use in design. Section 3.1 describes the basis of the earthquake ground motion maps in the Provisions and in ASCE 7 (the Standard). Section 3.2 has examples for the determination of ground motion parameters and spectra for use in design. Section 3.3 describes site-specific ground motion requirements and provides example site-specific design and MCER response spectra and example values of site-specific ground motion parameters. Section 3.4 discusses and provides an example for the selection and scaling of ground motion records for use in various types of response history analysis permitted in the Standard.
Hone, J.; Pech, R.; Yip, P.
1992-01-01
Infectious diseases establish in a population of wildlife hosts when the number of secondary infections is greater than or equal to one. To estimate whether establishment will occur requires either extensive experience or a mathematical model of disease dynamics together with estimates of the parameters of the disease model. The latter approach is explored here. Methods for estimating key model parameters, the transmission coefficient (beta) and the basic reproductive rate (R0), are described using classical swine fever (hog cholera) in wild pigs as an example. The tentative results indicate that an acute infection of classical swine fever will establish in a small population of wild pigs. Data required for estimation of disease transmission rates are reviewed, and sources of bias and alternative methods are discussed. A comprehensive evaluation of the biases and efficiencies of the methods is needed.
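The establishment criterion above can be made concrete with a textbook density-dependent transmission model (a minimal sketch, not the paper's specific formulation; the parameter values below are hypothetical, not swine-fever estimates):

```python
def basic_reproductive_rate(beta, N, alpha):
    """R0 for a simple SI-type model with density-dependent transmission.

    beta  : transmission coefficient (per host per unit time)
    N     : host population size or density
    alpha : rate of leaving the infectious class (mortality + recovery)

    Disease establishes when R0 >= 1.
    """
    return beta * N / alpha

def threshold_density(beta, alpha):
    """Host density N_T below which the disease fades out (R0 < 1)."""
    return alpha / beta
```

For example, with beta = 0.002, alpha = 0.1 and N = 100 hosts, R0 = 2, so each case produces two secondary cases and the infection establishes; the threshold density is 50 hosts.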
The frequency response of dynamic friction: Enhanced rate-and-state models
NASA Astrophysics Data System (ADS)
Cabboi, A.; Putelat, T.; Woodhouse, J.
2016-07-01
The prediction and control of friction-induced vibration requires a sufficiently accurate constitutive law for dynamic friction at the sliding interface: for linearised stability analysis, this requirement takes the form of a frictional frequency response function. Systematic measurements of this frictional frequency response function are presented for small samples of nylon and polycarbonate sliding against a glass disc. Previous efforts to explain such measurements from a theoretical model have failed, but an enhanced rate-and-state model is presented which is shown to match the measurements remarkably well. The tested parameter space covers a range of normal forces (10-50 N), of sliding speeds (1-10 mm/s) and frequencies (100-2000 Hz). The key new ingredient in the model is the inclusion of contact stiffness to take into account elastic deformations near the interface. A systematic methodology is presented to discriminate among possible variants of the model, and then to identify the model parameter values.
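For orientation, the classic rate-and-state law that the enhanced model builds on can be sketched as follows. This is the standard Dieterich aging-law formulation with illustrative parameter values, not the enhanced model of the paper (which adds contact stiffness), and the numbers are hypothetical.

```python
import math

def mu_ss(v, mu0=0.6, a=0.012, b=0.015, v0=1e-3):
    """Steady-state friction coefficient of the classic rate-and-state law."""
    return mu0 + (a - b) * math.log(v / v0)

def velocity_step(v1, v2, mu0=0.6, a=0.012, b=0.015, v0=1e-3, Dc=1e-5,
                  dt=1e-5, steps=20000):
    """Integrate the aging law theta' = 1 - v*theta/Dc through a sliding-
    velocity step v1 -> v2 and return the friction coefficient history."""
    theta = Dc / v1  # start at steady state for v1
    mu = []
    for _ in range(steps):
        theta += dt * (1.0 - v2 * theta / Dc)
        mu.append(mu0 + a * math.log(v2 / v0) + b * math.log(v0 * theta / Dc))
    return mu
```

The step response shows the two ingredients a frictional frequency response probes: an instantaneous direct effect (the `a` term) followed by state evolution over the slip distance `Dc` toward the new steady state.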
Scholz, Norman; Behnke, Thomas; Resch-Genger, Ute
2018-01-01
Micelles are of increasing importance as versatile carriers for hydrophobic substances and as nanoprobes for a wide range of pharmaceutical, diagnostic, medical, and therapeutic applications. A key parameter indicating the formation and stability of micelles is the critical micelle concentration (CMC). Here, we determined the CMC of common anionic, cationic, and non-ionic surfactants fluorometrically, using different fluorescent probes and fluorescence parameters for signal detection, and compared the results with conductometric and surface tension measurements. Based upon these results, the requirements, advantages, and pitfalls of each method are discussed. Our study underlines the versatility of fluorometric methods, which do not impose specific requirements on surfactants and are especially suited for the quantification of very low CMC values. Conductivity and surface tension measurements yield smaller uncertainties, particularly for high CMC values, yet are more time- and substance-consuming and are not suitable for every surfactant.
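All three techniques mentioned share a common data-analysis step: the CMC is read off as the break point where the measured response (fluorescence intensity, conductivity, or surface tension) changes slope with concentration. A minimal sketch of that break-point estimate, under the assumption of two linear regimes, is:

```python
import numpy as np

def cmc_from_intensity(conc, intensity):
    """Estimate the CMC as the intersection of two straight lines fitted
    below and above the slope change. Tries every breakpoint and keeps
    the split with the smallest combined fit residual."""
    best = None
    for k in range(2, len(conc) - 2):
        p1, r1 = np.polyfit(conc[:k], intensity[:k], 1, full=True)[:2]
        p2, r2 = np.polyfit(conc[k:], intensity[k:], 1, full=True)[:2]
        res = float(r1[0] if r1.size else 0.0) + float(r2[0] if r2.size else 0.0)
        if best is None or res < best[0]:
            best = (res, p1, p2)
    _, (m1, c1), (m2, c2) = best
    return (c2 - c1) / (m1 - m2)  # concentration where the two lines cross
```

Real titration data are noisier than this sketch assumes, and probe-dependent curve shapes (e.g. sigmoidal pyrene ratios) often call for a different fit model.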
Observing the ExoEarth: Simulating the Retrieval of Exoplanet Parameters Using DSCOVR
NASA Astrophysics Data System (ADS)
Kane, S.; Cowan, N. B.; Domagal-Goldman, S. D.; Herman, J. R.; Robinson, T.; Stine, A.
2017-12-01
The field of exoplanets has rapidly expanded from detection to include exoplanet characterization. This has been enabled by developments such as the detection of terrestrial-sized planets and the use of transit spectroscopy to study exoplanet atmospheres. Studies of rocky planets are leading towards the direct imaging of exoplanets and the development of techniques to extract their intrinsic properties. The importance of properties such as rotation, albedo, and obliquity are significant since they inform planet formation theories and are key input parameters for Global Circulation Models used to determine surface conditions, including habitability. Thus, a complete characterization of exoplanets for understanding habitable climates requires the ability to measure these key planetary parameters. The retrieval of planetary rotation rates, albedos, and obliquities from highly undersampled imaging data can be honed using satellites designed to study the Earth's atmosphere. In this talk I will describe how the Deep Space Climate Observatory (DSCOVR) provides a unique opportunity to test such retrieval methods using data for the sunlit hemisphere of the Earth. Our methods use the high-resolution DSCOVR-EPIC images to simulate the Earth as an exoplanet, by deconvolving the images to match a variety of expected exoplanet mission requirements, and by comparing EPIC data with the cavity radiometer data from DSCOVR-NISTAR that views the Earth as a single pixel. Through this methodology, we are creating a grid of retrieval states as a function of image resolution, observing cadence, passband, etc. Our modeling of the DSCOVR data will provide an effective baseline from which to develop tools that can be applied to a variety of exoplanet imaging data.
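The rotation-rate retrieval described above ultimately reduces to finding the dominant periodicity in a disc-integrated reflectance time series. A minimal sketch of that step for evenly sampled data (the retrieval grids in the talk involve far more than this; the function name is ours):

```python
import numpy as np

def dominant_period(t, flux):
    """Return the dominant period in an evenly sampled light curve via FFT.

    t must be evenly spaced; the zero-frequency (mean) bin is skipped.
    """
    flux = flux - np.mean(flux)  # remove the DC component
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    power = np.abs(np.fft.rfft(flux)) ** 2
    return 1.0 / freqs[np.argmax(power[1:]) + 1]
```

For the sparse, irregular cadences expected of exoplanet imaging missions, a Lomb-Scargle periodogram would replace the plain FFT, which is exactly the kind of degradation the DSCOVR-based retrieval grid is designed to quantify.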
NASA Astrophysics Data System (ADS)
Fusco, T.; Villecroze, R.; Jarno, A.; Bacon, R.
2011-09-01
The second-generation instrument MUSE for the VLT has been designed to take advantage of the ESO Adaptive Optics Facility (AOF). The two Adaptive Optics (AO) modes (GLAO in Wide Field Mode [WFM] and LTAO in Narrow Field Mode [NFM]) will be used. To achieve its key science goals, MUSE will require information on the full-system (atmosphere, AO, telescope, and instrument) image quality and its variation with field position and wavelength. For example, optimal summation of a large number of deep-field exposures in WFM will require a good knowledge of the PSF. In this paper, we present an exhaustive analysis of the MUSE Wide Field Mode PSF evolution, both spatially and spectrally. For that purpose we have coupled a complete AO simulation tool developed at ONERA with the MUSE instrumental PSF simulation. The relative impact of atmospheric and system parameters (seeing, Cn^2, LGS and NGS positions, etc.) with respect to differential MUSE aberrations per channel (i.e., slicer and IFU) is analysed. The results allow us (in close collaboration with astronomers) to define pertinent parameters (fit parameters using a Moffat function) for a PSF reconstruction process (estimation of these parameters using GLAO telemetry) and to propose an efficient and robust algorithm to be implemented in the MUSE pipeline. The extension of the spatial and spectral PSF analysis to the NFM case is discussed and preliminary results are given. Some specific requirements for the generalisation of the GLAO PSF reconstruction process to the LTAO case are derived from these early results.
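For reference, the Moffat function used as the PSF fit model has a simple closed form; the sketch below gives the radial profile and its FWHM (the paper's actual fit parameters and their wavelength dependence are not reproduced here):

```python
import numpy as np

def moffat(r, I0, alpha, beta):
    """Moffat radial profile I(r) = I0 * (1 + (r/alpha)^2)^(-beta)."""
    return I0 * (1.0 + (r / alpha) ** 2) ** (-beta)

def moffat_fwhm(alpha, beta):
    """Full width at half maximum of the Moffat profile."""
    return 2.0 * alpha * np.sqrt(2.0 ** (1.0 / beta) - 1.0)
```

The Moffat profile is preferred over a Gaussian for AO-corrected PSFs because the exponent `beta` controls the strength of the extended wings.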
NASA Astrophysics Data System (ADS)
Ireland, Gareth; North, Matthew R.; Petropoulos, George P.; Srivastava, Prashant K.; Hodges, Crona
2015-04-01
Acquiring accurate information on the spatio-temporal variability of soil moisture content (SM) and evapotranspiration (ET) is of key importance for extending our understanding of the Earth system's physical processes, and is also required in a wide range of multi-disciplinary research studies and applications. Earth Observation (EO) technology provides an economically feasible solution for deriving continuous spatio-temporal estimates of key parameters characterising land surface interactions, including ET as well as SM. Such information is of key value to practitioners, decision makers and scientists alike. The PREMIER-EO project, recently funded by High Performance Computing Wales (HPCW), is a research initiative directed towards developing a better understanding of EO technology's present ability to derive operational estimations of surface fluxes and SM. Moreover, the project aims at addressing knowledge gaps related to the operational estimation of such parameters, and thus contributes to ongoing global efforts to enhance the accuracy of those products. In this presentation we introduce the PREMIER-EO project, providing a detailed overview of the research aims and objectives for the one-year duration of the project's implementation. Subsequently, we present the initial results of the work carried out, in particular an all-inclusive and robust evaluation of the accuracy of existing operational products of ET and SM from different ecosystems globally. The research outcomes of this project, once completed, will provide an important contribution towards addressing the knowledge gaps related to the operational estimation of ET and SM. The project's results will also support ongoing global efforts towards the operational development of related products using technologically advanced EO instruments which were launched recently or are planned for launch in the next 1-2 years.
Key Words: PREMIER-EO, HPC Wales, Soil Moisture, Evapotranspiration, Earth Observation
NASA Astrophysics Data System (ADS)
Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano
2018-06-01
One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel as, for example, the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint as the whole raw keys can be used for both parameter estimation and secret key generation, without compromising the security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in a MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.
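The finite-size trade-off that the paper's CV result removes can be illustrated for a conventional discrete-variable protocol: sacrificing m of n raw bits yields an error-rate estimate whose confidence width shrinks only as 1/sqrt(m). This is a generic Hoeffding-bound sketch under our own assumptions, not the security analysis of the paper.

```python
import math

def qber_upper_bound(errors, m, eps=1e-10):
    """Hoeffding upper confidence bound on the channel error rate after
    publicly comparing m sacrificed bits, of which `errors` disagreed.
    eps is the tolerated failure probability of the bound."""
    return errors / m + math.sqrt(math.log(1.0 / eps) / (2.0 * m))

def remaining_key(n, m):
    """Raw key left for distillation when the m estimation bits are
    disclosed and discarded, i.e. the cost the CV-MDI result avoids."""
    return n - m
```

Enlarging m tightens the bound (and hence the distillable key per remaining bit) but shrinks the key that remains; using the whole raw key for both tasks, as in the paper, removes this tension.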
Models for the Economics of Resilience
Gilbert, Stanley; Ayyub, Bilal M.
2016-01-01
Estimating the economic burden of disasters requires appropriate models that account for key characteristics and decision-making needs. Natural disasters in 2011 resulted in $366 billion in direct damages and 29,782 fatalities worldwide. Average annual losses in the US amount to about $55 billion. Enhancing community and system resilience could lead to significant savings through risk reduction and expeditious recovery. The management of such reduction and recovery is facilitated by an appropriate definition of resilience and associated metrics with models for examining the economics of resilience. This paper provides such microeconomic models, compares them, examines their sensitivities to key parameters, and illustrates their uses. Such models enable improving the resiliency of systems to meet target levels.
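A common building block for resilience metrics of the kind the paper formalizes is the integral of lost functionality over the disruption-to-recovery window (the Bruneau-style "resilience triangle"). The sketch below is that generic metric, not the paper's specific microeconomic models.

```python
def resilience_loss(times, Q):
    """Trapezoidal integral of lost functionality, int (1 - Q(t)) dt,
    over the disruption-to-recovery window.

    times : sample times (increasing)
    Q     : system functionality at each time, on a 0-1 scale
    """
    loss = 0.0
    for (t0, q0), (t1, q1) in zip(zip(times, Q), zip(times[1:], Q[1:])):
        loss += ((1.0 - q0) + (1.0 - q1)) / 2.0 * (t1 - t0)
    return loss
```

Multiplying this loss by a monetary rate of service value gives a simple economic burden estimate, which is the quantity resilience investments aim to reduce.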
International Space Station Major Constituent Analyzer On-Orbit Performance
NASA Technical Reports Server (NTRS)
Gardner, Ben D.; Erwin, Phillip M.; Thoresen, Souzan; Wiedemann, Rachel; Matty, Chris
2015-01-01
The Major Constituent Analyzer is a mass spectrometer based system that measures the major atmospheric constituents on the International Space Station. A number of limited-life components require periodic change-out, including the ORU 02 analyzer and the ORU 08 Verification Gas Assembly. Improvements to ion pump operation and ion source tuning have improved lifetime performance of the current ORU 02 design. The most recent ORU 02 analyzer assemblies, as well as ORU 08, have operated nominally. For ORU 02, the ion source filaments and ion pump lifetime continue to be key determinants of MCA performance and logistical support. Monitoring several key parameters provides the capacity to monitor ORU health and properly anticipate end of life.
NASA Astrophysics Data System (ADS)
Fan, Shuwei; Bai, Liang; Chen, Nana
2016-08-01
As one of the key elements of high-power laser systems, the pulse compression multilayer dielectric grating is required to provide broader bandwidth, higher diffraction efficiency, and a higher damage threshold. In this paper, the multilayer dielectric film and the multilayer dielectric gratings (MDG) were designed by the eigen-matrix method and optimized with the help of a genetic algorithm and the rigorous coupled-wave method. The reflectivity was close to 100% and the bandwidth was over 250 nm, twice that of the unoptimized film structure. Simulation software for the standing-wave field distribution within the MDG was developed and the electric field of the MDG was calculated. The key parameters affecting the electric field distribution were also studied.
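The film-design step rests on the characteristic (transfer) matrix of a dielectric stack. The sketch below computes normal-incidence reflectance for a lossless stack; it illustrates the principle only (the paper treats oblique incidence and the grating layer via rigorous coupled-wave analysis, which is far beyond this sketch), and the indices in the example are generic high/low values, not the paper's materials.

```python
import cmath, math

def reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_sub=1.52):
    """Normal-incidence reflectance of a lossless dielectric stack,
    via the characteristic-matrix method. Layers are listed from the
    incident-medium side; d in the same units as wavelength."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * math.pi * n * d / wavelength  # layer phase thickness
        L = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
             [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        M = [[M[0][0]*L[0][0] + M[0][1]*L[1][0], M[0][0]*L[0][1] + M[0][1]*L[1][1]],
             [M[1][0]*L[0][0] + M[1][1]*L[1][0], M[1][0]*L[0][1] + M[1][1]*L[1][1]]]
    B = M[0][0] + M[0][1] * n_sub
    C = M[1][0] + M[1][1] * n_sub
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2
```

A stack of quarter-wave high/low-index pairs reflects nearly 100% at the design wavelength, which is the starting point the genetic-algorithm optimization then broadens.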
Surface radiation budget for climate applications
NASA Technical Reports Server (NTRS)
Suttles, J. T. (Editor); Ohring, G. (Editor)
1986-01-01
The Surface Radiation Budget (SRB) consists of the upwelling and downwelling radiation fluxes at the surface, separately determined for the broadband shortwave (SW) (0 to 5 microns) and longwave (LW) (greater than 5 microns) spectral regions, plus certain key parameters that control these fluxes, specifically SW albedo, LW emissivity, and surface temperature. The uses and requirements for SRB data, a critical assessment of current capabilities for producing these data, and directions for future research are presented.
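The role of the controlling parameters named above (albedo, emissivity, surface temperature) can be made explicit by assembling the net all-wave surface flux from the four SRB components. This is a textbook-level sketch, not the report's retrieval scheme, and the example numbers are generic.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_net_radiation(sw_down, lw_down, albedo, emissivity, T_s):
    """Net all-wave surface radiation (W m^-2) from the four SRB components.

    Upwelling SW = albedo * sw_down; upwelling LW = emitted thermal flux
    plus the reflected fraction (1 - emissivity) of the downwelling LW.
    """
    sw_net = (1.0 - albedo) * sw_down
    lw_up = emissivity * SIGMA * T_s ** 4 + (1.0 - emissivity) * lw_down
    return sw_net + lw_down - lw_up
```

The sensitivity is direct: raising the SW albedo or the surface temperature lowers the net flux, which is why these parameters are singled out as the controls on the budget.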
A Review of United States Air Force and Department of Defense Aerospace Propulsion Needs
2006-01-01
evolved expendable launch vehicle EHF extremely high frequency EMA electromechanical actuator EMDP engine model derivative program EMTVA...condition. A key aspect of the model was which of the two methods was used: parameters of the system or propulsion variables produced in the design ... models for turbopump analysis and design. In addition, the skills required to design a high-performance turbopump are very specialized and must be
Prusakowski, Melanie K; Chen, Audrey P
2017-02-01
Pediatric sepsis is distinct from adult sepsis in its definitions, clinical presentations, and management. Recognition of pediatric sepsis is complicated by the various pediatric-specific comorbidities that contribute to its mortality and the age- and development-specific vital sign and clinical parameters that obscure its recognition. This article outlines the clinical presentation and management of sepsis in neonates, infants, and children, and highlights some key populations who require specialized care.
Review of concrete biodeterioration in relation to nuclear waste.
Turick, Charles E; Berry, Christopher J
2016-01-01
Storage of radioactive waste in concrete structures is a means of containing wastes and related radionuclides generated from nuclear operations in many countries. Previous efforts related to microbial impacts on concrete structures used to contain radioactive waste showed that microbial activity can play a significant role in the process of concrete degradation and ultimately structural deterioration. This literature review examines the research in this field and is focused on specific parameters that are applicable to modeling and prediction of the fate of concrete structures used to store or dispose of radioactive waste. Rates of concrete biodegradation vary with the environmental conditions, illustrating a need to understand the bioavailability of key compounds involved in microbial activity. Microbial growth requires pH and osmotic pressure within a certain range, as well as the availability and abundance of energy sources such as components involved in sulfur, iron, and nitrogen oxidation. Carbon flow and availability are also factors to consider in predicting concrete biodegradation. The microbial contribution to degradation of the concrete structures containing radioactive waste is a constant possibility. The rate and degree of concrete biodegradation are dependent on numerous physical, chemical, and biological parameters. Parameters to focus on for modeling activities, and possible options for mitigation that would minimize concrete biodegradation, are discussed; these include key conditions that drive microbial activity on concrete surfaces.
System-level view of geospace dynamics: Challenges for high-latitude ground-based observations
NASA Astrophysics Data System (ADS)
Donovan, E.
2014-12-01
Increasingly, research programs including GEM, CEDAR, GEMSIS, GO Canada, and others are focusing on how geospace works as a system. Coupling sits at the heart of system-level dynamics. In all cases, coupling is accomplished via fundamental processes such as reconnection and plasma waves, and can be between regions, energy ranges, species, scales, and energy reservoirs. Three views of geospace are required to attack system-level questions. First, we must observe the fundamental processes that accomplish the coupling. This "observatory view" requires in situ measurements by satellite-borne instruments or remote sensing from powerful, well-instrumented ground-based observatories organized around, for example, Incoherent Scatter Radars. Second, we need to see how this coupling is controlled and what it accomplishes. This demands quantitative observations of the system elements that are being coupled. This "multi-scale view" is accomplished by networks of ground-based instruments, and by global imaging from space. Third, if we take geospace as a whole, the system is too complicated, so at the top level we need time series of simple quantities such as indices that capture important aspects of the system-level dynamics. This requires a "key parameter view" that is typically provided through indices such as AE and Dst. With the launch of MMS, and ongoing missions such as THEMIS, Cluster, Swarm, RBSP, and ePOP, we are entering a once-in-a-lifetime epoch with a remarkable fleet of satellites probing processes at key regions throughout geospace, so the observatory view is secure. With a few exceptions, our key parameter view provides what we need. The multi-scale view, however, is compromised by space/time scales that are important but under-sampled, by combined coverage and resolution that fall short of what we need, and by inadequate conjugate observations.
In this talk, I present an overview of what we need for taking system level research to its next level, and how high latitude ground based observations can address these challenges.
Assessment of the Economic Potential of Distributed Wind in Colorado, Minnesota, and New York
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baring-Gould, Edward I; McCabe, Kevin; Sigrin, Benjamin O
Stakeholders in the small and distributed wind space require access to better tools and data for more informed decisions on high-impact topics, including project planning, policymaking, and funding allocation. A major challenge in obtaining improved information is in the identification of favorable sites - namely, the intersection of sufficient wind resource with economic parameters such as retail rates, incentives, and other policies. This presentation made at the AWEA WINDPOWER Conference and Exhibition in Chicago in 2018 explores the researchers' objective: To understand the spatial variance of key distributed wind parameters and identify where they intersect to form pockets of favorable areas in Colorado, Minnesota, and New York.
Performance analysis of SS7 congestion controls under sustained overload
NASA Astrophysics Data System (ADS)
Manfield, David R.; Millsteed, Gregory K.; Zukerman, Moshe
1994-04-01
Congestion controls are a key factor in achieving the robust performance required of common channel signaling (CCS) networks in the face of partial network failures and extreme traffic loads, especially as networks become large and carry high traffic volume. The CCITT recommendations define a number of types of congestion control, and the parameters of the controls must be well set in order to ensure their efficacy under transient and sustained signalling network overload. The objective of this paper is to present a modeling approach to the determination of the network parameters that govern the performance of the SS7 congestion controls under sustained overload. Results of the investigation by simulation are presented and discussed.
Design integration for minimal energy and cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halldane, J.E.
The authors present requirements for creating alternative energy conserving designs including energy management and architectural, plumbing, mechanical, electrical, electronic and optical design. Parameters of power, energy, life cycle costs and benefit for resource for an evaluation by the interested parties are discussed. They present an analysis of power systems through a seasonal power distribution diagram. An analysis of cost systems includes capital cost from the power components, annual costs from the utility energy use, and finance costs with loans, taxes, settlement and design fees. Equations are transposed to the evaluative parameter and are uniquely explicit with consistent symbols, parameter definitions, dual and balanced units, unit conversions, criteria for operation, incorporated constants for rapid calculations, references to data in the handbook, other common terms, and instrumentation for the measurement. Each component equation has a key power diagram.
Dewhirst, Oliver P; Roskilly, Kyle; Hubel, Tatjana Y; Jordan, Neil R; Golabek, Krystyna A; McNutt, J Weldon; Wilson, Alan M
2017-02-01
Changes in stride frequency and length with speed are key parameters in animal locomotion research. They are commonly measured in a laboratory on a treadmill or by filming trained captive animals. Here, we show that a clustering approach can be used to extract these variables from data collected by a tracking collar containing a GPS module and tri-axis accelerometers and gyroscopes. The method enables stride parameters to be measured during free-ranging locomotion in natural habitats. As it does not require labelled data, it is particularly suitable for use with difficult-to-observe animals. The method was tested on large data sets collected from collars on free-ranging lions and African wild dogs and validated using a domestic dog. © 2017. Published by The Company of Biologists Ltd.
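As a rough, hypothetical illustration of extracting a stride parameter from collar accelerometry (not the clustering method used in the paper), the sketch below estimates stride frequency from a synthetic vertical-acceleration trace by counting upward zero crossings; the sampling rate and signal are invented.

```python
import math

def stride_frequency(accel, fs):
    """Estimate stride frequency (Hz) from a vertical-axis accelerometer
    trace by counting upward zero crossings of the mean-removed signal."""
    mean = sum(accel) / len(accel)
    x = [a - mean for a in accel]
    crossings = sum(1 for i in range(1, len(x)) if x[i - 1] < 0 <= x[i])
    return crossings / (len(accel) / fs)

# Synthetic trace: a 1.9 Hz sinusoid sampled at 100 Hz stands in for a
# gait at roughly 1.9 strides per second (illustrative values only).
fs = 100
accel = [math.sin(2 * math.pi * 1.9 * i / fs) for i in range(fs * 10)]
print(round(stride_frequency(accel, fs), 2))
```

In practice the trace would be windowed and the per-window frequencies clustered by speed, but the zero-crossing step above conveys the basic measurement.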
NASA Astrophysics Data System (ADS)
Majd, Nayereh; Ghasemi, Zahra
2016-10-01
We have investigated a TPTQ state as the input state of non-ideal ferromagnetic detectors. The minimal spin polarization required to demonstrate spin entanglement according to an entanglement witness and the CHSH inequality with respect to (w.r.t.) their two free parameters has been found, and we have numerically shown that the entanglement witness is less stringent than direct tests of Bell's inequality in the CHSH form in the entangled limits of its free parameters. In addition, the lower limits of spin detection efficiency fulfilling a secure cryptographic key against eavesdropping have been derived. Finally, we have considered the TPTQ state as the output of a spin decoherence channel, and the region of ballistic transmission time w.r.t. spin relaxation time and spin dephasing time has been found.
NASA Astrophysics Data System (ADS)
Yu, Hao Yun; Liu, Chun-Hung; Shen, Yu Tian; Lee, Hsuan-Ping; Tsai, Kuen Yu
2014-03-01
Line edge roughness (LER) influencing the electrical performance of circuit components is a key challenge for electron-beam lithography (EBL) due to the continuous scaling of technology feature sizes. Controlling LER within an acceptable tolerance that satisfies International Technology Roadmap for Semiconductors requirements while achieving high throughput becomes a challenging issue. Although lower dosage and more-sensitive resist can be used to improve throughput, they would result in serious LER-related problems because of increasing relative fluctuation in the incident positions of electrons. Directed self-assembly (DSA) is a promising technique to relax LER-related pattern fidelity (PF) requirements because of its self-healing ability, which may benefit throughput. To quantify the potential of throughput improvement in EBL by introducing DSA for post healing, rigorous numerical methods are proposed to simultaneously maximize throughput by adjusting writing parameters of EBL systems subject to relaxed LER-related PF requirements. A fast, continuous model for parameter sweeping and a hybrid model for more accurate patterning prediction are employed for the patterning simulation. The tradeoff between throughput and DSA self-healing ability is investigated. Preliminary results indicate that significant throughput improvements are achievable at certain process conditions.
A Bayesian approach for parameter estimation and prediction using a computationally intensive model
Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...
2015-02-05
Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where $\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y=\eta(\theta)+\epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When nonlinearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model $\eta(\cdot)$. This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
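The emulator-plus-MCMC idea can be sketched in a few lines. The toy below assumes a one-parameter "expensive" model, uses piecewise-linear interpolation of an ensemble of runs in place of the paper's statistical emulator, and samples the emulated posterior with a Metropolis walker; none of the numbers relate to the DFT case study.

```python
import bisect
import math
import random

random.seed(0)

def eta(theta):
    """Stand-in for the expensive physics model eta(theta)."""
    return theta ** 2

# 1) Ensemble of model runs at a handful of design points.
design = [0.0, 0.5, 1.0, 1.5, 2.0]
runs = [eta(t) for t in design]

# 2) Cheap emulator: linear interpolation through the ensemble
#    (a crude substitute for a Gaussian-process response surface).
def emulator(theta):
    i = min(max(bisect.bisect(design, theta), 1), len(design) - 1)
    t0, t1 = design[i - 1], design[i]
    w = (theta - t0) / (t1 - t0)
    return (1 - w) * runs[i - 1] + w * runs[i]

# 3) Metropolis MCMC on the emulated posterior for y = eta(theta) + eps.
y_obs, sigma = 1.0, 0.05

def log_post(theta):
    if not 0.0 <= theta <= 2.0:          # flat prior on [0, 2]
        return -math.inf
    return -0.5 * ((y_obs - emulator(theta)) / sigma) ** 2

theta, samples = 0.5, []
for _ in range(20000):
    prop = theta + random.gauss(0, 0.1)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post_mean = sum(samples[5000:]) / len(samples[5000:])
print(round(post_mean, 2))                # concentrates near theta = 1
```

The expensive model is evaluated only five times; all 20,000 MCMC steps hit the emulator, which is the computational point of the approach.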
NASA Technical Reports Server (NTRS)
Parmar, Devendra S.; Shams, Qamar A.
2002-01-01
The strategy of NASA to explore space objects in the vicinity of Earth and other planets of the solar system includes robotic and human missions. This strategy requires a road map for technology development that will support the robotic exploration and provide safety for the humans traveling to other celestial bodies. Aeroassist is one of the key elements of technology planning for the success of future robot and human exploration missions to other celestial bodies. Measurement of aerothermodynamic parameters such as temperature, pressure, and acceleration is of prime importance for aeroassist technology implementation and for the safety and affordability of the mission. Instrumentation and methods to measure such parameters have been reviewed in this report in view of past practices, current commercial availability of instrumentation technology, and the prospects of improvement and upgrade according to the requirements. Analysis of the usability of each identified instruments in terms of cost for efficient weight-volume ratio, power requirement, accuracy, sample rates, and other appropriate metrics such as harsh environment survivability has been reported.
Influences on cocaine tolerance assessed under a multiple conjunctive schedule of reinforcement.
Yoon, Jin Ho; Branch, Marc N
2009-11-01
Under multiple schedules of reinforcement, previous research has generally observed tolerance to the rate-decreasing effects of cocaine that has been dependent on schedule-parameter size in the context of fixed-ratio (FR) schedules, but not under the context of fixed-interval (FI) schedules of reinforcement. The current experiment examined the effects of cocaine on key-pecking responses of White Carneau pigeons maintained under a three-component multiple conjunctive FI (10 s, 30 s, & 120 s) FR (5 responses) schedule of food presentation. Dose-effect curves representing the effects of presession cocaine on responding were assessed in the context of (1) acute administration of cocaine, (2) chronic administration of cocaine, and (3) daily administration of saline. Chronic administration of cocaine generally resulted in tolerance to the response-rate decreasing effects of cocaine, and that tolerance was generally independent of relative FI value, as measured by changes in ED50 values. Daily administration of saline decreased ED50 values to those observed when cocaine was administered acutely. The results show that adding a FR requirement to FI schedules is not sufficient to produce schedule-parameter-specific tolerance. Tolerance to cocaine was generally independent of FI-parameter under the present conjunctive schedules, indicating that a ratio requirement, per se, is not sufficient for tolerance to be dependent on FI parameter.
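A hedged aside on the ED50 measure used above: a common way to estimate an ED50 from a dose-effect curve is log-linear interpolation between the two doses bracketing half of the control response rate. The doses and rates below are invented for illustration and are not the paper's data.

```python
import math

# Illustrative dose-effect curve: response rate as a fraction of control.
doses = [1.0, 3.0, 10.0, 30.0]        # mg/kg, hypothetical
rates = [0.95, 0.80, 0.35, 0.05]

def ed50(doses, rates, target=0.5):
    """ED50 via log-linear interpolation between the bracketing doses."""
    for (d0, r0), (d1, r1) in zip(zip(doses, rates), zip(doses[1:], rates[1:])):
        if r0 >= target >= r1:         # this dose pair brackets 50% of control
            w = (r0 - target) / (r0 - r1)
            return math.exp(math.log(d0) + w * (math.log(d1) - math.log(d0)))
    return None

print(round(ed50(doses, rates), 2))
```

A rightward shift of the whole curve under chronic dosing would raise this ED50, which is how tolerance is quantified in the abstract.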
Key Generation for Fast Inversion of the Paillier Encryption Function
NASA Astrophysics Data System (ADS)
Hirano, Takato; Tanaka, Keisuke
We study fast inversion of the Paillier encryption function. Especially, we focus only on key generation, and do not modify the Paillier encryption function. We propose three key generation algorithms based on the speeding-up techniques for the RSA encryption function. By using our algorithms, the size of the private CRT exponent is half of that of Paillier-CRT. The first algorithm employs the extended Euclidean algorithm. The second algorithm employs factoring algorithms, and can construct the private CRT exponent with low Hamming weight. The third algorithm is a variant of the second one, and has some advantages, such as compression of the private CRT exponent and no requirement for factoring algorithms. We also propose the settings of the parameters for these algorithms and analyze the security of the Paillier encryption function by these algorithms against known attacks. Finally, we give experimental results of our algorithms.
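For orientation, the sketch below implements textbook Paillier encryption and decryption with toy primes and demonstrates its additive homomorphism; it does not implement the paper's three key-generation algorithms or its fast CRT decryption.

```python
import math
import random

# Textbook Paillier with toy parameters (real keys use ~1024-bit primes).
p, q = 293, 433
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)          # Carmichael function lambda(n)
g = n + 1                             # standard choice of generator

def L(u):
    """The Paillier L-function: L(u) = (u - 1) / n."""
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed modular inverse

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(41), encrypt(17)
# Additive homomorphism: the product of ciphertexts decrypts to the sum.
print(decrypt(c1), decrypt((c1 * c2) % n2))   # -> 41 58
```

Decryption here exponentiates modulo n^2 with the full exponent lambda; the speed-ups the paper studies come from replacing that exponent with a much shorter private CRT exponent chosen at key-generation time.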
Simultaneous classical communication and quantum key distribution using continuous variables*
NASA Astrophysics Data System (ADS)
Qi, Bing
2016-10-01
Presently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.
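The encoding idea can be illustrated with a toy one-quadrature simulation: a large binary displacement carries the classical bit, a small Gaussian modulation rides on top of it as raw key material, and a single receiver recovers both. The amplitudes and noise level below are illustrative, not the paper's system parameters.

```python
import random

random.seed(1)

# One quadrature per pulse: +/-A encodes the classical bit, a small
# Gaussian modulation encodes the QKD raw key (illustrative values).
A, qkd_sigma, noise_sigma = 20.0, 1.0, 0.3

bits = [random.randint(0, 1) for _ in range(5000)]
key = [random.gauss(0, qkd_sigma) for _ in range(5000)]

received = [(A if b else -A) + k + random.gauss(0, noise_sigma)
            for b, k in zip(bits, key)]

# Coherent receiver: the sign recovers the bit, the residual after
# subtracting the detected displacement recovers the key modulation.
decoded_bits = [1 if r > 0 else 0 for r in received]
residuals = [r - (A if b else -A) for r, b in zip(received, decoded_bits)]

errors = sum(b != d for b, d in zip(bits, decoded_bits))
corr = (sum(k * s for k, s in zip(key, residuals)) /
        (sum(k * k for k in key) * sum(s * s for s in residuals)) ** 0.5)
print(errors, round(corr, 2))
```

Because the displacement is much larger than the Gaussian modulation plus noise, the classical bits decode essentially error-free while the residuals stay strongly correlated with the transmitted key values.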
Capsule implosion optimization during the indirect-drive National Ignition Campaign
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landen, O. L.; Edwards, J.; Haan, S. W.
2011-05-15
Capsule performance optimization campaigns will be conducted at the National Ignition Facility [G. H. Miller, E. I. Moses, and C. R. Wuest, Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition. The campaigns will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models using a variety of ignition capsule surrogates before proceeding to cryogenic-layered implosions and ignition experiments. The quantitative goals and technique options and down selections for the tuning campaigns are first explained. The computationally derived sensitivities to key laser and target parameters are compared to simple analytic models to gain further insight into the physics of the tuning techniques. The results of the validation of the tuning techniques at the OMEGA facility [J. M. Soures et al., Phys. Plasmas 3, 2108 (1996)] under scaled hohlraum and capsule conditions relevant to the ignition design are shown to meet the required sensitivity and accuracy. A roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget. Finally, we show how the tuning precision will be improved after a number of shots and iterations to meet an acceptable level of residual uncertainty.
Functional phases and angular momentum characteristics of Tkatchev and Kovacs.
Irwin, Gareth; Exell, Timothy A; Manning, Michelle L; Kerwin, David G
2017-03-01
Understanding the technical requirements and underlying biomechanics of complex release and re-grasp skills on high bar allows coaches and scientists to develop safe and effective training programmes. The aim of this study was to examine the differences in the functional phases between the Tkatchev and Kovacs skills and to explain how the angular momentum demands are addressed. Images of 18 gymnasts performing 10 Tkatchevs and 8 Kovacs at the Olympic Games were recorded (50 Hz), digitised and reconstructed (3D Direct Linear Transformation). Orientation of the functional phase action, defined by the rapid flexion to extension of the shoulders and extension to flexion of the hips as the performer passed through the lower vertical, along with shoulder and hip angular kinematics, angular momentum and key release parameters (body angle, mass centre velocity and angular momentum about the mass centre and bar) were compared between skills. Expected differences in the release parameters of angle, angular momentum and velocity were observed and the specific mechanical requirement of each skill were highlighted. Whilst there were no differences in joint kinematics, hip and shoulder functional phase were significantly earlier in the circle for the Tkatchev. These findings highlight the importance of the orientation of the functional phase in the preceding giant swing and provide coaches with further understanding of the critical timing in this key phase.
Xiong, Jianyin; Huang, Shaodan; Zhang, Yinping
2012-01-01
The diffusion coefficient (D_m) and material/air partition coefficient (K) are two key parameters characterizing the formaldehyde and volatile organic compounds (VOC) sorption behavior in building materials. By virtue of the sorption process in an airtight chamber, this paper proposes a novel method to measure the two key parameters, as well as the convective mass transfer coefficient (h_m). Compared to traditional methods, it has the following merits: (1) the K, D_m and h_m can be simultaneously obtained, thus is convenient to use; (2) it is time-saving, just one sorption process in an airtight chamber is required; (3) the determination of h_m is based on the formaldehyde and VOC concentration data in the test chamber rather than the generally used empirical correlations obtained from the heat and mass transfer analogy, thus is more accurate and can be regarded as a significant improvement. The present method is applied to measure the three parameters by treating the experimental data in the literature, and good results are obtained, which validates the effectiveness of the method. Our new method also provides a potential pathway for measuring h_m of semi-volatile organic compounds (SVOC) by using that of VOC. PMID:23145156
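A heavily simplified sketch of the parameter-estimation idea, assuming a lumped one-layer material model rather than the paper's diffusion model (so D_m is omitted): simulate the decay of the chamber air concentration and recover h_m and K by fitting the simulated curve to the "observed" one. All numbers are illustrative.

```python
# Lumped airtight-chamber sorption model: chamber air at concentration C
# exchanges with the material surface (concentration Cs) through a
# convective film with coefficient h_m; K is the partition coefficient.
def simulate(hm, K, steps=2000, dt=10.0):
    A_over_V, L = 1.0, 0.01          # exposed area / chamber volume (1/m), thickness (m)
    C, Cs = 1.0, 0.0                 # initial air / material concentrations
    trace = []
    for _ in range(steps):
        flux = hm * (C - Cs / K)     # convective flux into the material
        C += -A_over_V * flux * dt
        Cs += flux / L * dt
        trace.append(C)
    return trace

observed = simulate(hm=2e-4, K=500.0)   # stand-in for measured chamber data

# Grid search: which (h_m, K) pair best reproduces the observed decay?
best = min(((hm, K) for hm in (1e-4, 2e-4, 4e-4)
            for K in (250.0, 500.0, 1000.0)),
           key=lambda pair: sum((a - b) ** 2
                                for a, b in zip(simulate(*pair), observed)))
print(best)
```

The paper's method is richer (it resolves in-material diffusion and so also yields D_m), but the same inverse-fitting logic applies: one chamber decay curve constrains all the sorption parameters simultaneously.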
Taghipoor, Masoomeh; van Milgen, Jaap; Gondret, Florence
2016-09-07
Variations in energy storage and expenditure are key elements for animal adaptation to rapidly changing environments. Because of the multiplicity of metabolic pathways, metabolic crossroads and interactions between anabolic and catabolic processes within and between different cells, the flexibility of energy stores in animal cells is difficult to describe by simple verbal, textual or graphic terms. We propose a mathematical model to study the influence of internal and external challenges on the dynamic behavior of energy stores and its consequence on cell energy status. The role of the flexibility of energy stores on the energy equilibrium at the cellular level is illustrated through three case studies: variation in eating frequency (i.e., glucose input), level of physical activity (i.e., ATP requirement), and changes in cell characteristics (i.e., maximum capacity of glycogen storage). Sensitivity analysis has been performed to highlight the most relevant parameters of the model; model simulations have then been performed to illustrate how variation in these key parameters affects cellular energy balance. According to this analysis, glycogen maximum accumulation capacity and homeostatic energy demand are among the most important parameters regulating muscle cell metabolism to ensure its energy equilibrium. Copyright © 2016 Elsevier Ltd. All rights reserved.
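As a minimal, hypothetical illustration of the flexibility-of-stores idea (not the paper's model): a single saturating store buffering pulsed intake against a constant demand already reproduces, qualitatively, the interplay of eating frequency and maximum storage capacity. All parameter values are invented.

```python
# Toy energy-store model: store G buffers pulsed intake against a constant
# hourly demand, with a hard maximum capacity Gmax. Input that arrives
# when the store is full is lost; demand unmet when the store is empty
# accumulates as a deficit.
def run(meals_per_day, Gmax=100.0, demand=5.0, hours=48):
    G, deficit = 50.0, 0.0
    interval = 24 // meals_per_day
    for h in range(hours):
        intake = demand * 24 / meals_per_day if h % interval == 0 else 0.0
        G = min(G + intake, Gmax)        # storage saturates at Gmax
        if G >= demand:
            G -= demand                   # demand met from the store
        else:
            deficit += demand - G         # store exhausted: energy deficit
            G = 0.0
    return deficit

# Same daily intake, different frequency: frequent small meals avoid both
# saturation losses and the late-interval deficit.
print(run(meals_per_day=1), run(meals_per_day=4))
```

This is exactly the kind of behavior the abstract attributes to glycogen capacity and homeostatic demand: the two parameters `Gmax` and `demand` jointly decide whether the cell's energy balance survives a given input pattern.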
Emergency medical services key performance measurement in Asian cities.
Rahman, Nik Hisamuddin; Tanaka, Hideharu; Shin, Sang Do; Ng, Yih Yng; Piyasuwankul, Thammapad; Lin, Chih-Hao; Ong, Marcus Eng Hock
2015-01-01
One of the key principles in the recommended standards is that emergency medical service (EMS) providers should continuously monitor the quality and safety of their services. This requires service providers to implement performance monitoring using appropriate and relevant measures including key performance indicators. In Asia, EMS systems are at different developmental phases and maturity. This will create difficulty in benchmarking or assessing the quality of EMS performance across the region. An attempt was made to compare the EMS performance index based on the structure, process, and outcome analysis. The data was collected from the Pan-Asian Resuscitation Outcome Study (PAROS) data among a few Asian cities, namely, Tokyo, Osaka, Singapore, Bangkok, Kuala Lumpur, Taipei, and Seoul. The parameters of inclusions were broadly divided into structure, process, and outcome measurements. The data was collected by the site investigators from each city and keyed into the electronic web-based data form which is secured strictly by username and passwords. Generally, there seems to be more uniformity for EMS performance parameters among the more developed EMS systems. The major problem with the EMS agencies in the cities of developing countries like Bangkok and Kuala Lumpur is inadequate or unavailable data pertaining to EMS performance. There is non-uniformity in the EMS performance measurement across the Asian cities. This creates difficulty for EMS performance index comparison and benchmarking. Hopefully, in the future, collaborative efforts such as the PAROS networking group will further enhance the standardization in EMS performance reporting across the region.
Karayanidis, Frini; Jamadar, Sharna; Ruge, Hannes; Phillips, Natalie; Heathcote, Andrew; Forstmann, Birte U.
2010-01-01
Recent research has taken advantage of the temporal and spatial resolution of event-related brain potentials (ERPs) and functional magnetic resonance imaging (fMRI) to identify the time course and neural circuitry of preparatory processes required to switch between different tasks. Here we overview some key findings contributing to understanding strategic processes in advance preparation. Findings from these methodologies are compatible with advance preparation conceptualized as a set of processes activated for both switch and repeat trials, but with substantial variability as a function of individual differences and task requirements. We then highlight new approaches that attempt to capitalize on this variability to link behavior and brain activation patterns. One approach examines correlations among behavioral, ERP and fMRI measures. A second “model-based” approach accounts for differences in preparatory processes by estimating quantitative model parameters that reflect latent psychological processes. We argue that integration of behavioral and neuroscientific methodologies is key to understanding the complex nature of advance preparation in task-switching. PMID:21833196
Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.
Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis
2008-10-01
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
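Interpreting the truncated constraint as requiring the effective reproduction number to fall below 1 with high probability, a sample-average sketch of such a chance constraint looks like the following (a toy stand-in, not the paper's stochastic program; the R0 distribution and threshold are invented).

```python
import random

random.seed(2)

# Chance constraint, sample-average form: find the cheapest vaccination
# coverage f such that R0 * (1 - f) < 1 holds with probability >= alpha
# when R0 itself is uncertain. Cost is taken as proportional to f.
alpha = 0.95
r0_samples = [random.uniform(1.2, 2.5) for _ in range(10000)]

def feasible(f):
    hits = sum(1 for r0 in r0_samples if r0 * (1 - f) < 1.0)
    return hits / len(r0_samples) >= alpha

# Feasibility is monotone in f, so bisection finds the minimum coverage.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    if feasible(mid):
        hi = mid
    else:
        lo = mid
print(round(hi, 3))
```

Real stochastic programs of this kind also allocate coverage across age or contact groups, which turns the one-dimensional bisection above into a mixed-integer optimization; the chance constraint itself is handled the same way, by enforcing the condition on a sufficient fraction of sampled scenarios.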
2013-06-01
Using a Functional Simulation of Crisis Management to Test the C2 Agility Model Parameters on Key Performance Variables (18th ICCRTS). Agility can be conceptualized at a number of different levels, for instance at the team level.
National Unmanned Aerial System Standardized Performance Testing and Rating (NUSTAR)
NASA Technical Reports Server (NTRS)
Kopardekar, Parimal
2016-01-01
The overall objective of the NUSTAR Capability is to offer standardized tests and scenario conditions to assess performance of the UAS. The following are goals of the NUSTAR: 1. Create prototype standardized tests and scenarios that vehicles can be tested against. 2. Identify key performance parameters of all UAS and their standardized measurement strategy. 3. Develop a standardized performance reporting method (e.g., consumer report style) to assist prospective buyers. 4. Identify key performance metrics that could be used to judge the overall safety of the UAS and operations. 5. If a vehicle certification standard is made by a regulatory agency, the performance of individual UAS could be compared against the minimum requirement (e.g., sense and avoid detection time, stopping distance, kinetic energy, etc.).
Comparative Model Evaluation Studies of Biogenic Trace Gas Fluxes in Tropical Forests
NASA Technical Reports Server (NTRS)
Potter, C. S.; Peterson, David L. (Technical Monitor)
1997-01-01
Simulation modeling can play a number of important roles in large-scale ecosystem studies, including synthesis of patterns and changes in carbon and nutrient cycling dynamics, scaling up to regional estimates, and formulation of testable hypotheses for process studies. Recent comparative studies have shown that ecosystem models of soil trace gas exchange with the atmosphere are evolving into several distinct simulation approaches. Different levels of detail exist among process models in the treatment of physical controls on ecosystem nutrient fluxes and organic substrate transformations leading to gas emissions. These differences arise in part from distinct objectives of scaling and extrapolation. Parameter requirements for initialization, scaling, boundary conditions, and time-series drivers therefore vary among ecosystem simulation models, such that the design of field experiments for integration with modeling should consider a consolidated series of measurements that will satisfy most of the various model requirements. For example, variables that provide information on soil moisture holding capacity, moisture retention characteristics, potential evapotranspiration and drainage rates, and rooting depth appear to be of the first order in model evaluation trials for tropical moist forest ecosystems. The amount and nutrient content of labile organic matter in the soil, based on accurate plant production estimates, are also key parameters that determine emission model response. Based on comparative model results, it is possible to construct a preliminary evaluation matrix along categories of key diagnostic parameters and temporal domains. Nevertheless, as large-scale studies are planned, it is notable that few existing models are designed to simulate transient states of ecosystem change, a feature which will be essential for assessment of anthropogenic disturbance on regional gas budgets, and effects of long-term climate variability on biosphere-atmosphere exchange.
A suite of diagnostics to validate and optimize the prototype ITER neutral beam injector
NASA Astrophysics Data System (ADS)
Pasqualotto, R.; Agostini, M.; Barbisan, M.; Brombin, M.; Cavazzana, R.; Croci, G.; Dalla Palma, M.; Delogu, R. S.; De Muri, M.; Muraro, A.; Peruzzo, S.; Pimazzoni, A.; Pomaro, N.; Rebai, M.; Rizzolo, A.; Sartori, E.; Serianni, G.; Spagnolo, S.; Spolaore, M.; Tardocchi, M.; Zaniol, B.; Zaupa, M.
2017-10-01
The ITER project requires additional heating provided by two neutral beam injectors using 40 A negative deuterium ions accelerated at 1 MV. As the beam requirements have never been experimentally met, a test facility is under construction at Consorzio RFX, which hosts two experiments: SPIDER, full-size 100 kV ion source prototype, and MITICA, 1 MeV full-size ITER injector prototype. Since diagnostics in ITER injectors will be mainly limited to thermocouples, due to neutron and gamma radiation and to limited access, it is crucial to thoroughly investigate and characterize in more accessible experiments the key parameters of source plasma and beam, using several complementary diagnostics assisted by modelling. In SPIDER and MITICA the ion source parameters will be measured by optical emission spectroscopy, electrostatic probes, cavity ring down spectroscopy for H^- density and laser absorption spectroscopy for cesium density. Measurements over multiple lines-of-sight will provide the spatial distribution of the parameters over the source extension. The beam profile uniformity and its divergence are studied with beam emission spectroscopy, complemented by visible tomography and neutron imaging, which are novel techniques, while an instrumented calorimeter based on custom unidirectional carbon fiber composite tiles observed by infrared cameras will measure the beam footprint on short pulses with the highest spatial resolution. All heated components will be monitored with thermocouples: as these will likely be the only measurements available in ITER injectors, their capabilities will be investigated by comparison with other techniques. SPIDER and MITICA diagnostics are described in the present paper with a focus on their rationale, key solutions and most original and effective implementations.
NASA Astrophysics Data System (ADS)
Fu, Jinglin; Yang, Yuhe Renee; Johnson-Buck, Alexander; Liu, Minghui; Liu, Yan; Walter, Nils G.; Woodbury, Neal W.; Yan, Hao
2014-07-01
Swinging arms are a key functional component of multistep catalytic transformations in many naturally occurring multi-enzyme complexes. This arm is typically a prosthetic chemical group that is covalently attached to the enzyme complex via a flexible linker, allowing the direct transfer of substrate molecules between multiple active sites within the complex. Mimicking this method of substrate channelling outside the cellular environment requires precise control over the spatial parameters of the individual components within the assembled complex. DNA nanostructures can be used to organize functional molecules with nanoscale precision and can also provide nanomechanical control. Until now, protein-DNA assemblies have been used to organize cascades of enzymatic reactions by controlling the relative distance and orientation of enzymatic components or by facilitating the interface between enzymes/cofactors and electrode surfaces. Here, we show that a DNA nanostructure can be used to create a multi-enzyme complex in which an artificial swinging arm facilitates hydride transfer between two coupled dehydrogenases. By exploiting the programmability of DNA nanostructures, key parameters including position, stoichiometry and inter-enzyme distance can be manipulated for optimal activity.
NASA Astrophysics Data System (ADS)
Wiegart, L.; Rakitin, M.; Fluerasu, A.; Chubar, O.
2017-08-01
We present the application of fully- and partially-coherent synchrotron radiation wavefront propagation simulation functions, implemented in the "Synchrotron Radiation Workshop" computer code, to create a `virtual beamline' mimicking the Coherent Hard X-ray scattering beamline at NSLS-II. The beamline simulation includes all optical beamline components, such as the insertion device, a mirror with metrology data, slits, a double-crystal monochromator and refractive focusing elements (compound refractive lenses and kinoform lenses). A feature of this beamline is its exploitation of X-ray beam coherence, boosted by the low-emittance NSLS-II storage ring, for techniques such as X-ray Photon Correlation Spectroscopy or Coherent Diffraction Imaging. The key performance parameters are the degree of X-ray beam coherence and the photon flux, and the trade-off between them must guide the beamline settings for specific experimental requirements. Simulations of key performance parameters are compared to measurements obtained during beamline commissioning, and include the spectral flux of the undulator source, the degree of transverse coherence and the focal spot sizes.
Salzwedel, Annett; Nosper, Manfred; Röhrig, Bernd; Linck-Eleftheriadis, Sigrid; Strandt, Gert; Völler, Heinz
2014-02-01
Outcome quality management requires the consecutive registration of defined variables. The aim was to identify relevant parameters in order to objectively assess the in-patient rehabilitation outcome. From February 2009 to June 2010 1253 patients (70.9 ± 7.0 years, 78.1% men) at 12 rehabilitation clinics were enrolled. Items concerning sociodemographic data, the impairment group (surgery, conservative/interventional treatment), cardiovascular risk factors, structural and functional parameters and subjective health were tested in respect of their measurability, sensitivity to change and their propensity to be influenced by rehabilitation. The majority of patients (61.1%) were referred for rehabilitation after cardiac surgery, 38.9% after conservative or interventional treatment for an acute coronary syndrome. Functionally relevant comorbidities were seen in 49.2% (diabetes mellitus, stroke, peripheral artery disease, chronic obstructive lung disease). In three key areas 13 parameters were identified as being sensitive to change and subject to modification by rehabilitation: cardiovascular risk factors (blood pressure, low-density lipoprotein cholesterol, triglycerides), exercise capacity (resting heart rate, maximal exercise capacity, maximal walking distance, heart failure, angina pectoris) and subjective health (IRES-24 (indicators of rehabilitation status): pain, somatic health, psychological well-being and depression as well as anxiety on the Hospital Anxiety and Depression Scale). The outcome of in-patient rehabilitation in elderly patients can be comprehensively assessed by the identification of appropriate key areas, that is, cardiovascular risk factors, exercise capacity and subjective health. This may well serve as a benchmark for internal and external quality management.
Landsat-5 bumper-mode geometric correction
Storey, James C.; Choate, Michael J.
2004-01-01
The Landsat-5 Thematic Mapper (TM) scan mirror was switched from its primary operating mode to a backup mode in early 2002 in order to overcome internal synchronization problems arising from long-term wear of the scan mirror mechanism. The backup bumper mode of operation removes the constraints on scan start and stop angles enforced in the primary scan angle monitor operating mode, requiring additional geometric calibration effort to monitor the active scan angles. It also eliminates scan timing telemetry used to correct the TM scan geometry. These differences require changes to the geometric correction algorithms used to process TM data. A mathematical model of the scan mirror's behavior when operating in bumper mode was developed. This model includes a set of key timing parameters that characterize the time-varying behavior of the scan mirror bumpers. To simplify the implementation of the bumper-mode model, the bumper timing parameters were recast in terms of the calibration and telemetry data items used to process normal TM imagery. The resulting geometric performance, evaluated over 18 months of bumper-mode operations, though slightly reduced from that achievable in the primary operating mode, is still within the Landsat specifications when the data are processed with the most up-to-date calibration parameters.
Perceiving while producing: Modeling the dynamics of phonological planning
Roon, Kevin D.; Gafos, Adamantios I.
2016-01-01
We offer a dynamical model of phonological planning that provides a formal instantiation of how the speech production and perception systems interact during online processing. The model is developed on the basis of evidence from an experimental task that requires concurrent use of both systems, the so-called response-distractor task in which speakers hear distractor syllables while they are preparing to produce required responses. The model formalizes how ongoing response planning is affected by perception and accounts for a range of results reported across previous studies. It does so by explicitly addressing the setting of parameter values in representations. The key unit of the model is that of the dynamic field, a distribution of activation over the range of values associated with each representational parameter. The setting of parameter values takes place by the attainment of a stable distribution of activation over the entire field, stable in the sense that it persists even after the response cue in the above experiments has been removed. This and other properties of representations that have been taken as axiomatic in previous work are derived by the dynamics of the proposed model. PMID:27440947
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
The Mira-Titan universe: precision predictions for dark energy surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Bingham, Derek; Lawrence, Earl
2016-03-28
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
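The emulation idea underlying this abstract (run the expensive simulation at a modest number of sampled parameter points, then fit a fast surrogate that predicts the observable elsewhere) can be illustrated with a minimal sketch. The two-parameter toy "observable" and the quadratic surrogate below are hypothetical stand-ins, not the paper's actual emulator design or parameter ranges.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulation(theta):
    # Stand-in for an expensive simulation run: a smooth scalar observable
    # of two parameters (loosely labeled w0 and Omega_m; values are illustrative).
    w0, om = theta[..., 0], theta[..., 1]
    return np.exp(-0.5 * w0**2) * (1.0 + om) ** 1.5

# Sample 26 "cosmologies" in a small 2-D parameter box (nod to the 26
# models in the abstract; the box itself is hypothetical).
theta_train = rng.uniform([-1.2, 0.2], [-0.8, 0.4], size=(26, 2))
y_train = simulation(theta_train)

def design(theta):
    # Quadratic polynomial features: 1, w0, om, w0^2, w0*om, om^2.
    w0, om = theta[:, 0], theta[:, 1]
    return np.column_stack([np.ones_like(w0), w0, om, w0**2, w0 * om, om**2])

# Fit the surrogate once; evaluating it is then essentially free.
coef, *_ = np.linalg.lstsq(design(theta_train), y_train, rcond=None)

def emulate(theta):
    return design(theta) @ coef

# Held-out point: the emulator should closely track the true simulation.
theta_test = np.array([[-1.0, 0.3]])
err = abs(emulate(theta_test)[0] - simulation(theta_test)[0])
print(err < 1e-2)
```

Adding further training points in a prescribed way, as the abstract describes, would systematically shrink this residual error.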
Percha, Bethany; Newman, M. E. J.; Foxman, Betsy
2012-01-01
Group B Streptococcus (GBS) remains a major cause of neonatal sepsis and is an emerging cause of invasive bacterial infections. The 9 known serotypes vary in virulence, and there is little cross-immunity. Key parameters for planning an effective vaccination strategy, such as average length of immunity and transmission probabilities by serotype, are unknown. We simulated GBS spread in a population using a computational model with parameters derived from studies of GBS sexual transmission in a college dormitory. Here we provide estimates of the duration of immunity relative to the transmission probabilities for the 3 GBS serotypes most associated with invasive disease: Ia, III, and V. We also place upper limits on the durations of immunity for serotype Ia (570 days), III (1125 days) and V (260 days). Better transmission estimates are required to establish the epidemiological parameters of GBS infection and determine the best vaccination strategies to prevent GBS disease. PMID:21605704
Evaluation of performance of select fusion experiments and projected reactors
NASA Technical Reports Server (NTRS)
Miley, G. H.
1978-01-01
The performance of NASA Lewis fusion experiments (SUMMA and Bumpy Torus) is compared with other experiments and that necessary for a power reactor. Key parameters cited are gain (fusion power/input power) and the time average fusion power, both of which may be more significant for real fusion reactors than the commonly used Lawson parameter. The NASA devices are over 10 orders of magnitude below the required powerplant values in both gain and time average power. The best experiments elsewhere are also as much as 4 to 5 orders of magnitude low. However, the NASA experiments compare favorably with other alternate approaches that have received less funding than the mainline experiments. The steady-state character and efficiency of plasma heating are strong advantages of the NASA approach. The problem, though, is to move ahead to experiments of sufficient size to advance in gain and average power parameters.
Removable polytetrafluoroethylene template based epitaxy of ferroelectric copolymer thin films
NASA Astrophysics Data System (ADS)
Xia, Wei; Chen, Qiusong; Zhang, Jian; Wang, Hui; Cheng, Qian; Jiang, Yulong; Zhu, Guodong
2018-04-01
In recent years ferroelectric polymers have shown great potential in organic and flexible electronics. To meet the requirements of high performance and low energy consumption in novel electronic devices and systems, the structural and electrical properties of ferroelectric polymer thin films need to be further optimized. One possible route is epitaxial growth of ferroelectric thin films via removable, highly ordered polytetrafluoroethylene (PTFE) templates. Here two key parameters of the epitaxy process, annealing temperature and applied pressure, are systematically studied and optimized through structural and electrical measurements of ferroelectric copolymer thin films. Experimental results indicate that controlled epitaxial growth is achieved with a suitable combination of both parameters: the annealing temperature must exceed the melting point of the ferroelectric copolymer films, and a moderate pressure (around 2.0 MPa here) should be applied simultaneously. Too low a pressure (around 1.0 MPa here) usually causes the epitaxy process to fail, while too high a pressure (around 3.0 MPa here) often leaves PTFE template residue on the ferroelectric thin films.
NASA Astrophysics Data System (ADS)
Nevinitsa, V. A.; Dudnikov, A. A.; Blandinskiy, V. Yu.; Balanin, A. L.; Alekseev, P. N.; Titarenko, Yu. E.; Batyaev, V. F.; Pavlov, K. V.; Titarenko, A. Yu.
2015-12-01
A subcritical molten salt reactor with an external neutron source is studied computationally as a facility for incineration and transmutation of minor actinides from spent nuclear fuel of reactors of VVER-1000 type and for producing 233U from 232Th. The reactor configuration is chosen, the requirements to be imposed on the external neutron source are formulated, and the equilibrium isotopic composition of heavy nuclides and the key parameters of the fuel cycle are calculated.
MODELING MICROBUBBLE DYNAMICS IN BIOMEDICAL APPLICATIONS*
CHAHINE, Georges L.; HSIAO, Chao-Tsung
2012-01-01
Controlling microbubble dynamics to produce desirable biomedical outcomes when and where necessary, while avoiding deleterious effects, requires advanced knowledge that can be achieved only through a combination of experimental and numerical/analytical techniques. The present communication presents a multi-physics approach to studying these dynamics, combining viscous-inviscid effects, liquid and structure dynamics, and multi-bubble interaction. While complex numerical tools are developed and used, the study aims at identifying the key parameters influencing the dynamics, which need to be included in simpler models. PMID:22833696
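One of the simplest "key parameters" in any reduced microbubble model is the bubble's natural (Minnaert) resonance frequency, which follows from linearizing the classic bubble dynamics equations. The sketch below uses the standard Minnaert formula with textbook water/air property values; it is an illustration of the kind of reduced model the abstract alludes to, not the authors' multi-physics code.

```python
import math

def minnaert_frequency(r0, p0=101325.0, rho=998.0, gamma=1.4):
    """Natural (Minnaert) resonance frequency of a spherical gas bubble.

    r0    : equilibrium bubble radius [m]
    p0    : ambient liquid pressure [Pa]
    rho   : liquid density [kg/m^3]
    gamma : polytropic exponent of the gas (adiabatic air ~ 1.4)
    """
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * r0)

# A 1-micron-radius bubble in water resonates at a few MHz, which is why
# microbubbles couple so strongly to biomedical ultrasound frequencies.
f0 = minnaert_frequency(1e-6)
print(f"{f0 / 1e6:.2f} MHz")
```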
Lange, Marcos C; Braga, Gabriel Pereira; Nóvak, Edison M; Harger, Rodrigo; Felippe, Maria Justina Dalla Bernardina; Canever, Mariana; Dall'Asta, Isabella; Rauen, Jordana; Bazan, Rodrigo; Zetola, Viviane
2017-06-01
All 16 KPIs were analyzed, including the percentage of patients admitted to the stroke unit, venous thromboembolism prophylaxis in the first 48 hours after admission, pneumonia and hospital mortality due to stroke, and hospital discharge on antithrombotic therapy in patients without cardioembolic mechanism. Both centers admitted over 80% of the patients in their stroke unit. The incidence of venous thromboembolism prophylaxis was > 85%, that of in-hospital pneumonia was < 13%, the hospital mortality for stroke was < 15%, and the hospital discharge on antithrombotic therapy was > 70%. Our results suggest using the parameters of all of the 16 KPIs required by the Ministry of Health of Brazil, and the present results for the two stroke units for future benchmarking.
Land use investigations in the central valley and central coastal test sites, California
NASA Technical Reports Server (NTRS)
Estes, J. E.
1973-01-01
The Geography Remote Sensing Unit (GRSU) at the University of California, Santa Barbara is responsible for investigations with ERTS-1 data in the Central Coastal Zone and West Side of the San Joaquin Valley. The nature of investigative effort involves the inventory, monitoring, and assessment of the natural and cultural resources of the two areas. Land use, agriculture, vegetation, landforms, geology, and hydrology are the principal subjects for attention. These parameters are the key indicators of the dynamically changing character of the areas. Monitoring of these parameters with ERTS-1 data will provide the techniques and methodologies required to generate the information needed by federal, state, county, and local agencies to assess change-related phenomena and plan for management and development.
Capsule Performance Optimization in the National Ignition Campaign
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landen, O L; MacGowan, B J; Haan, S W
2009-10-13
A capsule performance optimization campaign will be conducted at the National Ignition Facility to substantially increase the probability of ignition. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
Capsule performance optimization in the national ignition campaign
NASA Astrophysics Data System (ADS)
Landen, O. L.; MacGowan, B. J.; Haan, S. W.; Edwards, J.
2010-08-01
A capsule performance optimization campaign will be conducted at the National Ignition Facility [1] to substantially increase the probability of ignition. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
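The "roll-up" of independent error sources mentioned in both versions of this abstract is conventionally a root-sum-square combination compared against an allowed budget. The sketch below illustrates that arithmetic only; the contribution names mirror the abstract, but every numerical value is hypothetical, not an actual NIF error budget.

```python
import math

# Hypothetical 1-sigma contributions (in percent) from the error sources
# named in the abstract; illustrative numbers only.
contributions = {
    "measurement": 1.0,
    "calibration": 1.5,
    "cross-coupling": 0.8,
    "surrogacy": 2.0,
    "scale-up": 1.2,
}

# Independent random errors combine in quadrature (root-sum-square).
total = math.sqrt(sum(v**2 for v in contributions.values()))
budget = 4.0  # hypothetical allowed total, percent

print(f"rolled-up uncertainty: {total:.2f}% (budget {budget:.1f}%)")
print(total <= budget)
```

A systematic (fully correlated) error would instead add linearly, which is why the abstract distinguishes random from systematic terms before rolling them up.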
NASA Astrophysics Data System (ADS)
Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Cao, Feng; Liang, Jimin; Tian, Jie
2014-12-01
To address the multicollinearity issue and the unequal contributions of vascular parameters in the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemia model mice. Taking vascular volume as the ground-truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of the vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junction, connectivity density, segment number and vascular length, indicating that these are the key vascular parameters for the quantification of angiogenesis. The proposed evaluation method was compared with both ABTs applied directly to the nine vascular parameters and Pearson correlation, and the results were consistent. In contrast to ABTs applied directly to the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters were assembled into the sparse PCs with the highest relative importance.
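The core problem the abstract tackles, multicollinearity among measured parameters, can be made concrete with a small sketch: strongly correlated features yield an ill-conditioned correlation matrix, while principal-component scores are mutually uncorrelated inputs for a downstream model. Plain PCA is used below as a simplified stand-in for the paper's sparse PCA, and the synthetic data are hypothetical, not micro-CT measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for nine correlated "vascular parameters": nine
# observed columns driven by only three underlying factors plus noise.
n, p = 200, 9
latent = rng.normal(size=(n, 3))
X = latent @ rng.normal(size=(3, p)) + 0.01 * rng.normal(size=(n, p))

# Severe multicollinearity: the feature correlation matrix is
# ill-conditioned, which destabilizes per-parameter regression weights.
Xc = X - X.mean(axis=0)
cond_raw = np.linalg.cond(np.corrcoef(Xc.T))

# Assemble the parameters into principal components: the component scores
# are mutually uncorrelated, so downstream models see orthogonal inputs.
eigval, eigvec = np.linalg.eigh(np.cov(Xc.T))
top3 = eigvec[:, np.argsort(eigval)[::-1][:3]]
scores = Xc @ top3
max_offdiag = np.max(np.abs(np.corrcoef(scores.T) - np.eye(3)))

print(cond_raw > 1e3)      # raw parameters: badly conditioned
print(max_offdiag < 1e-8)  # PC scores: effectively uncorrelated
```

The paper's sparse PCA additionally forces many loadings to zero, so each component remains interpretable in terms of a few named vascular parameters.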
Tracer SWIW tests in propped and un-propped fractures: parameter sensitivity issues, revisited
NASA Astrophysics Data System (ADS)
Ghergut, Julia; Behrens, Horst; Sauter, Martin
2017-04-01
Single-well injection-withdrawal (SWIW) or 'push-then-pull' tracer methods appear attractive for a number of reasons: less uncertainty on design and dimensioning, and lower tracer quantities required than for inter-well tests; stronger tracer signals, enabling easier and cheaper metering, and shorter metering duration required, reaching higher tracer mass recovery than in inter-well tests; and, last but not least, no need for a second well. However, SWIW tracer signal inversion faces a major issue: the 'push-then-pull' design weakens the correlation between tracer residence times and georeservoir transport parameters, inducing insensitivity or ambiguity of tracer signal inversion with respect to some of those georeservoir parameters that are supposed to be the target of tracer tests par excellence: pore velocity, transport-effective porosity, fracture or fissure aperture and spacing or density (where applicable), and fluid/solid or fluid/fluid phase interface density. Hydraulic methods cannot measure the transport-effective values of such parameters, because pressure signals correlate neither with fluid motion nor with material fluxes through (fluid-rock or fluid-fluid) phase interfaces. The notorious ambiguity impeding parameter inversion from SWIW test signals has nourished several 'modeling attitudes': (i) regard dispersion as the key process encompassing whatever superposition of underlying transport phenomena, and seek a statistical description of flow-path collectives enabling dispersion to be characterized independently of any other transport parameter, as proposed by Gouze et al. (2008), with Hansen et al. (2016) offering a comprehensive analysis of the various ways dispersion model assumptions interfere with parameter inversion from SWIW tests; (ii) regard diffusion as the key process, and seek a large-time, asymptotically advection-independent regime in the measured tracer signals (Haggerty et al. 2001), enabling a dispersion-independent characterization of multiple-scale diffusion; (iii) attempt to determine both advective and non-advective transport parameters from one and the same conservative-tracer signal (relying on 'third-party' knowledge), or from twin signals of a so-called 'dual' tracer pair, e.g. using tracers with contrasting reactivity and partitioning behavior to determine residual saturation in depleted oilfields (Tomich et al. 1973) or to determine advective parameters (Ghergut et al. 2014); using early-time signals of conservative and sorptive tracers for propped-fracture characterization (Karmakar et al. 2015); or using mid-time signals of conservative tracers for reservoir-borne inflow profiling in multi-frac systems (Ghergut et al. 2016). The poster describes new uses of type-(iii) techniques for the specific purposes of shale-gas reservoir characterization, productivity monitoring, and diagnostics and engineering of 're-frac' treatments, based on parameter-sensitivity findings from the German BMWi research project "TRENDS" (Federal Ministry for Economic Affairs and Energy, FKZ 0325515) and from the EU H2020 project "FracRisk" (grant no. 640979).
NASA Astrophysics Data System (ADS)
Xu, Wenfu; Hu, Zhonghua; Zhang, Yu; Liang, Bin
2017-03-01
After being launched into space to perform tasks, the inertia parameters of a space robotic system may change due to fuel consumption, hardware reconfiguration, target capturing, and so on. For precision control and simulation, these parameters must be identified on orbit. This paper proposes an effective method for identifying the complete inertia parameters (including the mass, inertia tensor and center-of-mass position) of a space robotic system. The key to the method is to identify two types of simple dynamics systems: equivalent single-body and two-body systems. For the former, all of the joints are locked into a designed configuration and the thrusters are used for orbital maneuvering; the objective function for optimization is defined in terms of the acceleration and velocity of the equivalent single body. For the latter, only one joint is unlocked and driven to move along a planned (exciting) trajectory in free-floating mode, and the objective function is defined based on the linear and angular momentum equations. The parameter identification problems are then transformed into non-linear optimization problems, and the Particle Swarm Optimization (PSO) algorithm is applied to determine the optimal parameters, i.e. the complete dynamic parameters of the two equivalent systems. By sequentially unlocking the 1st to nth joints (or the nth to 1st joints), the mass properties of bodies 0 to n (or n to 0) are completely identified. The proposed method needs only simple dynamics equations for identification, and the excitation motion (orbit maneuvering and joint motion) is easily realized. Moreover, the method does not require prior knowledge of the mass properties of any body. It is general and practical for identifying a space robotic system on orbit.
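The optimization step described above can be sketched with a minimal particle swarm fitting two scalar parameters to momentum-style measurements. The single-axis toy dynamics, the measurement channels, and all numbers below are hypothetical; the point is only to show PSO recovering parameters by minimizing a momentum-residual objective.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the identification problem: recover an unknown mass m
# and first moment c from noiseless "measurements" p = m*v and h = c*v
# taken at known velocities along a single axis (hypothetical setup).
v = np.linspace(0.5, 2.0, 8)
m_true, c_true = 120.0, 36.0
p_meas, h_meas = m_true * v, c_true * v

def objective(theta):
    m, c = theta[..., 0], theta[..., 1]
    return (np.sum((m[..., None] * v - p_meas) ** 2, axis=-1)
            + np.sum((c[..., None] * v - h_meas) ** 2, axis=-1))

# Minimal PSO: inertia term plus attraction to personal and global bests.
n_particles, iters = 40, 200
lo, hi = np.array([50.0, 10.0]), np.array([200.0, 80.0])
x = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(x)
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + vel, lo, hi)
    f = objective(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print(np.round(gbest, 1))  # should approach the true values [120.0, 36.0]
```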
Ghunmi, Lina Abu; Zeeman, Grietje; van Lier, Jules; Fayyed, Manar
2008-01-01
The objective of this work is to assess the potentials and requirements for grey water reuse in Jordan. The results revealed that urban, rural and dormitory grey water production rates and concentrations of TS, BOD(5), COD and pathogens varied between 18-66 L cap(-1)d(-1), 848-1,919, 200-1,056, and 560-2,568 mg L(-1) and 6.9E2-2.7E5 CFU mL(-1), respectively. Grey water comprises 64 to 85% of the total water flow in the rural and urban areas. Storing grey water is inevitable to meet reuse requirements in terms of volume and timing. All the studied grey waters need treatment, in terms of solids, BOD(5), COD and pathogens, before storage and reuse. Storage and physical treatment as a pretreatment step should be avoided, since it produces unstable effluents and non-stabilized sludge. However, extensive biological treatment can combine storage and physical treatment. Furthermore, a batch-fed biological treatment system combining anaerobic and aerobic processes copes with the fluctuations in the hydrographs and pollutographs as well as with the nutrients present. The inorganic content of grey water in Jordan is about drinking water quality and does not need treatment. Moreover, the grey water SAR values were 3-7, revealing that the concentrations of monovalent and divalent cations comply with agricultural demand in Jordan. The observed patterns in the hydrographs and pollutographs showed that the hydraulic load could be used for the design of both physical and biological treatment units for dormitories and hotels. For family houses, the hydraulic load was identified as the key design parameter for physical treatment units and the organic load as the key design parameter for biological treatment units. Copyright IWA Publishing 2008.
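The SAR values of 3-7 cited above come from the standard sodium adsorption ratio formula, SAR = Na / sqrt((Ca + Mg) / 2), with all ion concentrations in milliequivalents per litre. The sketch below evaluates it for a hypothetical sample chosen to land inside the reported range; it is not data from the study.

```python
import math

def sar(na, ca, mg):
    """Sodium adsorption ratio; all inputs in meq/L."""
    return na / math.sqrt((ca + mg) / 2.0)

# Hypothetical grey-water sample: Na+ 8.0, Ca2+ 3.0, Mg2+ 2.0 meq/L.
value = sar(na=8.0, ca=3.0, mg=2.0)
print(round(value, 2))  # about 5.06, inside the reported 3-7 range
```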
Biomarkers for optimal requirements of amino acids by animals and humans.
Lin, Gang; Liu, Chuang; Wang, Taiji; Wu, Guoyao; Qiao, Shiyan; Li, Defa; Wang, Junjun
2011-06-01
Amino acids are building blocks of proteins and key regulators of nutrient metabolism in cells. However, excessive intake of amino acids can be toxic to the body. Therefore, it is important to precisely determine amino acid requirements by organisms. To date, none of the methods is completely satisfactory to generate comprehensive data on amino acid requirements of animals or humans. Because of many influencing factors, amino acid requirements remain a complex and controversial issue in nutrition that warrants further investigations. Benefiting from the rapid advances in the emerging omics technologies and bioinformatics, biomarker discovery shows great potential in obtaining in-depth understanding of regulatory networks in protein metabolism. This review summarizes the current approaches to assess amino acid requirements of animals and humans, as well as the recent development of biomarkers as potentially functional parameters for recommending requirements of individual amino acids in health and disease. Identification of biomarkers in plasma or serum, which is a noninvasive approach, holds great promise in rapidly advancing the field of protein nutrition.
Development of Airport Surface Required Navigation Performance (RNP)
NASA Technical Reports Server (NTRS)
Cassell, Rick; Smith, Alex; Hicok, Dan
1999-01-01
The U.S. and international aviation communities have adopted the Required Navigation Performance (RNP) process for defining aircraft performance when operating in the en-route, approach and landing phases of flight. RNP consists primarily of the following key parameters: accuracy, integrity, continuity, and availability. The processes and analytical techniques employed to define en-route, approach and landing RNP have been applied in the development of RNP for the airport surface. Several methods were used to validate the proposed RNP requirements. Operational and flight demonstration data were analyzed for conformance with the proposed requirements, as were several aircraft flight simulation studies. The pilot failure-risk component was analyzed through several hypothetical scenarios. Additional simulator studies are recommended to better quantify crew reactions to failures, along with further simulator and field testing to validate achieved accuracy performance. This research was performed in support of the NASA Low Visibility Landing and Surface Operations Programs.
Securing resource constraints embedded devices using elliptic curve cryptography
NASA Astrophysics Data System (ADS)
Tam, Tony; Alfasi, Mohamed; Mozumdar, Mohammad
2014-06-01
The use of smart embedded devices has been growing rapidly in recent years because of the miniaturization of sensors and platforms. Securing data from these embedded devices has now become one of the core challenges in both industry and the research community. Being embedded, these devices have tight constraints on resources such as power, computation and memory, so it is very difficult to implement traditional Public Key Cryptography (PKC) on them. Moreover, most public key security protocols require both the public and private keys to be generated together. In contrast, Identity Based Encryption (IBE), a public key cryptography protocol, allows a public key to be generated from an arbitrary string and the corresponding private key to be generated later on demand. While IBE has been actively studied and widely applied in cryptography research, conventional IBE primitives are computationally demanding and cannot be efficiently implemented on embedded systems. A simplified version of identity based encryption has proven robust while satisfying the tight budget of the embedded platform. In this paper, we describe the choice of several parameters for implementing lightweight IBE in resource-constrained embedded sensor nodes. Our implementation of IBE is built using elliptic curve cryptography (ECC).
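The ECC foundation mentioned above rests on the elliptic-curve group operation, which can be shown with a deliberately tiny toy curve. The curve, base point, and field below are illustrative only: real deployments use standardized curves (e.g. P-256 or Curve25519) through vetted libraries, never hand-rolled arithmetic.

```python
# Toy elliptic-curve arithmetic over GF(97) on y^2 = x^3 + 2x + 3,
# illustrating the group law underlying ECC (educational sketch only).
P, A, B = 97, 2, 3

def ec_add(p1, p2):
    if p1 is None:
        return p2              # None represents the point at infinity
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None            # inverse points sum to infinity
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    # Double-and-add scalar multiplication, the core ECC primitive.
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

G = (3, 6)           # on the curve: 6^2 = 3^3 + 2*3 + 3 (mod 97)
Q = ec_mul(13, G)
x, y = Q
print((y * y - (x**3 + A * x + B)) % P == 0)  # result stays on the curve
```

In lightweight IBE schemes, the computational budget is dominated by exactly this scalar multiplication, which is why curve and parameter choices matter so much on constrained nodes.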
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawton, Craig R.; Welch, Kimberly M.; Kerper, Jessica
2010-06-01
The Department of Defense's (DoD) Energy Posture identified the US military's dependence on fossil fuel energy as a key issue facing the military. Inefficient energy consumption increases costs and affects operational performance and warfighter protection through large and vulnerable logistics support infrastructures; the military's use of energy is a critical national security problem. The DoD's proposed metrics, the Fully Burdened Cost of Fuel and the Energy Efficiency Key Performance Parameter (FBCF and Energy KPP), are a positive step toward forcing energy-use accountability onto military programs. The ability to measure the impacts of sustainment is required to fully measure the Energy KPP. Sandia's work with the Army demonstrates the capability to measure performance, including energy constraints.
Experimental Implementation of a Quantum Optical State Comparison Amplifier
NASA Astrophysics Data System (ADS)
Donaldson, Ross J.; Collins, Robert J.; Eleftheriadou, Electra; Barnett, Stephen M.; Jeffers, John; Buller, Gerald S.
2015-03-01
We present an experimental demonstration of a practical nondeterministic quantum optical amplification scheme that employs two mature technologies, state comparison and photon subtraction, to achieve amplification of known sets of coherent states with high fidelity. The amplifier uses coherent states as a resource rather than single photons, which allows for a relatively simple light source, such as a diode laser, providing an increased rate of amplification. The amplifier is not restricted to low amplitude states. With respect to the two key parameters, fidelity and the amplified state production rate, we demonstrate significant improvements over previous experimental implementations, without the requirement of complex photonic components. Such a system may form the basis of trusted quantum repeaters in nonentanglement-based quantum communications systems with known phase alphabets, such as quantum key distribution or quantum digital signatures.
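One of the two key parameters named above, fidelity, has a closed form for the coherent states this amplifier handles: the fidelity between coherent states with complex amplitudes alpha and beta is |<beta|alpha>|^2 = exp(-|alpha - beta|^2). The sketch below evaluates it for a hypothetical gain-2 amplification; the amplitudes are illustrative, not the experiment's values.

```python
import math

def coherent_fidelity(alpha, beta):
    """Fidelity |<beta|alpha>|^2 between two coherent states
    (alpha, beta are complex amplitudes; floats work too)."""
    return math.exp(-abs(alpha - beta) ** 2)

# Hypothetical example: target of an ideal gain-2 amplification of
# alpha = 0.5, compared against an output missing the target by 0.1.
target = 2 * 0.5
print(coherent_fidelity(target, target))            # perfect output: 1.0
print(round(coherent_fidelity(target - 0.1, target), 3))
```

Because the penalty is exponential in the squared amplitude error, even modest amplitude mismatches dominate the fidelity budget at higher gains.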
International Space Station Major Constituent Analyzer On-Orbit Performance
NASA Technical Reports Server (NTRS)
Gardner, Ben D.; Erwin, Phillip M.; Thoresen, Souzan; Granahan, John; Matty, Chris
2012-01-01
The Major Constituent Analyzer (MCA) is a mass-spectrometer-based system that measures the major atmospheric constituents on the International Space Station. A number of limited-life components require periodic changeout, including the ORU 02 analyzer and the ORU 08 Verification Gas Assembly. Over the past two years, two ORU 02 analyzer assemblies have operated nominally while two others have experienced premature on-orbit failures. These failures, together with the nominal performance of the other units, demonstrate that ORU 02 performance remains a key determinant of MCA performance and logistical support. Monitoring several key parameters can maximize the capacity to track ORU health and properly anticipate end of life. Improvements to ion pump operation and ion source tuning are expected to improve lifetime performance of the current ORU 02 design.
NASA Astrophysics Data System (ADS)
Destefanis, Stefano; Tracino, Emanuele; Giraudo, Martina
2014-06-01
During a mission involving a spacecraft using nuclear power sources (NPS), the consequences to the population induced by an accident have to be taken into account carefully. Part of the study (led by AREVA, with TAS-I as one of the involved parties) was devoted to "Worst Case Scenario Consolidation". In particular, one of the activities carried out by TAS-I aimed to characterize the accidental environment (explosion on the launch pad or during launch) and to consolidate the requirements given as input to the study. The resulting requirements became inputs for the nuclear power source container design. To do so, TAS-I first conducted an overview of the available technical literature (mostly developed in the frame of the NASA Mercury/Apollo programs) to identify the key parameters to be used for analytical assessment (blast pressure wave; fragment size, speed, and distribution; TNT equivalent of liquid propellant). Then, a simplified Radioss model was set up to verify both the cards needed for blast/fragment impact analysis and the consistency between preliminary results and the available technical literature (Radioss is commonly used to design mine-resistant vehicles by simulating the effect of blasts on structural elements, and it is used in TAS-I for several types of analysis, including land impact, water impact, and fluid-structure interaction). The obtained results, albeit produced by a very simplified model, are encouraging, showing that the analytical tool and the selected key parameters represent a step in the right direction.
Water use at pulverized coal power plants with postcombustion carbon capture and storage.
Zhai, Haibo; Rubin, Edward S; Versteeg, Peter L
2011-03-15
Coal-fired power plants account for nearly 50% of U.S. electricity supply and about a third of U.S. emissions of CO2, the major greenhouse gas (GHG) associated with global climate change. Thermal power plants also account for 39% of all freshwater withdrawals in the U.S. To reduce GHG emissions from coal-fired plants, postcombustion carbon capture and storage (CCS) systems are receiving considerable attention. Current commercial amine-based capture systems require water for cooling and other operations that add to power plant water requirements. This paper characterizes and quantifies water use at coal-burning power plants with and without CCS and investigates key parameters that influence water consumption. Analytical models are presented to quantify water use for major unit operations. Case study results show that, for power plants with conventional wet cooling towers, approximately 80% of total plant water withdrawals and 86% of plant water consumption are for cooling. The addition of an amine-based CCS system would approximately double the consumptive water use of the plant. Replacing wet towers with air-cooled condensers for dry cooling would reduce plant water use by about 80% (without CCS) to about 40% (with CCS). However, the cooling system capital cost would approximately triple, although costs are highly dependent on site-specific characteristics. The potential for water use reductions with CCS is explored via sensitivity analyses of plant efficiency and other key design parameters that affect water resource management for the electric power industry.
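The case-study percentages quoted in this abstract lend themselves to a quick back-of-the-envelope check. The sketch below is our own illustration, not a model from the paper: the function name `water_use` and the baseline value are invented, and the multipliers simply restate the abstract's figures (CCS roughly doubles consumptive use; dry cooling cuts use by ~80% without CCS and ~40% with CCS).

```python
# Illustrative arithmetic only: multipliers follow the percentages
# quoted in the abstract above; function and variable names are ours.

def water_use(base_use, ccs=False, dry_cooling=False):
    """Rough consumptive water use relative to a wet-tower baseline.

    base_use: baseline consumptive water use (arbitrary units).
    ccs: amine-based capture roughly doubles consumptive use.
    dry_cooling: air-cooled condensers cut water use ~80% without CCS,
                 ~40% with CCS (per the abstract's case study).
    """
    use = base_use
    if ccs:
        use *= 2.0
    if dry_cooling:
        use *= (1 - 0.4) if ccs else (1 - 0.8)
    return use

base = 100.0
print(water_use(base))                              # 100.0 (baseline)
print(water_use(base, ccs=True))                    # 200.0 (doubled)
print(water_use(base, dry_cooling=True))            # ~20 (80% cut)
print(water_use(base, ccs=True, dry_cooling=True))  # ~120 (40% cut)
```

Note the interaction: with CCS installed, dry cooling still leaves the plant consuming more water than the original wet-tower baseline, which is consistent with the abstract's framing of CCS as a major driver of plant water demand.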
Wnt signalling pathway parameters for mammalian cells.
Tan, Chin Wee; Gardiner, Bruce S; Hirokawa, Yumiko; Layton, Meredith J; Smith, David W; Burgess, Antony W
2012-01-01
Wnt/β-catenin signalling regulates cell fate, survival, proliferation and differentiation at many stages of mammalian development and pathology. Mutations of two key proteins in the pathway, APC and β-catenin, have been implicated in a range of cancers, including colorectal cancer. Activation of Wnt signalling has been associated with the stabilization and nuclear accumulation of β-catenin and consequential up-regulation of β-catenin/TCF gene transcription. In 2003, Lee et al. constructed a computational model of Wnt signalling supported by experimental data from analysis of time-dependent concentration of Wnt signalling proteins in Xenopus egg extracts. Subsequent studies have used the Xenopus quantitative data to infer Wnt pathway dynamics in other systems. As a basis for understanding Wnt signalling in mammalian cells, a confocal live cell imaging measurement technique is developed to measure the cell and nuclear volumes of MDCK, HEK293T cells and 3 human colorectal cancer cell lines and the concentrations of Wnt signalling proteins β-catenin, Axin, APC, GSK3β and E-cadherin. These parameters provide the basis for formulating Wnt signalling models for kidney/intestinal epithelial mammalian cells. There are significant differences in concentrations of key proteins between Xenopus extracts and mammalian whole cell lysates. Higher concentrations of Axin and lower concentrations of APC are present in mammalian cells. Axin concentrations are greater than APC in kidney epithelial cells, whereas in intestinal epithelial cells the APC concentration is higher than Axin. Computational simulations based on Lee's model, with this new data, suggest a need for a recalibration of the model. A quantitative understanding of Wnt signalling in mammalian cells, in particular human colorectal cancers, requires a detailed understanding of the concentrations of key protein complexes over time.
Simulations of Wnt signalling in mammalian cells can be initiated with the parameters measured in this report.
NASA Astrophysics Data System (ADS)
Koch, Jonas; Nowak, Wolfgang
2013-04-01
At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long term. In industrialized countries the number of such contaminated sites is so high that ranking them from most to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups where we vary the physical and stochastic dependencies of the input parameters and simulated processes.
Under these changes, the probability density functions show strong shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters while neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.
High-precision relative position and attitude measurement for on-orbit maintenance of spacecraft
NASA Astrophysics Data System (ADS)
Zhu, Bing; Chen, Feng; Li, Dongdong; Wang, Ying
2018-02-01
In order to realize long-term on-orbit operation of satellites, space stations, and other spacecraft, the life of the spacecraft can be extended not only through the long-life design of devices but also through on-orbit servicing and maintenance. It is therefore necessary to perform precise and detailed maintenance of key components. In this paper, a high-precision relative position and attitude measurement method for use in the maintenance of key components is given. This method mainly considers the design of the passive cooperative marker, light-emitting device, and high-resolution camera in the presence of spatial stray light and noise. By using a series of algorithms, such as background elimination, feature extraction, and position and attitude calculation, the high-precision relative pose parameters between the key operation parts and the maintenance equipment are obtained as input to the control system. The simulation results show that the algorithm is accurate and effective, satisfying the requirements of the precision operation technique.
Simultaneous classical communication and quantum key distribution using continuous variables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Bing
Currently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities, and dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian-distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fiber. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.
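The encoding idea described in this abstract can be illustrated schematically: each pulse carries a classical bit as a large quadrature displacement, with small Gaussian-modulated values for key distribution superimposed on it. The sketch below is our own toy illustration, not the paper's protocol; the function names, the BPSK-like bit mapping, and all amplitudes are assumptions, and no channel loss or noise is modeled.

```python
# Schematic sketch only (not the paper's exact protocol): a classical
# bit sets a large displacement of the x quadrature, and small
# Gaussian-distributed values for QKD ride on top of the same pulse.
import random

def encode_pulse(bit, rng, bit_amplitude=20.0, qkd_sigma=1.0):
    """Return (transmitted (x, p) quadratures, Alice's QKD record)."""
    x_qkd = rng.gauss(0.0, qkd_sigma)   # Gaussian data for QKD
    p_qkd = rng.gauss(0.0, qkd_sigma)
    x_bit = bit_amplitude if bit else -bit_amplitude  # BPSK-like bit
    return (x_bit + x_qkd, p_qkd), (x_qkd, p_qkd)

def decode_bit(quadratures):
    """Coherent receiver: the sign of x recovers the classical bit."""
    x, _ = quadratures
    return 1 if x >= 0 else 0

rng = random.Random(0)
bits = [rng.randrange(2) for _ in range(1000)]
sent = [encode_pulse(b, rng) for b in bits]
decoded = [decode_bit(q) for q, _ in sent]
print(sum(d == b for d, b in zip(decoded, bits)) / len(bits))
```

Because the bit displacement is ~20 standard deviations of the QKD modulation, classical bit errors are vanishingly rare in this idealized setting, mirroring the abstract's point that a strong classical signal and a weak Gaussian modulation can coexist on one pulse.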
Cheng, Jingchi; Tang, Ming; Fu, Songnian; Shum, Perry Ping; Liu, Deming
2013-04-01
We show for the first time, to the best of our knowledge, that in a coherent communication system employing a phase-shift-keying signal and Raman amplification, besides the pump relative intensity noise (RIN) transferring to the amplitude, the signal's phase is also affected by pump RIN through pump-signal cross-phase modulation. Although the linear phase change induced by the average pump power can be compensated for by the phase-correction algorithm, a relative phase noise (RPN) parameter has been found to characterize the stochastic phase noise induced by pump RIN. This extra phase noise brings non-negligible system impairments in terms of the Q-factor penalty. The calculation shows that copumping leads to much more stringent requirements on pump RIN, and relatively larger fiber dispersion helps to suppress the RPN-induced impairment. A higher-order phase-shift keying (PSK) signal is less tolerant to this noise than a lower-order one.
Simultaneous classical communication and quantum key distribution using continuous variables
Qi, Bing
2016-10-26
Currently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities, and dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian-distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fiber. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
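The kinds of reductions TSPROC performs on surface-water time series (flow volumes, seasonal statistics) are simple to sketch. The code below is our own minimal illustration in plain Python, not TSPROC itself or its scripting language; the function names, the DJF/MAM/JJA/SON season convention, and the synthetic daily record are all assumptions.

```python
# A minimal sketch (not TSPROC) of typical surface-water time-series
# reductions: seasonal mean flows and a total flow volume from a
# daily discharge record. Data below are synthetic.
from collections import defaultdict
from datetime import date, timedelta

SECONDS_PER_DAY = 86400

def seasonal_means(records):
    """records: list of (date, flow) pairs; returns season -> mean flow."""
    seasons = {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM",
               5: "MAM", 6: "JJA", 7: "JJA", 8: "JJA", 9: "SON",
               10: "SON", 11: "SON"}
    sums, counts = defaultdict(float), defaultdict(int)
    for d, q in records:
        s = seasons[d.month]
        sums[s] += q
        counts[s] += 1
    return {s: sums[s] / counts[s] for s in sums}

def flow_volume(records):
    """Total volume (flow units x seconds), assuming daily mean values."""
    return sum(q for _, q in records) * SECONDS_PER_DAY

# Synthetic year of daily flows alternating between two levels.
start = date(2010, 1, 1)
recs = [(start + timedelta(days=i), 10.0 + 5.0 * ((i // 90) % 2))
        for i in range(365)]
print(seasonal_means(recs))
print(flow_volume(recs))
```

In a PEST-style calibration, statistics like these computed from both observed and simulated hydrographs become the paired quantities in the objective function, which is how TSPROC lets the calibration focus on specific hydrograph components.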
Development and deployment of a water-crop-nutrient simulation model embedded in a web application
NASA Astrophysics Data System (ADS)
Langella, Giuliano; Basile, Angelo; Coppola, Antonio; Manna, Piero; Orefice, Nadia; Terribile, Fabio
2016-04-01
Scientific research on environmental and agricultural issues has long devoted considerable effort to the development and application of models for prediction and simulation in spatial and temporal domains. This is done by studying and observing natural processes (e.g. rainfall, water and chemical transport in soils, crop growth) whose spatiotemporal behavior can be reproduced, for instance, to predict irrigation and fertilizer requirements and yield quantity and quality. In this work a mechanistic model to simulate water flow and solute transport in the soil-plant-atmosphere continuum is presented. This desktop computer program was written according to the specific requirements of developing web applications. The model addresses the following issues together: (a) water balance; (b) solute transport; (c) crop modelling; (d) GIS interoperability; (e) embeddability in web-based geospatial Decision Support Systems (DSS); (f) adaptability to different scales of application; and (g) ease of code modification. We maintained the desktop character of the program in order to further develop (e.g. integrate novel features) and run the key program modules for testing and validation purposes, but we also developed a middleware component that allows the model to run simulations directly over the web, with no software to install. The GIS capabilities allow the web application to run simulations in a user-defined region of interest (delimited over a geographical map) without the need to specify the proper combination of model parameters. This is possible because the geospatial database collects information on pedology, climate, crop parameters and soil hydraulic characteristics. Pedological attributes include the spatial distribution of key soil data such as soil profile horizons and texture. Further, hydrological parameters are selected according to the known spatial distribution of soils.
The availability and definition of these attributes in the geospatial domain allow simulation outputs at different spatial scales. Two different applications were implemented using the same framework but with different configurations of the software components making up the physically based modelling chain: an irrigation tool simulating water requirements and their timing, and a fertilization tool for optimizing, in particular, mineral nitrogen additions.
On the security of semi-device-independent QKD protocols
NASA Astrophysics Data System (ADS)
Chaturvedi, Anubhav; Ray, Maharshi; Veynar, Ryszard; Pawłowski, Marcin
2018-06-01
While fully device-independent security in (BB84-like) prepare-and-measure quantum key distribution (QKD) is impossible, it can be guaranteed against individual attacks in a semi-device-independent (SDI) scenario, wherein no assumptions are made on the characteristics of the hardware used except for an upper bound on the dimension of the communicated system. Studying security under such minimal assumptions is especially relevant in the context of the recent quantum hacking attacks wherein the eavesdroppers can not only construct the devices used by the communicating parties but are also able to remotely alter their behavior. In this work, we study the security of a SDIQKD protocol based on the prepare-and-measure quantum implementation of a well-known cryptographic primitive, the random access code (RAC). We consider imperfect detectors and establish the critical values of the security parameters (the observed success probability of the RAC and the detection efficiency) required for guaranteeing security against eavesdroppers with and without quantum memory. Furthermore, we suggest a minimal characterization of the preparation device in order to lower the requirements for establishing a secure key.
Selected physical properties of various diesel blends
NASA Astrophysics Data System (ADS)
Hlaváčová, Zuzana; Božiková, Monika; Hlaváč, Peter; Regrut, Tomáš; Ardonová, Veronika
2018-01-01
The quality determination of biofuels requires identifying chemical and physical parameters; the key physical parameters are rheological, thermal and electrical properties. In our study, we investigated samples of diesel blends with rapeseed methyl ester content ranging from 3 to 100%. For these samples, we measured basic thermophysical properties, including thermal conductivity and thermal diffusivity, using two different transient methods: the hot-wire method and the dynamic plane source method. Every thermophysical parameter was measured 100 times using both methods for all samples. Dynamic viscosity was measured during heating over the temperature range 20-80°C, using a digital rotational viscometer (Brookfield DV 2T). Electrical conductivity was measured using a digital conductivity meter (Model 1152) over a temperature range from -5 to 30°C. The highest values of the thermal parameters were reached in the diesel sample with the highest biofuel content. The dynamic viscosity of the samples increased with higher concentrations of the rapeseed methyl ester bio-component. The electrical conductivity of the blends also increased with rapeseed methyl ester content.
CO2 laser ranging systems study
NASA Technical Reports Server (NTRS)
Filippi, C. A.
1975-01-01
The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nevinitsa, V. A., E-mail: Neviniza-VA@nrcki.ru; Dudnikov, A. A.; Blandinskiy, V. Yu.
2015-12-15
A subcritical molten salt reactor with an external neutron source is studied computationally as a facility for incineration and transmutation of minor actinides from spent nuclear fuel of reactors of VVER-1000 type and for producing ²³³U from ²³²Th. The reactor configuration is chosen, the requirements to be imposed on the external neutron source are formulated, and the equilibrium isotopic composition of heavy nuclides and the key parameters of the fuel cycle are calculated.
Multifunctional biocompatible coatings on magnetic nanoparticles
NASA Astrophysics Data System (ADS)
Bychkova, A. V.; Sorokina, O. N.; Rosenfeld, M. A.; Kovarski, A. L.
2012-11-01
Methods for coating formation on magnetic nanoparticles used in biology and medicine are considered. Key requirements for the coatings are formulated, namely, biocompatibility, stability, the possibility of attachment of pharmaceutical agents, and the absence of toxicity. The behaviour of nanoparticle/coating nanosystems in the body, including penetration through cellular membranes and excretion rates and routes, is analyzed. Parameters characterizing the magnetic properties of these systems and their magnetic controllability are described. Factors limiting the applications of magnetically controlled nanosystems for targeted drug delivery are discussed. The bibliography includes 405 references.
Engineering model for ultrafast laser microprocessing
NASA Astrophysics Data System (ADS)
Audouard, E.; Mottay, E.
2016-03-01
Ultrafast laser micro-machining relies on complex laser-matter interaction processes, leading to a virtually athermal laser ablation. The development of industrial ultrafast laser applications benefits from a better understanding of these processes. To this end, a number of sophisticated scientific models have been developed, providing valuable insights into the physics of the interaction. Yet, from an engineering point of view, they are often difficult to use and require a number of adjustable parameters. We present a simple engineering model for ultrafast laser processing, applied in various real-life applications: percussion drilling, line engraving, and non-normal-incidence trepanning. The model requires only two global parameters. Analytical results are derived for single-pulse percussion drilling and single-pass engraving. Simple assumptions allow prediction of the effect of non-normally incident beams, yielding the key parameters for trepanning drilling. The model is compared to experimental data on stainless steel over a wide range of laser characteristics (pulse duration, repetition rate, pulse energy) and machining conditions (sample or beam speed). Ablation depth and volume ablation rate are modeled for pulse durations from 100 fs to 1 ps. A trepanning time of 5.4 s with a conicity of 0.15° is obtained for a hole of 900 μm depth and 100 μm diameter.
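The abstract does not give the model's closed form, so the sketch below uses a widely cited two-parameter description of ultrafast ablation purely as an illustration: a logarithmic depth-per-pulse law d = delta * ln(F/F_th), with an effective penetration depth delta and a threshold fluence F_th as the two global parameters. The numerical values of the defaults are invented for the example.

```python
# Illustrative two-parameter ablation sketch (not the paper's model):
#   depth per pulse d = delta * ln(F / F_th)  for F > F_th, else 0,
# where delta is an effective penetration depth and F_th the ablation
# threshold fluence. Default values below are assumptions.
import math

def depth_per_pulse(fluence, f_th=0.1, delta=20e-9):
    """Ablated depth per pulse (m); fluence and f_th in J/cm^2."""
    if fluence <= f_th:
        return 0.0
    return delta * math.log(fluence / f_th)

def drilling_pulses(target_depth, fluence, f_th=0.1, delta=20e-9):
    """Pulses needed to percussion-drill to target_depth (m)."""
    d = depth_per_pulse(fluence, f_th, delta)
    return math.inf if d == 0.0 else math.ceil(target_depth / d)

print(depth_per_pulse(1.0))          # ~4.6e-8 m (~46 nm) per pulse
print(drilling_pulses(900e-6, 1.0))  # pulses to reach 900 um depth
```

The appeal of such a form for engineering use matches the abstract's point: with only two parameters fitted once per material, per-pulse depth and drilling time follow analytically for a given fluence.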
Motor Task Variation Induces Structural Learning
Braun, Daniel A.; Aertsen, Ad; Wolpert, Daniel M.; Mehring, Carsten
2009-01-01
When we have learned a motor skill, such as cycling or ice-skating, we can rapidly generalize to novel tasks, such as motorcycling or rollerblading [1–8]. Such facilitation of learning could arise through two distinct mechanisms by which the motor system might adjust its control parameters. First, fast learning could simply be a consequence of the proximity of the original and final settings of the control parameters. Second, by structural learning [9–14], the motor system could constrain the parameter adjustments to conform to the control parameters' covariance structure. Thus, facilitation of learning would rely on the novel task parameters' lying on the structure of a lower-dimensional subspace that can be explored more efficiently. To test between these two hypotheses, we exposed subjects to randomly varying visuomotor tasks of fixed structure. Although such randomly varying tasks are thought to prevent learning, we show that when subsequently presented with novel tasks, subjects exhibit three key features of structural learning: facilitated learning of tasks with the same structure, strong reduction in interference normally observed when switching between tasks that require opposite control strategies, and preferential exploration along the learned structure. These results suggest that skill generalization relies on task variation and structural learning. PMID:19217296
Motor task variation induces structural learning.
Braun, Daniel A; Aertsen, Ad; Wolpert, Daniel M; Mehring, Carsten
2009-02-24
When we have learned a motor skill, such as cycling or ice-skating, we can rapidly generalize to novel tasks, such as motorcycling or rollerblading [1-8]. Such facilitation of learning could arise through two distinct mechanisms by which the motor system might adjust its control parameters. First, fast learning could simply be a consequence of the proximity of the original and final settings of the control parameters. Second, by structural learning [9-14], the motor system could constrain the parameter adjustments to conform to the control parameters' covariance structure. Thus, facilitation of learning would rely on the novel task parameters' lying on the structure of a lower-dimensional subspace that can be explored more efficiently. To test between these two hypotheses, we exposed subjects to randomly varying visuomotor tasks of fixed structure. Although such randomly varying tasks are thought to prevent learning, we show that when subsequently presented with novel tasks, subjects exhibit three key features of structural learning: facilitated learning of tasks with the same structure, strong reduction in interference normally observed when switching between tasks that require opposite control strategies, and preferential exploration along the learned structure. These results suggest that skill generalization relies on task variation and structural learning.
The Compositional Dependence of the Microstructure and Properties of CMSX-4 Superalloys
NASA Astrophysics Data System (ADS)
Yu, Hao; Xu, Wei; Van Der Zwaag, Sybrand
2018-01-01
The degradation of creep resistance in Ni-based single-crystal superalloys is essentially ascribed to their microstructural evolution, yet there is a lack of work that manages to predict (even qualitatively) the effect of alloying element concentrations on the rate of microstructural degradation. In this research, a computational model is presented that connects the rafting kinetics of Ni superalloys to their chemical composition by combining thermodynamic calculations with a modified microstructural model. To simulate the evolution of key microstructural parameters during creep, the isotropic coarsening rate and the γ/γ' misfit stress are defined as composition-related parameters, and the effects of service temperature, time, and applied stress are taken into consideration. Two commercial superalloys, for which the kinetics of the rafting process are available, are selected as the reference alloys, and the corresponding microstructural parameters are simulated and compared with experimental observations reported in the literature. The results confirm that our physical model, which requires no fitting parameters, manages to predict (semiquantitatively) the microstructural parameters for different service conditions, as well as the effects of alloying element concentrations. The model can contribute to the computational design of new Ni-based superalloys.
Sensitivity of projected long-term CO2 emissions across the Shared Socioeconomic Pathways
NASA Astrophysics Data System (ADS)
Marangoni, G.; Tavoni, M.; Bosetti, V.; Borgonovo, E.; Capros, P.; Fricko, O.; Gernaat, D. E. H. J.; Guivarch, C.; Havlik, P.; Huppmann, D.; Johnson, N.; Karkatsoulis, P.; Keppo, I.; Krey, V.; Ó Broin, E.; Price, J.; van Vuuren, D. P.
2017-01-01
Scenarios showing future greenhouse gas emissions are needed to estimate climate impacts and the mitigation efforts required for climate stabilization. Recently, the Shared Socioeconomic Pathways (SSPs) have been introduced to describe alternative social, economic and technical narratives, spanning a wide range of plausible futures in terms of challenges to mitigation and adaptation. Thus far the key drivers of the uncertainty in emissions projections have not been robustly disentangled. Here we assess the sensitivities of future CO2 emissions to key drivers characterizing the SSPs. We use six state-of-the-art integrated assessment models with different structural characteristics, and study the impact of five families of parameters, related to population, income, energy efficiency, fossil fuel availability, and low-carbon energy technology development. A recently developed sensitivity analysis algorithm allows us to parsimoniously compute both the direct and interaction effects of each of these drivers on cumulative emissions. The study reveals that the SSP assumptions about energy intensity and economic growth are the most important determinants of future CO2 emissions from energy combustion, both with and without a climate policy. Interaction terms between parameters are shown to be important determinants of the total sensitivities.
Dynamics of social contagions with memory of nonredundant information
NASA Astrophysics Data System (ADS)
Wang, Wei; Tang, Ming; Zhang, Hai-Feng; Lai, Ying-Cheng
2015-07-01
A key ingredient in social contagion dynamics is reinforcement, as adopting a certain social behavior requires verification of its credibility and legitimacy. Memory of nonredundant information plays an important role in reinforcement, which so far has eluded theoretical analysis. We first propose a general social contagion model with reinforcement derived from nonredundant information memory. Then, we develop a unified edge-based compartmental theory to analyze this model, and a remarkable agreement with numerics is obtained on some specific models. We use a spreading threshold model as a specific example to understand the memory effect, in which each individual adopts a social behavior only when the cumulative pieces of information that the individual received from his or her neighbors exceeds an adoption threshold. Through analysis and numerical simulations, we find that the memory characteristic markedly affects the dynamics as quantified by the final adoption size. Strikingly, we uncover a transition phenomenon in which the dependence of the final adoption size on some key parameters, such as the transmission probability, can change from being discontinuous to being continuous. The transition can be triggered by proper parameters and structural perturbations to the system, such as decreasing individuals' adoption threshold, increasing initial seed size, or enhancing the network heterogeneity.
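The spreading threshold model described in this abstract is concrete enough to simulate directly: each individual remembers how many distinct neighbors have transmitted the information (nonredundant memory) and adopts the behavior once that count reaches a threshold. The sketch below is our own minimal illustration, not the authors' code; the network, seed set, and parameter values are invented, and the analysis here is a single Monte Carlo run rather than the paper's edge-based compartmental theory.

```python
# Minimal sketch of a spreading threshold model with memory of
# nonredundant information: a node counts *distinct* informed
# neighbors and adopts when the count reaches its threshold.
import random

def simulate(adjacency, threshold, p_transmit, seeds, rng):
    """Return the final number of adopters on the given network."""
    received = {v: set() for v in adjacency}  # distinct senders seen
    adopted = set(seeds)
    newly = list(seeds)
    while newly:
        nxt = []
        for u in newly:                       # each adopter transmits once
            for v in adjacency[u]:
                if v in adopted or u in received[v]:
                    continue                  # redundant information ignored
                if rng.random() < p_transmit:
                    received[v].add(u)
                    if len(received[v]) >= threshold:
                        adopted.add(v)
                        nxt.append(v)
        newly = nxt
    return len(adopted)

rng = random.Random(1)
n = 200
adj = {i: set() for i in range(n)}
for _ in range(4 * n):                        # random graph, mean degree ~8
    a, b = rng.randrange(n), rng.randrange(n)
    if a != b:
        adj[a].add(b); adj[b].add(a)
print(simulate(adj, threshold=2, p_transmit=0.5, seeds=range(10), rng=rng))
```

Sweeping `p_transmit` (or the threshold and seed size) over repeated runs of `simulate` is the natural way to probe the final-adoption-size transitions the abstract describes.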
NASA Astrophysics Data System (ADS)
Zafar, Fahad; Kalavally, Vineetha; Bakaul, Masuduzzaman; Parthiban, R.
2015-09-01
To make commercial implementation of light emitting diode (LED) based visible light communication (VLC) systems feasible, they must be combined with dimming schemes, which provide energy savings, mood setting, and increased aesthetic value in the places using this technology. There are two general methods used to dim LEDs, commonly categorized as analog and digital dimming. Incorporating fast data transmission with these techniques is a key challenge in VLC. In this paper, digital and analog dimming for a 10 Mb/s non-return-to-zero on-off keying (NRZ-OOK) based VLC system is experimentally investigated considering both photometric and communication parameters. A spectrophotometer was used for photometric analysis, and a line-of-sight (LOS) configuration in the presence of ambient light was used for analyzing communication parameters. Based on the experimental results, it was determined that the digital dimming scheme is preferable for indoor VLC systems requiring high dimming precision and data transmission at lower brightness levels. On the other hand, the analog dimming scheme is a cost-effective solution for high-speed systems where dimming precision is insignificant.
Early universe with modified scalar-tensor theory of gravity
NASA Astrophysics Data System (ADS)
Mandal, Ranajit; Sarkar, Chandramouli; Sanyal, Abhik Kumar
2018-05-01
Scalar-tensor theory of gravity with non-minimal coupling is a fairly good candidate for dark energy, required to explain late-time cosmic evolution. Here we study the very early stage of evolution of the universe with a modified version of the theory, which includes a scalar curvature squared term. One key aspect of the present study is that the quantum dynamics of the action under consideration generically ends up with de-Sitter expansion under the semiclassical approximation, rather than power-law expansion. This justifies analyzing the inflationary regime with de-Sitter expansion. The other key aspect is that, while studying gravitational perturbation, the perturbed generalized scalar field equation obtained from the perturbed action, when matched with the perturbed form of the background scalar field equation, relates the coupling parameter and the potential in exactly the same manner as the solution of the classical field equations does, assuming de-Sitter expansion. The study also reveals that the quantum theory is well behaved, the inflationary parameters fall well within the observational limits, and the quantum perturbation analysis shows that the power spectrum does not deviate considerably from the standard one obtained from the minimally coupled theory.
Study of high-speed civil transports
NASA Technical Reports Server (NTRS)
1989-01-01
A systems study to identify the economic potential for a high-speed commercial transport (HSCT) has considered technology, market characteristics, airport infrastructure, and environmental issues. Market forecasts indicate a need for HSCT service in the 2000/2010 time frame, conditioned on economic viability and environmental acceptability. Design requirements focused on 300-passenger, 3-class service and a 6500-nautical-mile range based on the accelerated growth of the Pacific region. Compatibility with existing airports was an assumed requirement. Mach numbers between 2 and 25 were examined in conjunction with the appropriate propulsion systems, fuels, structural materials, and thermal management systems. Aircraft productivity was a key parameter, with aircraft worth, in comparison to aircraft price, being the airline-oriented figure of merit. Aircraft screening led to the determination that Mach 3.2 (TSJF) would have superior characteristics to Mach 5.0 (LNG) and the recommendation that the next-generation high-speed commercial transport aircraft use a kerosene fuel. The sensitivity of aircraft performance and economics to environmental constraints (e.g., sonic boom, engine emissions, and airport/community noise) was identified together with key technologies. Overall, current technology is not adequate to produce viable HSCTs for the world marketplace. Technology advancements must be accomplished to meet environmental requirements (these requirements are as yet undetermined for sonic boom and engine emissions). High priority is assigned to aircraft gross weight reduction, which benefits both economic and environmental aspects. Specific technology requirements are identified and national economic benefits are projected.
NASA Astrophysics Data System (ADS)
Samsonov, Andrey; Gordeev, Evgeny; Sergeev, Victor
2017-04-01
As recently suggested (e.g., Gordeev et al., 2015), the global magnetospheric configuration can be characterized by a set of key parameters, such as the magnetopause distance at the subsolar point and on the terminator plane, the magnetic field in the magnetotail lobe, the plasma sheet thermal pressure, the cross polar cap electric potential drop, and the total field-aligned current. For given solar wind conditions, the values of these parameters can be obtained from both empirical models and global MHD simulations. We validate the recently developed global MHD code SPSU-16 using the key magnetospheric parameters mentioned above. The code SPSU-16 can solve both the isotropic and the anisotropic MHD equations. In the anisotropic version, we use modified double-adiabatic equations in which T⊥/T∥ (the ratio of perpendicular to parallel thermal pressures) is bounded from above by the mirror and ion-cyclotron thresholds and from below by the firehose threshold. The validation results for the SPSU-16 code agree well with previously published results of other global codes. Some key parameters coincide in the isotropic and anisotropic MHD simulations, but some differ.
Differential detection of Gaussian MSK in a mobile radio environment
NASA Technical Reports Server (NTRS)
Simon, M. K.; Wang, C. C.
1984-01-01
Minimum shift keying with Gaussian shaped transmit pulses is a strong candidate for a modulation technique that satisfies the stringent out-of-band radiated power requirements of the mobile radio application. Numerous studies and field experiments have been conducted by the Japanese on urban and suburban mobile radio channels with systems employing Gaussian minimum-shift keying (GMSK) transmission and differentially coherent reception. A comprehensive analytical treatment is presented of the performance of such systems emphasizing the important trade-offs among the various system design parameters such as transmit and receiver filter bandwidths and detection threshold level. It is shown that two-bit differential detection of GMSK is capable of offering far superior performance to the more conventional one-bit detection method both in the presence of an additive Gaussian noise background and Rician fading.
Differential detection of Gaussian MSK in a mobile radio environment
NASA Astrophysics Data System (ADS)
Simon, M. K.; Wang, C. C.
1984-11-01
Minimum shift keying with Gaussian shaped transmit pulses is a strong candidate for a modulation technique that satisfies the stringent out-of-band radiated power requirements of the mobile radio application. Numerous studies and field experiments have been conducted by the Japanese on urban and suburban mobile radio channels with systems employing Gaussian minimum-shift keying (GMSK) transmission and differentially coherent reception. A comprehensive analytical treatment is presented of the performance of such systems emphasizing the important trade-offs among the various system design parameters such as transmit and receiver filter bandwidths and detection threshold level. It is shown that two-bit differential detection of GMSK is capable of offering far superior performance to the more conventional one-bit detection method both in the presence of an additive Gaussian noise background and Rician fading.
Vandenhove, H; Gil-García, C; Rigol, A; Vidal, M
2009-09-01
Predicting the transfer of radionuclides in the environment for normal release, accidental, disposal or remediation scenarios in order to assess exposure requires the availability of a large number of generic parameter values. One of the key parameters in environmental assessment is the solid-liquid distribution coefficient, K(d), which is used to predict radionuclide-soil interaction and subsequent radionuclide transport in the soil column. This article presents a review of K(d) values for uranium, radium, lead, polonium and thorium based on an extensive literature survey, including recent publications. The K(d) estimates are presented per soil group defined by texture and organic matter content (Sand, Loam, Clay and Organic), although the texture class did not seem to significantly affect K(d). Where relevant, other K(d) classification systems are proposed and correlations with soil parameters are highlighted. The K(d) values obtained in this compilation are compared with earlier review data.
Model reduction for experimental thermal characterization of a holding furnace
NASA Astrophysics Data System (ADS)
Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane
2017-09-01
Vacuum holding induction furnaces are used for the manufacturing of turbine blades by the lost-wax foundry process. The control of solidification parameters is a key factor in the manufacturing of these parts. Defining the structure of a reduced heat transfer model, identified experimentally through estimation of its parameters, is required here. Internal sensor outputs, together with this model, can be used for assessing the thermal state of the furnace through an inverse approach, for better control. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite element code. The internal induction heat source as well as the transient radiative transfer inside the furnace are calculated through this detailed model. A reduced lumped-body model has been constructed to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body were performed using a Levenberg-Marquardt least squares minimization algorithm, using two synthetic temperature signals, with a further validation test.
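The Levenberg-Marquardt estimation step can be illustrated on a deliberately tiny surrogate: fitting the time constant of a one-node lumped cooling model to synthetic temperatures. The model form, starting value, and damping schedule are assumptions for illustration; the paper identifies a multi-parameter lumped body.

```python
import math

def lm_fit_tau(times, temps, t_amb, t0, tau_init=50.0, n_iter=60):
    """Fit the time constant tau of a one-node lumped cooling model
        T(t) = T_amb + (T0 - T_amb) * exp(-t / tau)
    with a scalar Levenberg-Marquardt iteration (damped Gauss-Newton)."""
    def model(tau, t):
        return t_amb + (t0 - t_amb) * math.exp(-t / tau)

    def residuals(tau):
        return [model(tau, t) - y for t, y in zip(times, temps)]

    def cost(tau):
        return sum(r * r for r in residuals(tau))

    tau, lam = tau_init, 1e-3
    for _ in range(n_iter):
        r = residuals(tau)
        h = 1e-6 * tau                       # forward-difference Jacobian
        jac = [(a - b) / h for a, b in zip(residuals(tau + h), r)]
        # damped normal-equation step: (J'J + lambda) dtau = -J'r
        step = -sum(j * ri for j, ri in zip(jac, r)) / (sum(j * j for j in jac) + lam)
        cand = tau + step
        if cand > 0 and cost(cand) < cost(tau):
            tau, lam = cand, lam / 3.0       # accept step, relax damping
        else:
            lam *= 3.0                       # reject step, increase damping
    return tau
```

Fitting synthetic data generated with tau = 120 s recovers the time constant, mirroring the paper's use of synthetic temperature signals for identification.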
NASA Technical Reports Server (NTRS)
Tatnall, Christopher R.
1998-01-01
The counter-rotating pair of wake vortices shed by flying aircraft can pose a threat to ensuing aircraft, particularly on landing approach. To allow adequate time for the vortices to disperse or decay, landing aircraft are required to maintain certain fixed separation distances. The Aircraft Vortex Spacing System (AVOSS), under development at NASA, is designed to prescribe safe aircraft landing approach separation distances appropriate to the ambient weather conditions. A key component of the AVOSS is a ground sensor to ensure safety by making wake observations to verify predicted behavior. This task requires knowledge of a flowfield strength metric which gauges the severity of disturbance an encountering aircraft could potentially experience. Several proposed strength metric concepts are defined and evaluated for various combinations of metric parameters and sensor line-of-sight elevation angles. Representative populations of generating and following aircraft types are selected, and their associated wake flowfields are modeled using various wake geometry definitions. Strength metric candidates are then rated and compared based on the correspondence of their computed values to associated aircraft response values, using basic statistical analyses.
A distributed approach to the OPF problem
NASA Astrophysics Data System (ADS)
Erseghe, Tomaso
2015-12-01
This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resources and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence. A certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderately sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In comparison with the literature, mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires a smaller local computational effort.
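A minimal sketch of an ADMM-style augmented Lagrangian loop with a constantly increased penalty parameter, on a toy convex consensus problem standing in for the (non-convex) OPF. The quadratic local costs, variable names, and growth factor are assumptions for illustration only.

```python
def admm_consensus(targets, weights, rho0=1.0, growth=1.05, n_iter=200):
    """ADMM-style consensus for min sum_i w_i (x_i - c_i)^2 s.t. x_i = z,
    with the penalty parameter rho increased every iteration, echoing the
    paper's key distinction.  Local solves are closed-form here because
    the local costs are quadratic."""
    n = len(targets)
    z, rho = 0.0, rho0
    u = [0.0] * n                                    # dual variables
    for _ in range(n_iter):
        # local solves: argmin w(x-c)^2 + u*x + (rho/2)(x-z)^2
        x = [(2 * w * c + rho * z - ui) / (2 * w + rho)
             for c, w, ui in zip(targets, weights, u)]
        # consensus (coordination) update
        z = sum(xi + ui / rho for xi, ui in zip(x, u)) / n
        # dual ascent, then grow the penalty
        u = [ui + rho * (xi - z) for ui, xi in zip(u, x)]
        rho *= growth
    return z
```

For two agents pulling toward 1 and 3 with equal weights the consensus value converges to 2; with weights 1 and 3 it converges to 2.5, the weighted optimum.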
An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves
Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing
2014-01-01
Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes Julia sets' parameters to generate a random sequence as the initial keys and obtains the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modulo arithmetic and a diffusion operation. This method needs only a few parameters for key generation, which greatly reduces the storage space. Moreover, because of the Julia sets' properties, such as infiniteness and chaotic characteristics, the keys are highly sensitive even to a tiny perturbation. The experimental results indicate that the algorithm has a large key space, good statistical properties, high key sensitivity, and effective resistance to the chosen-plaintext attack. PMID:24404181
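A toy version of the two ingredients named in the abstract: a keystream derived from a Julia-set iteration (the parameters c and z0 act as the key) and pixel scrambling along a Hilbert curve, combined by modulo arithmetic. The byte-extraction rule, escape guard, and parameter values are our simplifications, not the authors' exact construction.

```python
def hilbert_d2xy(n, d):
    """Map index d along a Hilbert curve to (x, y) on an n x n grid
    (n a power of two); standard iterative conversion."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def julia_keystream(c, z0, length):
    """Bytes derived from the Julia iteration z <- z^2 + c."""
    z, out = z0, []
    for _ in range(length):
        z = z * z + c
        if abs(z) > 2.0:                  # re-seed if the orbit escapes
            z = z0 + c
        out.append(int(abs(z.real) * 1e6) % 256)
    return out

def encrypt(image, c, z0):
    """Scramble an n x n image along the Hilbert curve, then combine it
    with the Julia keystream by modulo arithmetic."""
    n = len(image)
    flat = [image[y][x] for d in range(n * n)
            for x, y in [hilbert_d2xy(n, d)]]
    ks = julia_keystream(c, z0, n * n)
    return [(p + k) % 256 for p, k in zip(flat, ks)]

def decrypt(cipher, c, z0):
    """Invert the modulo arithmetic, then undo the Hilbert scrambling."""
    n = int(round(len(cipher) ** 0.5))
    ks = julia_keystream(c, z0, n * n)
    image = [[0] * n for _ in range(n)]
    for d, (p, k) in enumerate(zip(cipher, ks)):
        x, y = hilbert_d2xy(n, d)
        image[y][x] = (p - k) % 256
    return image
```

Because both the keystream and the Hilbert permutation are fully determined by the key parameters, decryption with the same (c, z0) recovers the image exactly.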
Lang, Jun
2012-01-30
In this paper, we propose a novel secure image sharing scheme based on Shamir's three-pass protocol and the multiple-parameter fractional Fourier transform (MPFRFT), which can safely exchange information with no advance distribution of either secret keys or public keys between users. The image is encrypted directly by the MPFRFT spectrum without the use of phase keys, and information can be shared by transmitting the encrypted image (or message) three times between users. Numerical simulation results are given to verify the performance of the proposed algorithm.
3D Reconstruction and Approximation of Vegetation Geometry for Modeling of Within-canopy Flows
NASA Astrophysics Data System (ADS)
Henderson, S. M.; Lynn, K.; Lienard, J.; Strigul, N.; Mullarney, J. C.; Norris, B. K.; Bryan, K. R.
2016-02-01
Aquatic vegetation can shelter coastlines from waves and currents, sometimes resulting in accretion of fine sediments. We developed a photogrammetric technique for estimating the key geometric vegetation parameters that are required for modeling of within-canopy flows. Accurate estimates of vegetation geometry and density are essential to refine hydrodynamic models, but accurate, convenient, and time-efficient methodologies for measuring complex canopy geometries have been lacking. The novel approach presented here builds on recent progress in photogrammetry and computer vision. We analyzed the geometry of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Although comparatively thin, pneumatophores are more numerous than mangrove trunks, and thus influence near-bed flow and sediment transport. Quadrats (1 m²) were placed at low tide among pneumatophores. Roots were counted and measured for height and diameter. Photos were taken from multiple angles around each quadrat. Relative camera locations and orientations were estimated from key features identified in multiple images using open-source software (VisualSfM). Next, a dense 3D point cloud was produced. Finally, algorithms were developed for automated estimation of pneumatophore geometry from the 3D point cloud. We found good agreement between hand-measured and photogrammetric estimates of key geometric parameters, including mean stem diameter, total number of stems, and frontal area density. These methods can reduce time spent measuring in the field, thereby enabling future studies to refine models of water flows and sediment transport within heterogeneous vegetation canopies.
NASA Astrophysics Data System (ADS)
Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin
2018-06-01
Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance, and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time-consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most existing techniques is that the vector basis of the reduced space is built in an offline phase, where the full system must be solved for a large sample of parameter values, which can also become highly time-consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
Brown, Adrian P; Borgs, Christian; Randall, Sean M; Schnell, Rainer
2017-06-08
Integrating medical data from databases of different sources by record linkage is a powerful technique increasingly used in medical research. Under many jurisdictions, unique personal identifiers needed for linking the records are unavailable. Since sensitive attributes, such as names, have to be used instead, privacy regulations usually demand encrypting these identifiers. The corresponding set of techniques for privacy-preserving record linkage (PPRL) has received widespread attention. One recent method is based on Bloom filters. Due to superior resilience against cryptographic attacks, composite Bloom filters (cryptographic long-term keys, CLKs) are considered best practice for privacy in PPRL. Real-world performance of these techniques using large-scale data has been unknown until now. Using a large subset of Australian hospital admission data, we tested the performance of an innovative PPRL technique (CLKs using multibit trees) against a gold standard derived from clear-text probabilistic record linkage. Linkage time and linkage quality (recall, precision and F-measure) were evaluated. Clear-text probabilistic linkage resulted in marginally higher precision and recall than CLKs. PPRL required more computing time, but 5 million records could still be de-duplicated within one day. However, the PPRL approach required fine tuning of parameters. We argue that the increased privacy of PPRL comes at the price of small losses in precision and recall and a large increase in computational burden and setup time. These costs seem acceptable in most applied settings, but they have to be considered in the decision to apply PPRL. Further research on the optimal automatic choice of parameters is needed.
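The composite Bloom filter (CLK) construction can be sketched as follows. The field choice, bigram padding, filter length m, and number of hash functions k are illustrative assumptions, and real deployments use keyed HMACs rather than the plain hashes shown here.

```python
import hashlib

def bigrams(s):
    """Padded character bigrams of an identifier string."""
    s = f"_{s.lower()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def clk(fields, m=1000, k=20):
    """Sketch of a cryptographic long-term key: every q-gram of every
    identifier is hashed into one shared bit array of length m with k
    hash positions per q-gram (double hashing with MD5 and SHA-1)."""
    bits = [0] * m
    for field in fields:
        for g in bigrams(field):
            h1 = int(hashlib.md5(g.encode()).hexdigest(), 16)
            h2 = int(hashlib.sha1(g.encode()).hexdigest(), 16)
            for i in range(k):
                bits[(h1 + i * h2) % m] = 1
    return bits

def dice(a, b):
    """Dice similarity of two CLKs; pairs above a tuned threshold are
    classified as links (this threshold is one of the parameters the
    abstract says needed fine tuning)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))
```

A record compared with itself scores 1.0, and a near-duplicate name scores higher than an unrelated one, which is what makes approximate matching possible on the encoded data.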
Imaging on a Shoestring: Cost-Effective Technologies for Probing Vadose Zone Transport Processes
NASA Astrophysics Data System (ADS)
Corkhill, C.; Bridge, J. W.; Barns, G.; Fraser, R.; Romero-Gonzalez, M.; Wilson, R.; Banwart, S.
2010-12-01
Key barriers to the widespread uptake of imaging technology for high spatial resolution monitoring of porous media systems are cost and accessibility. X-ray tomography, magnetic resonance imaging (MRI), gamma and neutron radiography require highly specialised equipment, controlled laboratory environments and/or access to large synchrotron facilities. Here we present results from visible light, fluorescence and autoradiographic imaging techniques developed at low cost and applied in standard analytical laboratories, adapted where necessary at minimal capital expense. UV-visible time lapse fluorescence imaging (UV-vis TLFI) in a transparent thin bed chamber enabled microspheres labelled with fluorescent dye and a conservative fluorophore solute (disodium fluorescein) to be measured simultaneously in saturated, partially-saturated and actively draining quartz sand to elucidate empirical values for colloid transport and deposition parameters distributed throughout the flow field, independently of theoretical approximations. Key results include the first experimental quantification of the effects of ionic strength and air-water interfacial area on colloid deposition above a capillary fringe, and the first direct observations of particle mobilisation and redeposition by moving saturation gradients during drainage. UV-vis imaging was also used to study biodegradation and reactive transport in a variety of saturated conditions, applying fluorescence as a probe for oxygen and nitrate concentration gradients, pH, solute transport parameters, reduction of uranium, and mapping of two-dimensional flow fields around a model dipole flow borehole system to validate numerical models. 
Costs are low: LED excitation sources (< US$50), flow chambers (US$200) and detectors (although a complete scientific-grade CCD set-up costs around US$8000, robust datasets can be obtained using a commercial digital SLR camera) mean that set-ups can be flexible to meet changing experimental requirements. The critical limitations of UV-vis fluorescence imaging are the need for reliable fluorescent probes suited to the experimental objective, and the reliance on thin-bed (2D) transparent porous media. Autoradiographic techniques address some of these limitations and permit imaging of key biogeochemical processes in opaque media using radioactive probes, without the need for specialised radiation sources. We present initial calibration data for the use of autoradiography to monitor transport parameters for radionuclides (99-technetium), and a novel application of a radioactive salt tracer as a probe for pore water content, in model porous media systems.
Capsule performance optimization in the National Ignition Campaign
NASA Astrophysics Data System (ADS)
Landen, O. L.; Boehly, T. R.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Glenzer, S. H.; Hamza, A.; Hicks, D. G.; Hoffman, N.; Izumi, N.; Jones, O. S.; Kirkwood, R. K.; Kyrala, G. A.; Michel, P.; Milovich, J.; Munro, D. H.; Nikroo, A.; Olson, R. E.; Robey, H. F.; Spears, B. K.; Thomas, C. A.; Weber, S. V.; Wilson, D. C.; Marinak, M. M.; Suter, L. J.; Hammel, B. A.; Meyerhofer, D. D.; Atherton, J.; Edwards, J.; Haan, S. W.; Lindl, J. D.; MacGowan, B. J.; Moses, E. I.
2010-05-01
A capsule performance optimization campaign will be conducted at the National Ignition Facility [G. H. Miller et al., Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition by laser-driven hohlraums [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)]. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the OMEGA facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
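The roll-up of expected random and systematic uncertainties can be illustrated with a root-sum-square combination checked against an allowed budget. The grouping of terms and the budget value below are hypothetical; the paper's actual roll-up spans measurement, calibration, cross-coupling, surrogacy, and scale-up errors.

```python
import math

def uncertainty_rollup(random_terms, systematic_terms, budget):
    """Sketch of an error roll-up: independent random and systematic
    contributions combined in quadrature (root-sum-square) and compared
    with the allowed tuning budget.  Illustrative values only."""
    rss = math.sqrt(sum(u * u for u in random_terms + systematic_terms))
    return rss, rss <= budget
```

For example, independent contributions of 3 and 4 (in whatever units the parameter is budgeted in) roll up to 5, meeting a budget of 5.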
Capsule performance optimization in the National Ignition Campaign
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landen, O. L.; Bradley, D. K.; Braun, D. G.
2010-05-15
A capsule performance optimization campaign will be conducted at the National Ignition Facility [G. H. Miller et al., Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition by laser-driven hohlraums [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)]. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the OMEGA facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
Guidelines for the Procurement of Aerospace Nickel Cadmium Cells
NASA Technical Reports Server (NTRS)
Thierfelder, Helmut
1997-01-01
NASA has been using a Modular Power System containing "standard" nickel cadmium (NiCd) batteries, composed of "standard" NiCd cells. For many years the only manufacturer of the NASA "standard" NiCd cells was General Electric Co. (subsequently Gates Aerospace and now SAFT). This standard cell was successfully used in numerous missions. However, uncontrolled technical changes and industrial restructuring required a new approach. General Electric (now SAFT Aerospace Batteries) had management changes, new manufacturers entered the market (Eagle-Picher Industries; ACME Electric Corporation, Aerospace Division; Sanyo Electric Co.), and battery technology advanced. New NASA procurements for aerospace NiCd cells will have specifications unique to the spacecraft and mission requirements. This document provides the user/customer with guidelines for the new approach to procuring and specifying performance requirements for highly reliable NiCd cells and batteries. It includes details of key parameters and their importance. The appendices contain a checklist, detailed calculations, and backup information.
From LCAs to simplified models: a generic methodology applied to wind power electricity.
Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle
2013-02-05
This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse gas (GHG) performance has been plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, thus avoiding an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them with a robust but simple support tool for assessing the environmental performance of energy systems.
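Once the two key parameters are identified, a simplified model of the kind described reduces to a one-line formula: embodied life-cycle emissions spread over lifetime electricity output. The formula and the numbers in the example are hypothetical illustrations, not the paper's fitted curves.

```python
def wind_ghg_intensity(embodied_kgco2, rated_kw, load_factor, lifetime_yr):
    """Sketch of a simplified LCA model driven by the two key parameters
    (load factor, lifetime): life-cycle GHG intensity in g CO2-eq per kWh.
    Illustrative stand-in for the paper's fitted curves."""
    lifetime_kwh = rated_kw * 8760 * load_factor * lifetime_yr
    return 1000.0 * embodied_kgco2 / lifetime_kwh
```

For a hypothetical 2 MW turbine with 1000 t CO2-eq embodied emissions, a 24% load factor and a 20-year lifetime, this gives roughly 12 g CO2-eq/kWh, and the sensitivity to the two key parameters is immediate from the formula.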
Molecular parameters of head and neck cancer metastasis
Bhave, Sanjay L.; Teknos, Theodoros N.; Pan, Quintin; James, Arthur G.; Solove, Richard J.
2011-01-01
Metastasis remains a major cause of mortality in patients with head and neck squamous cell carcinoma (HNSCC). HNSCC patients with metastatic disease have an extremely poor prognosis, with survival of less than a year. Metastasis is an intricate sequential process which requires a discrete population of tumor cells to possess the capacity to intravasate from the primary tumor into systemic circulation, survive in circulation, extravasate at a distant site, and proliferate in a foreign hostile environment. The literature has accumulated to provide mechanistic insight into several signal transduction pathways, including receptor tyrosine kinases (RTKs), signal transducer and activator of transcription 3 (Stat3), Rho GTPases, protein kinase Cε (PKCε), and nuclear factor-κB (NF-κB), that are involved in mediating a metastatic tumor cell phenotype in HNSCC. Here we highlight accrued information regarding the key molecular parameters of HNSCC metastasis. PMID:22077153
Preparation Prerequisites for Effective Irrigation of Apical Root Canal: A Critical Review.
Tziafas, Dimitrios; Alraeesi, Dana; Al Hormoodi, Reem; Ataya, Maamoun; Fezai, Hessa; Aga, Nausheen
2017-10-01
It is well recognized that disinfection of the complex root canal system at the apical root canal remains the most critical therapeutic measure to treat apical periodontitis. Observational and experimental data on the anatomy of the apical root canal in different tooth types and the cross-sectional diameters of the apical part of the most commonly used hand and rotary files are critically reviewed. The present data analysis confirms that the challenging issue of antibacterial efficacy of modern preparation protocols in non-surgical endodontics requires more attention to apical root canal irrigation as a balance between safety and effectiveness. Ex vivo investigations clearly indicate that a specific design of the chemo-mechanical preparation is needed at the onset of root canal treatment, particularly in infected teeth. The design should be based on specific anatomical parameters, and must determine the appropriate size and taper of preparation as prerequisites for effective and safe apical irrigation. Optimal irrigation protocols might be designed on the basis of technical specifications of the preparation procedures, such as the penetration depth, the type of needle, the required time for continuous irrigant flow, the concentration of NaOCl, and the activation parameters. Key words: Endodontics, root canal treatment, instrumentation, irrigation, apical root canal.
Vialet-Chabrand, Silvere; Griffiths, Howard
2017-01-01
The physical requirement for charge to balance across biological membranes means that the transmembrane transport of each ionic species is interrelated, and manipulating solute flux through any one transporter will affect other transporters at the same membrane, often with unforeseen consequences. The OnGuard systems modeling platform has helped to resolve the mechanics of stomatal movements, uncovering previously unexpected behaviors of stomata. To date, however, the manual approach to exploring model parameter space has captured little formal information about the emergent connections between parameters that define the most interesting properties of the system as a whole. Here, we introduce global sensitivity analysis to identify interacting parameters affecting a number of outputs commonly accessed in experiments in Arabidopsis (Arabidopsis thaliana). The analysis highlights synergies between transporters affecting the balance between Ca2+ sequestration and Ca2+ release pathways, notably those associated with internal Ca2+ stores and their turnover. Other, unexpected synergies appear, including with the plasma membrane anion channels and H+-ATPase and with the tonoplast TPK K+ channel. These emergent synergies, and the core hubs of interaction that they define, identify subsets of transporters associated with free cytosolic Ca2+ concentration that represent key targets to enhance plant performance in the future. They also highlight the importance of interactions between the voltage regulation of the plasma membrane and tonoplast in coordinating transport between the different cellular compartments. PMID:28432256
Key parameters design of an aerial target detection system on a space-based platform
NASA Astrophysics Data System (ADS)
Zhu, Hanlu; Li, Yejin; Hu, Tingliang; Rao, Peng
2018-02-01
To ensure the flight safety of aerial aircraft and avoid recurrence of aircraft collisions, a method of multi-information fusion is proposed to design the key parameters for aircraft target detection on a space-based platform. The key parameters of detection waveband and spatial resolution were determined using the target-background absolute contrast, target-background relative contrast, and signal-to-clutter ratio. This study also presented the signal-to-interference ratio for analyzing system performance. Key parameters are obtained through the simulation of a specific aircraft. The simulation results show that the boundary ground sampling distance is 30 and 35 m in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands, respectively, for most aircraft detection, and that the most reasonable detection wavebands are 3.4 to 4.2 μm and 4.35 to 4.5 μm in the MWIR band, and 9.2 to 9.8 μm in the LWIR band. We also found that the direction of detection has a great impact on detection efficiency, especially in the MWIR bands.
Novel image encryption algorithm based on multiple-parameter discrete fractional random transform
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Dong, Taiji; Wu, Jianhua
2010-08-01
A new method of digital image encryption is presented, utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistical analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.
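The index additivity the scheme relies on can be sketched as follows. This is a simplified stand-in for the multiple-parameter discrete fractional random transform, not the authors' exact construction: a random orthonormal eigenbasis (seeded by part of the key) is combined with eigenvalue phases that scale linearly in the fractional order, so transforms of orders a and b compose to order a + b, and order −a inverts order a.

```python
import numpy as np

def dfrnt_matrix(n, alpha, seed=7):
    """Sketch of a discrete fractional random transform R^alpha.

    The eigenvectors come from a random symmetric matrix (the random
    part of the key); the eigenvalue phases scale linearly with the
    fractional order alpha, giving index additivity R^a @ R^b == R^(a+b).
    """
    rng = np.random.default_rng(seed)
    s = rng.normal(size=(n, n))
    _, v = np.linalg.eigh(s + s.T)      # random orthonormal basis
    phases = np.exp(-2j * np.pi * alpha * np.arange(n) / n)
    return (v * phases) @ v.T           # V @ diag(phases) @ V.T

n = 64
alpha = 0.37                            # fractional order: part of the key
signal = np.random.default_rng(1).uniform(size=n)
cipher = dfrnt_matrix(n, alpha) @ signal    # "encrypt"
plain = dfrnt_matrix(n, -alpha) @ cipher    # "decrypt" with order -alpha
```

Decryption with the wrong seed or the wrong fractional order leaves the data scrambled, which is the sensitivity-to-keys property the abstract reports.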
Sense-and-Avoid Equivalent Level of Safety Definition for Unmanned Aircraft Systems. Revision 9
NASA Technical Reports Server (NTRS)
2005-01-01
Since unmanned aircraft do not have a pilot on-board the aircraft, they cannot literally comply with the "see and avoid" requirement beyond a short distance from the location of the unmanned pilot. No performance standards are presently defined for unmanned Sense and Avoid systems, and the FAA has no published approval criteria for a collision avoidance system. Before the FAA can develop the necessary guidance (rules / regulations / policy) regarding the see-and-avoid requirements for Unmanned Aircraft Systems (UAS), a concise understanding of the term "equivalent level of safety" must be attained. Since this term is open to interpretation, the UAS industry and FAA need to come to an agreement on how this term can be defined and applied for a safe and acceptable collision avoidance capability for unmanned aircraft. Defining an equivalent level of safety (ELOS) for sense and avoid is one of the first steps in understanding the requirement and developing a collision avoidance capability. This document provides a functional level definition of see-and-avoid as it applies to unmanned aircraft. The sense and avoid ELOS definition is intended as a bridge between the see and avoid requirement and the system level requirements for unmanned aircraft sense and avoid systems. Sense and avoid ELOS is defined in a rather abstract way, meaning that it is not technology or system specific, and the definition provides key parameters (and a context for those parameters) to focus the development of cooperative and non-cooperative sense and avoid system requirements.
Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le
2015-01-01
Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for an ABM to estimate key model parameters by incorporating experimental data, whereas a differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing an ABM to mimic the multi-scale immune system with various phenotypes and types of cells, and by using the input and output of the ABM to build a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set, and used the ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer key parameters as a DE model does. Therefore, this study developed a complex-system modeling mechanism that can simulate the complicated immune system in detail like an ABM, while validating the reliability and efficiency of the model against experimental data like a DE model. PMID:26535589
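The estimate-by-simulation loop at the heart of this approach — run the stochastic model over candidate parameters, smooth the input-output relation, pick the best match to data — can be sketched with a deliberately trivial toy model. Everything below is invented for illustration, and a plain grid search stands in for the paper's Loess regression and greedy algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_abm(p, n_agents=200, n_steps=5):
    """Minimal agent-model stand-in: each step, every agent that has
    not yet adopted a state does so independently with probability p."""
    adopted = np.zeros(n_agents, dtype=bool)
    for _ in range(n_steps):
        adopted |= rng.uniform(size=n_agents) < p
    return adopted.mean()

# "Experimental" observation generated at a hidden true parameter.
true_p = 0.3
observed = np.mean([toy_abm(true_p) for _ in range(30)])

# Sweep a candidate grid, average repeated stochastic runs per
# candidate, and pick the candidate whose mean output best matches
# the observation.
grid = np.linspace(0.05, 0.95, 19)
mean_out = np.array([np.mean([toy_abm(p) for _ in range(30)]) for p in grid])
est_p = grid[np.argmin((mean_out - observed) ** 2)]
```

Replacing `toy_abm` with a real multi-scale simulation changes only the cost of each evaluation; the calibration logic is the same.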
NASA Astrophysics Data System (ADS)
Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian
2018-01-01
Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and by a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion-set formalism and peak theory, which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
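As a schematic illustration, consistent with the review's description and using conventional notation, the lowest orders of the perturbative bias expansion relate the galaxy density contrast to local gravitational observables:

```latex
\delta_g(\boldsymbol{x},\tau) \;=\; b_1\,\delta(\boldsymbol{x},\tau)
  \;+\; \frac{b_2}{2}\,\delta^2(\boldsymbol{x},\tau)
  \;+\; b_{K^2}\,\bigl(K_{ij}K^{ij}\bigr)(\boldsymbol{x},\tau)
  \;+\; \dots
```

where \(\delta\) is the matter density contrast, \(K_{ij}\) the tidal field, and the \(b_n\) are the bias parameters that absorb the details of galaxy formation; higher orders add time derivatives of these observables and higher-derivative terms.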
Estimating unknown parameters in haemophilia using expert judgement elicitation.
Fischer, K; Lewandowski, D; Janssen, M P
2013-09-01
The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life expectancy. For each parameter, experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients ≤ 18, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma-related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.
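One common way to pool per-expert quantile estimates of the form (P10, median, P90) is quantile ("Vincent") averaging. The study itself combined estimates with a graphical method, so the sketch below, with invented numbers, only illustrates the general idea of pooling elicited quantiles.

```python
import numpy as np

# Hypothetical per-expert estimates of annual joint-bleed frequency.
# Columns are (P10, median, P90); values are illustrative only,
# not data from the study.
experts = np.array([
    [2.0, 10.0, 30.0],
    [1.0, 12.0, 36.0],
    [3.0, 14.0, 40.0],
    [0.9, 11.0, 28.0],
])

# Vincent averaging: pool experts by averaging each quantile across
# experts, which preserves the P10 < median < P90 ordering.
pooled_p10, pooled_median, pooled_p90 = experts.mean(axis=0)
```

The pooled triple can then seed a parametric distribution (e.g. lognormal) for use in a simulation model.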
21 CFR 1311.30 - Requirements for storing and using a private key for digitally signing orders.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 9 2014-04-01 2014-04-01 false Requirements for storing and using a private key... Certificates for Electronic Orders § 1311.30 Requirements for storing and using a private key for digitally... private key. (b) The certificate holder must provide FIPS-approved secure storage for the private key, as...
Developing population models with data from marked individuals
Hae Yeong Ryu,; Kevin T. Shoemaker,; Eva Kneip,; Anna Pidgeon,; Patricia Heglund,; Brooke Bateman,; Thogmartin, Wayne E.; Reşit Akçakaya,
2016-01-01
Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. 
This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
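A minimal sketch of the kind of model such estimates feed into — a stage-structured matrix projection with environmental stochasticity — follows. The survival and fecundity values are invented for the sketch, not MAPS-derived estimates.

```python
import numpy as np

# Illustrative two-stage (juvenile/adult) matrix model.
s_juv, s_ad, fec = 0.3, 0.55, 1.8
A = np.array([[0.0, fec],       # adults produce juveniles
              [s_juv, s_ad]])   # juveniles mature; adults survive

# Deterministic growth rate: dominant eigenvalue of the matrix.
lam = np.max(np.real(np.linalg.eigvals(A)))

# Stochastic projection: a shared lognormal environmental deviate
# scales all vital rates each year (a crude model of the natural
# temporal variability the method estimates).
rng = np.random.default_rng(3)
n = np.array([50.0, 50.0])      # initial juveniles, adults
for _ in range(50):
    env = rng.lognormal(mean=0.0, sigma=0.1)
    n = (A * env) @ n
```

In a full PVA, the bias-corrected survival and fecundity estimates, their temporal variances, and the density-dependence functions would replace these constants.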
Novel Estimation of Pilot Performance Characteristics
NASA Technical Reports Server (NTRS)
Bachelder, Edward N.; Aponso, Bimal
2017-01-01
Two mechanisms internal to the pilot affect performance during a tracking task: 1) pilot equalization (i.e. lead/lag); and 2) pilot gain (i.e. sensitivity to the error signal). For some applications, McRuer's Crossover Model can be used to anticipate what equalization will be employed to control a vehicle's dynamics. McRuer also established approximate time delays associated with different types of equalization: the more cognitive processing that is required due to equalization difficulty, the larger the time delay. However, the Crossover Model does not predict what the pilot gain will be. A nonlinear pilot control technique, observed and coined by the authors as 'amplitude clipping', is shown to improve stability and performance and to reduce workload when employed with vehicle dynamics that require high lead compensation by the pilot. Combining linear and nonlinear methods, a novel approach is used to measure the pilot control parameters when amplitude clipping is present, allowing precise measurement in real time of key pilot control parameters. Based on the results of an experiment designed to probe the primary drivers of workload, a method is developed that estimates pilot spare capacity from readily observable measures and is tested for generality using multi-axis flight data. This paper documents the initial steps toward developing a novel, simple objective metric for assessing pilot workload and its variation over time across a wide variety of tasks. Additionally, it offers a tangible, easily implementable methodology for anticipating a pilot's operating parameters and workload, and an effective design tool. The model shows promise in being able to precisely predict actual pilot settings, workload, and the observed tolerance of pilot parameter variation over the course of operation. Finally, an approach is proposed for generating Cooper-Harper ratings based on the workload and parameter estimation methodology.
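For context, the crossover model referenced above states that near the crossover frequency the combined pilot-vehicle open-loop describing function reduces to a simple form, largely independent of the controlled element:

```latex
Y_p(j\omega)\,Y_c(j\omega) \;\approx\; \frac{\omega_c\, e^{-j\omega\tau_e}}{j\omega},
\qquad \omega \approx \omega_c
```

where \(Y_p\) is the pilot, \(Y_c\) the controlled element, \(\omega_c\) the crossover frequency, and \(\tau_e\) the effective time delay, which grows with the amount of lead equalization the pilot must generate — the property the paper exploits when relating equalization difficulty to workload.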
Robust determination of surface relaxivity from nuclear magnetic resonance DT2 measurements
NASA Astrophysics Data System (ADS)
Luo, Zhi-Xiang; Paulsen, Jeffrey; Song, Yi-Qiao
2015-10-01
Nuclear magnetic resonance (NMR) is a powerful tool for probing geological materials such as hydrocarbon reservoir rocks and groundwater aquifers. It is unique in its ability to obtain in situ the fluid type and the pore size distribution (PSD). The T1 and T2 relaxation times are closely related to the pore geometry through a parameter called the surface relaxivity. This parameter is critical for converting the relaxation time distribution into the PSD and so is key to accurately predicting permeability. The conventional way to determine the surface relaxivity ρ2 has required independent laboratory measurements of the pore size. Recently, Zielinski et al. proposed a restricted diffusion model to extract the surface relaxivity from the NMR diffusion-T2 relaxation (DT2) measurement. Although this method significantly improved the ability to extract surface relaxivity directly from a pure NMR measurement, their model contains inconsistencies and relies on a number of preset parameters. Here we propose an improved signal model incorporating a scalable LT, and extend their method to extract the surface relaxivity by analyzing multiple DT2 maps with varied diffusion observation time. With multiple diffusion observation times, the apparent diffusion coefficient correctly describes the restricted diffusion behavior in samples with wide PSDs, and the new method does not require predetermined parameters such as the bulk diffusion coefficient and tortuosity. Laboratory experiments on glass bead packs with bead diameters ranging from 50 μm to 500 μm are used to validate the new method. The extracted diffusion parameters are consistent with their known values, and the determined surface relaxivity ρ2 agrees with the expected value within ±7%. The method is further successfully applied to a Berea sandstone core and yields a surface relaxivity ρ2 consistent with the literature.
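The authors' DT2-based model is not reproduced here, but the underlying idea — that the time dependence of the apparent diffusion coefficient encodes pore geometry — can be illustrated with the standard short-time (Mitra) expansion, D(t)/D0 ≈ 1 − (4/(9√π))(S/V)√(D0·t). The sketch below recovers an assumed surface-to-volume ratio from synthetic data; all numbers are illustrative.

```python
import numpy as np

D0 = 2.3e-9          # bulk diffusion coefficient, m^2/s (water-like)
s_over_v = 6e4       # assumed surface-to-volume ratio, 1/m

# Synthetic apparent diffusion coefficients from the short-time
# (Mitra) expansion at several diffusion observation times.
t = np.linspace(1e-3, 20e-3, 12)                 # seconds
d_app = D0 * (1 - 4 / (9 * np.sqrt(np.pi)) * s_over_v * np.sqrt(D0 * t))

# A linear fit of D(t)/D0 against sqrt(D0*t) recovers S/V from the slope.
slope = np.polyfit(np.sqrt(D0 * t), d_app / D0, 1)[0]
sv_est = -slope * 9 * np.sqrt(np.pi) / 4
```

With a pore-shape assumption, S/V converts to a pore size, which is the quantity the surface relaxivity ultimately calibrates.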
Microwave moisture sensing of seedcotton: Part 1: Seedcotton microwave material properties
USDA-ARS?s Scientific Manuscript database
Moisture content at harvest is a key parameter that impacts quality and how well the cotton crop can be stored without degrading before processing. It is also a key parameter of interest for harvest time field trials as it can directly influence the quality of the harvested crop as well as alter the...
Needs for Robotic Assessments of Nuclear Disasters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Victor Walker; Derek Wadsworth
Following the nuclear disaster at the Fukushima nuclear reactor plant in Japan, the need for systems that can assist in dynamic high-radiation environments such as nuclear incidents has become more apparent. The INL participated in delivering robotic technologies to Japan and has identified key components that are needed for success, as well as obstacles to their deployment. In addition, we are proposing new work and methods to improve assessments of and reactions to such events in the future. Robotics needs in disaster situations span several phases: Assessment, Remediation, and Recovery. Our particular interest is in the initial assessment activities. In assessment we need collection of environmental parameters, determination of conditions, and physical sample collection. Each phase would require key tools and efforts to develop, including study of necessary sensors and their deployment methods, the effects of radiation on sensors and deployment, and the development of training and execution systems.
Reversible integer wavelet transform for blind image hiding method
Bibi, Nargis; Mahmood, Zahid; Akram, Tallha; Naqvi, Syed Rameez
2017-01-01
In this article, a blind, reversible data-hiding methodology for embedding secret data into a cover image is proposed. The key advantage of this work is that it addresses the privacy and secrecy issues raised during data transmission over the internet. Firstly, the data is decomposed into sub-bands using integer wavelets. For decomposition, the Fresnelet transform is utilized, which encrypts the secret data by choosing a unique key parameter to construct a dummy pattern. The dummy pattern is then embedded into an approximated sub-band of the cover image. The proposed method achieves high capacity and great imperceptibility of the embedded secret data. With the utilization of a family of integer wavelets, the proposed approach becomes more efficient for the hiding and retrieval process. It retrieves the secret hidden data from the embedded data blindly, without requiring the original cover image. PMID:28498855
Microbial quantification in activated sludge: the hits and misses.
Hall, S J; Keller, J; Blackall, L L
2003-01-01
Since the implementation of the activated sludge process for treating wastewater, there has been a reliance on chemical and physical parameters to monitor the system. However, in biological nutrient removal (BNR) processes, the microorganisms responsible for some of the transformations should be used to monitor the processes with the overall goal to achieve better treatment performance. The development of in situ identification and rapid quantification techniques for key microorganisms involved in BNR are required to achieve this goal. This study explored the quantification of Nitrospira, a key organism in the oxidation of nitrite to nitrate in BNR. Two molecular genetic microbial quantification techniques were evaluated: real-time polymerase chain reaction (PCR) and fluorescence in situ hybridisation (FISH) followed by digital image analysis. A correlation between the Nitrospira quantitative data and the nitrate production rate, determined in batch tests, was attempted. The disadvantages and advantages of both methods will be discussed.
ProFound: Source Extraction and Application to Modern Survey Data
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.
2018-04-01
ProFound detects sources in noisy images, generates segmentation maps identifying the pixels belonging to each source, and measures statistics like flux, size, and ellipticity. These inputs are key requirements of ProFit (ascl:1612.004), our galaxy profiling package; these two packages used in unison semi-automatically profile large samples of galaxies. The key novel feature introduced in ProFound is that all photometry is executed on dilated segmentation maps that fully contain the identifiable flux, rather than using more traditional circular or ellipse-based photometry. Also, to be less sensitive to pathological segmentation issues, the de-blending is made across saddle points in flux. ProFound offers good initial parameter estimation for ProFit, and also segmentation maps that follow the sometimes complex geometry of resolved sources, whilst capturing nearly all of the flux. A number of bulge-disc decomposition projects are already making use of the ProFound and ProFit pipeline.
Two dimensional radial gas flows in atmospheric pressure plasma-enhanced chemical vapor deposition
NASA Astrophysics Data System (ADS)
Kim, Gwihyun; Park, Seran; Shin, Hyunsu; Song, Seungho; Oh, Hoon-Jung; Ko, Dae Hong; Choi, Jung-Il; Baik, Seung Jae
2017-12-01
Atmospheric pressure (AP) operation of plasma-enhanced chemical vapor deposition (PECVD) is one of the promising concepts for high-quality, low-cost processing. Atmospheric plasma discharge requires a narrow-gap configuration, which causes an inherent feature of AP PECVD: two-dimensional radial gas flow, which induces radial variation of mass transport and of substrate temperature. The opposite trends of these variations are the key consideration in developing a uniform deposition process. Another inherent feature of AP PECVD is confined plasma discharge, from which the volume power density concept is derived as a key parameter for controlling the deposition rate. We investigated deposition rate as a function of volume power density, gas flux, source gas partial pressure, hydrogen partial pressure, plasma source frequency, and substrate temperature, and derived a design guideline for deposition tools and process development in terms of deposition rate and uniformity.
Dynamic metabolic modeling for a MAB bioprocess.
Gao, Jianying; Gorenflo, Volker M; Scharer, Jeno M; Budman, Hector M
2007-01-01
Production of monoclonal antibodies (MAb) for diagnostic or therapeutic applications has become an important task in the pharmaceutical industry. The efficiency of high-density reactor systems can be potentially increased by model-based design and control strategies. Therefore, a reliable kinetic model for cell metabolism is required. A systematic procedure based on metabolic modeling is used to model nutrient uptake and key product formation in a MAb bioprocess during both the growth and post-growth phases. The approach combines the key advantages of stoichiometric and kinetic models into a complete metabolic network while integrating the regulation and control of cellular activity. This modeling procedure can be easily applied to any cell line during both the cell growth and post-growth phases. Quadratic programming (QP) has been identified as a suitable method to solve the underdetermined constrained problem related to model parameter identification. The approach is illustrated for the case of murine hybridoma cells cultivated in stirred spinners.
A novel and lightweight system to secure wireless medical sensor networks.
He, Daojing; Chan, Sammy; Tang, Shaohua
2014-01-01
Wireless medical sensor networks (MSNs) are a key enabling technology in e-healthcare that allows the data of a patient's vital body parameters to be collected by the wearable or implantable biosensors. However, the security and privacy protection of the collected data is a major unsolved issue, with challenges coming from the stringent resource constraints of MSN devices, and the high demand for both security/privacy and practicality. In this paper, we propose a lightweight and secure system for MSNs. The system employs hash-chain based key updating mechanism and proxy-protected signature technique to achieve efficient secure transmission and fine-grained data access control. Furthermore, we extend the system to provide backward secrecy and privacy preservation. Our system only requires symmetric-key encryption/decryption and hash operations and is thus suitable for the low-power sensor nodes. This paper also reports the experimental results of the proposed system in a network of resource-limited motes and laptop PCs, which show its efficiency in practice. To the best of our knowledge, this is the first secure data transmission and access control system for MSNs until now.
Key optical components for spaceborne lasers
NASA Astrophysics Data System (ADS)
Löhring, J.; Winzen, M.; Faidel, H.; Miesner, J.; Plum, D.; Klein, J.; Fitzau, O.; Giesberts, M.; Brandenburg, W.; Seidel, A.; Schwanen, N.; Riesters, D.; Hengesbach, S.; Hoffmann, H.-D.
2016-03-01
Spaceborne lidar (light detection and ranging) systems have a large potential to become powerful instruments in the field of atmospheric research. Obviously, they have to be in operation for about three years without any maintenance such as readjustment. Furthermore, they have to withstand strong temperature cycles, typically in the range of -30 to +50 °C, as well as mechanical shocks and vibrations, especially during launch. Additionally, the avoidance of any organic material inside the laser box is required, particularly in UV lasers. For atmospheric research, pulses of several tens of mJ at repetition rates of several tens of Hz are required in many cases. Those parameters are typically addressed by diode-pumped solid-state lasers (DPSSL) that comprise components such as laser crystals, nonlinear crystals in Pockels cells, Faraday isolators and frequency converters, passive fibers, diode lasers and, of course, many mirrors and lenses. In particular, some components have strong requirements regarding their tilt stability, often in the 10 μrad range. In most cases, components and packages that are used for industrial lasers do not fulfil all those requirements. Thus, the packaging of all these key components has been developed to meet those specifications using only metal and ceramics besides the optical component itself. All joints between the optical component and the laser baseplate are soldered or screwed; no clamps or adhesives are used. Most of the critical properties, such as tilting after temperature cycling, have been proven in several tests. Currently, these components are used to build first prototypes for spaceborne systems.
Removal of Methylene Blue from aqueous solution using spent bleaching earth
NASA Astrophysics Data System (ADS)
Saputra, E.; Saputra, R.; Nugraha, M. W.; Irianty, R. S.; Utama, P. S.
2018-04-01
Wastewater from the textile industry is an environmental problem that requires effective and efficient treatment. In this study, spent bleaching earth was used as the adsorbent. The adsorbent was found to be effective in removing methylene blue from aqueous solution, with a removal efficiency of 99.97% in 120 min. Several parameters, such as pH, adsorbent loading and stirring speed, were found to be key factors influencing the removal of methylene blue. The adsorption mechanism was also studied, and it was found that the Langmuir isotherm fitted the experimental data, with an adsorption capacity of 0.5 mg/g.
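Fitting the Langmuir isotherm mentioned above is a short calculation. The sketch below uses synthetic equilibrium data and the common C/q-versus-C linearization; the parameter values are illustrative, not the study's fitted values.

```python
import numpy as np

# Synthetic equilibrium data from a Langmuir isotherm
# q = q_max * K * C / (1 + K * C).
q_max_true, K_true = 0.5, 2.0                    # mg/g, L/mg (assumed)
C = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0])    # equilibrium conc., mg/L
q = q_max_true * K_true * C / (1 + K_true * C)   # uptake, mg/g

# Linearized form: C/q = C/q_max + 1/(K*q_max), so a straight-line fit
# of C/q against C gives q_max from the slope and K from the intercept.
slope, intercept = np.polyfit(C, C / q, 1)
q_max_est = 1 / slope
K_est = slope / intercept
```

With real data, comparing this fit against a Freundlich fit (by R²) is the usual way to decide which isotherm the adsorption follows.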
Development of a composite geodetic structure for space construction, phase 2
NASA Technical Reports Server (NTRS)
1981-01-01
Primary physical and mechanical properties were defined for pultruded hybrid HMS/E-glass P1700 rod material used for the fabrication of geodetic beams. Key properties established were used in the analysis, design, fabrication, instrumentation, and testing of a geodetic parameter cylinder and a lattice cone closeout joined to a short cylindrical geodetic beam segment. Requirements of structural techniques were accomplished. Analytical procedures were refined and extended to include the effect of rod dimensions for the helical and longitudinal members on local buckling, and the effect of different flexural and extensional moduli on general instability buckling.
Rotary Wing Deceleration Use on Titan
NASA Technical Reports Server (NTRS)
Young, Larry A.; Steiner, Ted J.
2011-01-01
Rotary wing decelerator (RWD) systems were compared against other methods of atmospheric deceleration and were determined to show significant potential for application to a system requiring controlled descent, low-velocity landing, and atmospheric research capability on Titan. Design space exploration and down-selection results in a system with a single rotor utilizing cyclic pitch control. Models were developed for selection of a RWD descent system for use on Titan and to determine the relationships between the key design parameters of such a system and the time of descent. The possibility of extracting power from the system during descent was also investigated.
Aiassa, E; Higgins, J P T; Frampton, G K; Greiner, M; Afonso, A; Amzal, B; Deeks, J; Dorne, J-L; Glanville, J; Lövei, G L; Nienstedt, K; O'connor, A M; Pullin, A S; Rajić, A; Verloo, D
2015-01-01
Food and feed safety risk assessment uses multi-parameter models to evaluate the likelihood of adverse events associated with exposure to hazards in human health, plant health, animal health, animal welfare, and the environment. Systematic review and meta-analysis are established methods for answering questions in health care, and can be implemented to minimize biases in food and feed safety risk assessment. However, no methodological frameworks exist for refining risk assessment multi-parameter models into questions suitable for systematic review, and use of meta-analysis to estimate all parameters required by a risk model may not be always feasible. This paper describes novel approaches for determining question suitability and for prioritizing questions for systematic review in this area. Risk assessment questions that aim to estimate a parameter are likely to be suitable for systematic review. Such questions can be structured by their "key elements" [e.g., for intervention questions, the population(s), intervention(s), comparator(s), and outcome(s)]. Prioritization of questions to be addressed by systematic review relies on the likely impact and related uncertainty of individual parameters in the risk model. This approach to planning and prioritizing systematic review seems to have useful implications for producing evidence-based food and feed safety risk assessment.
Diurnal variations in blood gases and metabolites for draught Zebu and Simmental oxen.
Zanzinger, J; Hoffmann, I; Becker, K
1994-01-01
In previous articles it has been shown that blood parameters may be useful to assess physical fitness in draught cattle. The aim of the present study was to detect possible variations in baseline values for the key metabolites: lactate and free fatty acids (FFA), and for blood gases in samples drawn from a catheterized jugular vein. Sampling took place immediately after venipuncture at intervals of 3 min for 1 hr in Simmental oxen (N = 6) and during a period of 24 hr at intervals of 60 min for Zebu (N = 4) and Simmental (N = 6) oxen. After puncture of the vein, plasma FFA and oxygen (pvO2) were elevated for approximately 15 min. All parameters returned to baseline values within 1 hr of the catheter being inserted. Twenty-four-hour mean baseline values for all measured parameters were significantly different (P ≤ 0.001) between Zebu and Simmental. All parameters elicited diurnal variations which were mainly related to feed intake. The magnitude of these variations is comparable to the responses to light draught work. It is concluded that a strict standardization of blood sampling, at least in respect of time after feeding, is required for a reliable interpretation of endurance-indicating blood parameters measured under field conditions.
Precision ephemerides for gravitational-wave searches - III. Revised system parameters of Sco X-1
NASA Astrophysics Data System (ADS)
Wang, L.; Steeghs, D.; Galloway, D. K.; Marsh, T.; Casares, J.
2018-06-01
Neutron stars in low-mass X-ray binaries are considered promising candidate sources of continuous gravitational waves. These neutron stars typically rotate many hundreds of times a second. The process of accretion can potentially generate and support non-axisymmetric distortions of the compact object, resulting in persistent emission of gravitational waves. We present a study of existing optical spectroscopic data for Sco X-1, a prime target for continuous gravitational-wave searches, with the aim of providing revised constraints on key orbital parameters required for a directed search with Advanced LIGO data. From a circular orbit fit to an improved radial velocity curve of the Bowen emission components, we derived an updated orbital period and ephemeris. Centre-of-symmetry measurements from the Bowen Doppler tomogram yield a centre of the disc component of 90 km s⁻¹, which we interpret as a revised upper limit to the projected orbital velocity of the neutron star, K1. By implementing Monte Carlo binary parameter calculations, and imposing new limits on K1 and the rotational broadening, we obtained a complete set of dynamical system parameter constraints, including a new range for K1 of 40-90 km s⁻¹. Finally, we discuss the implications of the updated orbital parameters for future continuous-wave searches.
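The Monte Carlo step described above can be illustrated with a toy rejection sampler. The radial-velocity semi-amplitude formula is standard for a circular orbit; the mass and inclination priors below are illustrative assumptions, not the paper's actual inputs, and only the accepted K1 range (40-90 km s⁻¹) is taken from the abstract.

```python
import math, random

# Radial-velocity semi-amplitude of the neutron star for a circular orbit:
#   K1 = (2*pi*G / P)**(1/3) * M2 * sin(i) / (M1 + M2)**(2/3)
G = 6.674e-11           # m^3 kg^-1 s^-2
MSUN = 1.989e30         # kg
P = 0.787313 * 86400.0  # orbital period in seconds (approximate Sco X-1 value)

def k1_kms(m1, m2, inc_deg):
    """K1 in km/s for masses in solar units and inclination in degrees."""
    k1 = ((2 * math.pi * G / P) ** (1 / 3) * m2 * MSUN
          * math.sin(math.radians(inc_deg)) / ((m1 + m2) * MSUN) ** (2 / 3))
    return k1 / 1e3

random.seed(1)
samples = []
for _ in range(20000):
    m1 = random.uniform(1.2, 2.0)    # neutron-star mass (illustrative prior)
    m2 = random.uniform(0.2, 0.7)    # companion mass (illustrative prior)
    inc = random.uniform(25.0, 65.0) # inclination (illustrative prior)
    k1 = k1_kms(m1, m2, inc)
    if 40.0 <= k1 <= 90.0:           # keep draws consistent with the revised limits
        samples.append((m1, m2, inc, k1))

print(f"accepted {len(samples)} of 20000 draws")
```

Draws surviving the K1 cut jointly constrain the remaining system parameters, which is the essence of the paper's Monte Carlo approach.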
A robust nonlinear position observer for synchronous motors with relaxed excitation conditions
NASA Astrophysics Data System (ADS)
Bobtsov, Alexey; Bazylev, Dmitry; Pyrkin, Anton; Aranovskiy, Stanislav; Ortega, Romeo
2017-04-01
A robust, nonlinear and globally convergent rotor position observer for surface-mounted permanent magnet synchronous motors was recently proposed by the authors. The key feature of this observer is that it requires knowledge only of the motor's resistance and inductance. Using some particular properties of the mathematical model, it is shown that the problem of state observation can be translated into one of estimating two constant parameters, which is carried out with a standard gradient algorithm. In this work, we propose to replace this estimator with a new one, called dynamic regressor extension and mixing (DREM), which has the following advantages with respect to gradient estimators: (1) the stringent persistence of excitation (PE) condition on the regressor is not necessary to ensure parameter convergence; (2) convergence is instead guaranteed under a non-square-integrability condition that has a clear physical meaning in terms of signal energy; (3) if the regressor is PE, the new observer (like the old one) ensures exponential convergence, entailing some robustness properties for the observer; (4) the new estimator includes an additional filter that constitutes an extra degree of freedom to satisfy the non-square-integrability condition. Realistic simulation results show significant performance improvement of the position observer using the new parameter estimator, with less oscillatory behaviour and faster convergence.
Oltean, Gabriel; Ivanciu, Laura-Nicoleta
2016-01-01
The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate yet accurate metamodels capable of generating not-yet-simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1×10⁻⁵ for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: a maximum of 18 minutes on a general-purpose computer), and simplicity (less than 1 second to run the metamodel, with the user only providing the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space.
A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the output waveform).
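The waveform-characterization step can be sketched with a plain Haar transform and largest-coefficient selection. This is a simplification of the method above: the paper selects the wavelet and the relevant coefficients by genetic algorithm and predicts them with an ANN, whereas here a fixed Haar basis and a simple magnitude threshold stand in for both, and the test waveform is synthetic.

```python
import math

def haar_forward(x):
    """One full orthonormal Haar decomposition of a length-2^n signal."""
    coeffs, approx = [], list(x)
    while len(approx) > 1:
        s2 = math.sqrt(2.0)
        a = [(approx[2*i] + approx[2*i+1]) / s2 for i in range(len(approx) // 2)]
        d = [(approx[2*i] - approx[2*i+1]) / s2 for i in range(len(approx) // 2)]
        coeffs.append(d)
        approx = a
    return approx[0], coeffs  # final approximation + detail levels (fine -> coarse)

def haar_inverse(a0, coeffs):
    approx = [a0]
    for d in reversed(coeffs):
        s2 = math.sqrt(2.0)
        nxt = []
        for ai, di in zip(approx, d):
            nxt += [(ai + di) / s2, (ai - di) / s2]
        approx = nxt
    return approx

# A smooth test waveform, length 256 (a stand-in for a simulated response curve)
x = [math.exp(-((i - 128) / 40.0) ** 2) for i in range(256)]
a0, coeffs = haar_forward(x)

# Keep only the largest-magnitude detail coefficients (the "relevant" ones a
# metamodel would be trained to predict), zeroing the rest
flat = sorted((abs(c) for lvl in coeffs for c in lvl), reverse=True)
thresh = flat[19]  # keep roughly 20 of 255 detail coefficients
pruned = [[c if abs(c) >= thresh else 0.0 for c in lvl] for lvl in coeffs]

xr = haar_inverse(a0, pruned)
mse = sum((u - v) ** 2 for u, v in zip(x, xr)) / len(x)
print(f"reconstruction MSE with ~20 coefficients: {mse:.2e}")
```

The small reconstruction error from a handful of coefficients is what makes the compact wavelet representation a practical regression target.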
PMID:26745370
NASA Technical Reports Server (NTRS)
Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.
2009-01-01
Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Flight certification depends on a reliability analysis to quantify the risk of stress rupture failure in existing flight vessels. Full certification of this reliability model would require a statistically significant number of lifetime tests to be performed, which is impractical given the cost and the limited flight hardware available for certification testing. One approach to confirming the reliability model is to perform a stress rupture test on a flight COPV. Currently, testing of such a Kevlar 49 (DuPont)/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyzing the possible failure-time results of this test and assessing the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test, the latter due to major differences between the COPVs in the database and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely it is that the more optimistic stress ratio model is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one "nine," that is, reducing the predicted probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty in the Weibull shape parameter for lifetime, since testing several vessels would be necessary for that.
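The key result above (longer survival favors the optimistic model) follows directly from Bayes' rule applied to Weibull survival. A minimal sketch, with entirely illustrative lifetimes and shape parameter rather than the paper's fitted values:

```python
import math

# Weibull survival function S(t) = exp(-(t/eta)**beta)
def survival(t, eta, beta):
    return math.exp(-((t / eta) ** beta))

# Two competing stress-ratio models (all numbers illustrative): the optimistic
# model implies a much longer characteristic life eta.
beta = 1.2             # Weibull shape parameter for lifetime (uncertain in practice)
eta_optimistic = 50.0  # years, characteristic life under the low stress ratio
eta_pessimistic = 5.0  # years, characteristic life under the high stress ratio
prior_optimistic = 0.5 # equal prior belief in the two models

def posterior_optimistic(t):
    """P(optimistic model | vessel survived to time t), by Bayes' rule."""
    l_opt = survival(t, eta_optimistic, beta) * prior_optimistic
    l_pes = survival(t, eta_pessimistic, beta) * (1 - prior_optimistic)
    return l_opt / (l_opt + l_pes)

for t in (0.5, 1.0, 2.0, 5.0):
    print(f"survived {t:4.1f} y -> P(optimistic) = {posterior_optimistic(t):.3f}")
```

Because the pessimistic model's survival probability decays much faster, every additional survival interval shifts posterior weight toward the optimistic stress-ratio model, exactly the qualitative behavior the paper reports.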
MST radar transmitter control and monitor system
NASA Technical Reports Server (NTRS)
Brosnahan, J. W.
1983-01-01
A generalized transmitter control and monitor card was developed using the Intel 8031 (8051 family) microprocessor. The design was generalized so that this card can be utilized for virtually any control application with only firmware changes. The block diagram appears in Figure 2. The card provides for local control using a 16-key keypad (up to 64 keys are supported). The local display is four digits of 7-segment LEDs. The display can indicate the status of all major system parameters and provide voltage readout for the analog signal inputs. The card can be populated with only the chips required for a given application. Fully populated, the card has two RS-232 serial ports for computer communications. It has a total of 48 TTL parallel lines that can be defined as either inputs or outputs in groups of four. A total of 32 analog inputs with a 0-5 volt range are supported. In addition, a real-time clock/calendar is available if required. A total of 16 kbytes of ROM and 16 kbytes of RAM are available for programming. This card can be the basis of virtually any monitor or control system with appropriate software.
SEEPLUS: A SIMPLE ONLINE CLIMATE MODEL
NASA Astrophysics Data System (ADS)
Tsutsui, Junichi
A web application for a simple climate model, SEEPLUS (a Simple climate model to Examine Emission Pathways Leading to Updated Scenarios), has been developed. SEEPLUS consists of carbon-cycle and climate-change modules, through which it provides the information infrastructure required to perform climate-change experiments, even on millennial timescales. The main objective of this application is to share the latest scientific knowledge acquired from climate modeling studies among the different stakeholders involved in climate-change issues. Both the carbon-cycle and climate-change modules employ impulse response functions (IRFs) for their key processes, thereby enabling the model to integrate the outcome from an ensemble of complex climate models. The current IRF parameters and forcing manipulation are basically consistent with, or within an uncertainty range of, the understanding of certain key aspects such as the equilibrium climate sensitivity and ocean CO2 uptake data documented in representative literature. The carbon-cycle module enables inverse calculation to determine the emission pathway required in order to attain a given concentration pathway, thereby providing a flexible way to compare the module with more advanced modeling studies. The module also enables analytical evaluation of its equilibrium states, thereby facilitating the long-term planning of global warming mitigation.
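The IRF approach above amounts to convolving the forcing with a sum of exponential response modes. A minimal sketch with a two-timescale temperature IRF; the mode amplitudes, timescales, and the resulting 3.7 K sensitivity are illustrative assumptions, not SEEPLUS's calibrated values:

```python
import math

# Two-timescale temperature impulse response (illustrative parameters only):
#   R(t) = sum_i (a_i / tau_i) * exp(-t / tau_i)   [K per (W m^-2) per year]
# Under a constant step forcing F, temperature approaches F * sum_i a_i.
a   = [0.6, 0.4]    # K per (W m^-2), mode amplitudes
tau = [8.0, 300.0]  # years: fast (mixed-layer) and slow (deep-ocean) modes

def step_response(f_step, years, dt=1.0):
    """Temperature under constant forcing f_step, by discrete convolution."""
    n = int(years / dt)
    temp = 0.0
    for k in range(n):
        t = k * dt
        irf = sum(ai / ti * math.exp(-t / ti) for ai, ti in zip(a, tau))
        temp += irf * f_step * dt
    return temp

f2x = 3.7  # W m^-2, canonical forcing for doubled CO2
print(f"warming after   20 y: {step_response(f2x, 20):5.2f} K")
print(f"warming after 1000 y: {step_response(f2x, 1000):5.2f} K")
# the long-run value approaches f2x * (a[0] + a[1]) = 3.7 K, the assumed sensitivity
```

Because the response is a linear convolution, the same machinery can be run in reverse to recover the emissions or forcing pathway consistent with a prescribed concentration pathway, which is the inverse calculation the module supports.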
5G: rethink mobile communications for 2020+.
Chih-Lin, I; Han, Shuangfeng; Xu, Zhikun; Sun, Qi; Pan, Zhengang
2016-03-06
The 5G network is anticipated to meet the challenging requirements of mobile traffic in the 2020s, which are characterized by super-high data rates, low latency, high mobility, high energy efficiency and high traffic density. This paper provides an overview of China Mobile's 5G vision and potential solutions. Three key characteristics of 5G are analysed, i.e. super fast, soft and green. The main 5G R&D themes are further elaborated, which include five fundamental rethinkings of the traditional design methodologies. The 5G network design considerations are also discussed, with cloud radio access network, ultra-dense network, software-defined network and network function virtualization examined as key potential solutions towards a green and soft 5G network. The paradigm shift from traditional cell-centric operation to user-centric network operation is also investigated, where decoupled downlink and uplink, decoupled control and data, and adaptive multiple connections provide sufficient means to achieve a user-centric 5G network with 'no more cells'. The software-defined air interface is investigated under a uniform framework, whose parameters can be adapted to satisfy the various requirements of different 5G scenarios.
A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines
NASA Astrophysics Data System (ADS)
Wang, Bin; Zhao, Haocen; Ye, Zhifeng
2017-08-01
Aero-engine accessories require data-fused and user-friendly design because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used in it. Several essential aspects of modeling and simulation in this process are investigated. Considering the limitations of any single theoretic model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and accommodate the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the components' physical features using two different software tools. An interface is suggested to integrate the single-discipline models into a synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model of the FMU has high accuracy and a clear advantage over a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.
NASA Astrophysics Data System (ADS)
Harrison, Paul M.; Ellwi, Samir
2009-02-01
Within the vast range of laser materials processing applications, every type of successful commercial laser has been driven by a major industrial process. For high average power, high peak power, nanosecond pulse duration Nd:YAG DPSS lasers, the enabling process is high-speed surface engineering. This includes applications such as thin film patterning and selective coating removal in markets such as the flat panel display (FPD), solar and automotive industries. Applications such as these tend to require working spots with a uniform intensity distribution in specific shapes and dimensions, so a range of innovative beam delivery systems have been developed that convert the Gaussian beam produced by the laser into rectangular and/or shaped spots, as required by the demands of each project. In this paper the authors will discuss the key parameters of this type of laser, examine why they are important for high-speed surface engineering projects, and show how they affect the underlying laser-material interaction and the removal mechanism. Several case studies will be considered in the FPD and solar markets, exploring the close link between the application, the key laser characteristics, and the beam delivery system that links these together.
NASA Astrophysics Data System (ADS)
Kameswara Rao, P. V.; Rawal, Amit; Kumar, Vijay; Rajput, Krishn Gopal
2017-10-01
Absorptive glass mat (AGM) separators play a key role in enhancing the cycle life of valve-regulated lead acid (VRLA) batteries by maintaining their elastic characteristics under a defined level of compression against the electrode plates. Inevitably, there are inherent challenges in maintaining the required compression characteristics of AGM separators during the charge and discharge of the battery. Herein, we report a three-dimensional (3D) analytical model for predicting the compression-recovery behavior of AGM separators by formulating a direct relationship with the constituent fiber and structural parameters. The analytical model successfully includes a fiber slippage criterion and internal friction losses. The presented work uses, for the first time, 3D fiber orientation data from X-ray micro-computed tomography to predict the compression-recovery behavior of AGM separators. A comparison has been made between the theoretical and experimental compression-recovery results for AGM samples with defined fiber orientation characteristics. In general, the theory agreed reasonably well with the experimental results for AGM samples in both dry and wet states. Through theoretical modeling, fiber volume fraction was established as one of the key structural parameters modulating the compression hysteresis of an AGM separator.
Forward Bay Cover Separation Modeling and Testing for the Orion Multi-Purpose Crew Vehicle
NASA Technical Reports Server (NTRS)
Ali, Yasmin; Radke, Tara; Chuhta, Jesse; Hughes, Michael
2014-01-01
Spacecraft multi-body separation events during atmospheric descent require complex testing and analysis to validate the flight separation dynamics model and to verify no recontact. NASA Orion Multi-Purpose Crew Vehicle (MPCV) teams examined key model parameters and risk areas to develop a robust but affordable test campaign to validate and verify the Forward Bay Cover (FBC) separation event for Exploration Flight Test-1 (EFT-1). The FBC jettison simulation model is highly complex, consisting of dozens of parameters varied simultaneously, with numerous multi-parameter interactions (coupling and feedback) among the various model elements, and encompassing distinct near-field, mid-field, and far-field regimes. The test campaign, accomplished by a highly multi-disciplinary team, was composed of component-level testing (for example, of gas-piston thrusters and parachute mortars), ground FBC jettison tests, and FBC jettison air-drop tests. Three ground jettison tests isolated the mechanisms and structures to anchor the simulation models while excluding aerodynamic effects. Subsequently, two air-drop tests added aerodynamic and parachute parameters and served as integrated system demonstrations, building on the Orion Pad Abort-1 (PA-1) flight test of May 2010. Both ground and drop tests provided extensive data to validate analytical models and to verify the FBC jettison event for EFT-1, but more testing is required to support human certification, for which NASA and Lockheed Martin are applying knowledge from Apollo and EFT-1 testing and modeling to develop a robust but affordable human spacecraft capability.
On the issues of probability distribution of GPS carrier phase observations
NASA Astrophysics Data System (ADS)
Luo, X.; Mayer, M.; Heck, B.
2009-04-01
In common practice, the observables related to the Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm, which is based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle slip detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal distribution postulate for GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS
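Checking the normality postulate on residual samples comes down to simple moment statistics. A minimal sketch with synthetic Gaussian residuals standing in for real double-difference residuals (under multipath, real residuals would typically show heavy tails, i.e. positive excess kurtosis):

```python
import math, random

def skew_kurt(xs):
    """Sample skewness and excess kurtosis, the moments behind normality tests."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

random.seed(42)
# Stand-in for double-difference carrier phase residuals, in millimetres
residuals = [random.gauss(0.0, 2.0) for _ in range(50000)]
s, k = skew_kurt(residuals)
jb = len(residuals) / 6.0 * (s ** 2 + k ** 2 / 4.0)  # Jarque-Bera statistic
print(f"skewness {s:+.3f}, excess kurtosis {k:+.3f}, JB {jb:.2f}")
```

For truly Gaussian residuals both moments hover near zero and the Jarque-Bera statistic stays small; large values flag departures from the normal distribution postulate discussed above.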
Economic Efficiency and Investment Timing for Dual Water Systems
NASA Astrophysics Data System (ADS)
Leconte, Robert; Hughes, Trevor C.; Narayanan, Rangesan
1987-10-01
A general methodology to evaluate the economic feasibility of dual water systems is presented. In a first step, a static analysis (evaluation at a single point in time) is developed. The analysis requires the evaluation of consumers' and producer's surpluses from water use and the capital cost of the dual (outdoor) system. The analysis is then extended to a dynamic approach where the water demand increases with time (as a result of a population increase) and where the dual system is allowed to expand. The model determines whether construction of a dual system represents a net benefit, and if so, what is the best time to initiate the system (corresponding to maximization of social welfare). Conditions under which an analytic solution is possible are discussed and results of an application are summarized (including sensitivity to different parameters). The analysis allows identification of key parameters influencing attractiveness of dual water systems.
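The dynamic timing question above can be illustrated with a brute-force search over candidate start years. All numbers below are hypothetical stand-ins: the paper's model maximizes social welfare from consumers' and producer's surpluses, which this sketch collapses into a single growing net-benefit stream.

```python
# Illustrative dynamic timing search (all numbers hypothetical): net annual
# benefits of the dual system grow with demand, so building too early wastes
# capital while building too late forgoes benefits.
r = 0.06            # discount rate
capital = 1000.0    # capital cost of the dual (outdoor) system
b0, g = 40.0, 0.04  # initial annual net benefit and its demand-driven growth
horizon = 100       # planning horizon in years

def npv_if_started(t0):
    """Present value of net benefits if the dual system starts in year t0."""
    benefits = sum(b0 * (1 + g) ** t / (1 + r) ** t for t in range(t0, horizon))
    return benefits - capital / (1 + r) ** t0

best = max(range(horizon), key=npv_if_started)
print(f"welfare-maximizing start year: {best}, NPV = {npv_if_started(best):.1f}")
```

The interior optimum arises because deferring construction saves interest on the capital outlay until demand growth makes the forgone annual benefit exceed that saving, the same marginal logic underlying the analytic timing condition.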
NASA Astrophysics Data System (ADS)
Chandrakanth, Balaji; Venkatesan, G; Prakash Kumar, L. S. S; Jalihal, Purnima; Iniyan, S
2018-03-01
The present work discusses the design and selection of a shell and tube condenser used in Low Temperature Thermal Desalination (LTTD). To optimize the key geometrical and process parameters of the condenser, with multiple parameters and levels involved, a design-of-experiments approach using the Taguchi method was chosen. An orthogonal array (OA) of 25 designs was selected for this study. The condenser was designed and analysed using HTRI software, and the heat transfer area and corresponding tube-side pressure drop were computed, as these two objective functions determine the capital and running cost of the condenser. There was a complex trade-off between heat transfer area and pressure drop in the analysis, so a second-law analysis was carried out to determine the optimal balance between heat transfer area and pressure drop for condensing the required heat load.
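A 25-run Taguchi plan of the kind described corresponds to the standard L25(5^6) orthogonal array, which can be constructed directly (the factor-to-column assignment for the condenser study is not given in the abstract, so this is a generic construction):

```python
from itertools import combinations

# Construct the L25(5^6) orthogonal array: rows are indexed by (a, b) in
# {0..4}^2 and the six columns are a, b, a+b, a+2b, a+3b, a+4b (mod 5),
# which guarantees pairwise balance across any two columns.
levels = 5
rows = []
for a in range(levels):
    for b in range(levels):
        rows.append([a, b] + [(a + m * b) % levels for m in range(1, levels)])

# Orthogonality check: every pair of columns exhibits each of the 25 possible
# level combinations exactly once.
for c1, c2 in combinations(range(6), 2):
    pairs = {(row[c1], row[c2]) for row in rows}
    assert len(pairs) == 25

print(f"{len(rows)} runs x {len(rows[0])} columns, pairwise balanced")
```

Each row is one condenser configuration to simulate in HTRI; the pairwise balance is what lets main effects of up to six five-level factors be estimated from only 25 of the 5^6 possible runs.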
Proof of concept of a novel SMA cage actuator
NASA Astrophysics Data System (ADS)
Deyer, Christopher W.; Brei, Diann E.
2001-06-01
Numerous industrial applications that currently utilize expensive solenoids or slow wax motors are good candidates for smart material actuation. Many of these applications require millimeter-scale displacement and low cost, thereby eliminating piezoelectric technologies. Fortunately, there is a subset of these applications that can tolerate the slower response of shape memory alloys. This paper details a proof-of-concept study of a novel SMA cage actuator intended for proportional braking in commercial appliances. The chosen actuator architecture consists of an SMA wire cage enclosing a return spring. To develop an understanding of the influences of key design parameters on the actuator response time and displacement amplitude, a half-factorial 2^5 Design of Experiments (DOE) study was conducted utilizing eight differently configured prototypes. The DOE results guided the selection of the design parameters for the final proof-of-concept actuator. This actuator was built and experimentally characterized for stroke, proportional control and response time.
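A half-fraction of a 2^5 factorial can be generated in the standard way, by taking a full factorial in four factors and defining the fifth from their product (the published study's exact generator and run count may differ; this is the textbook 2^(5-1) construction):

```python
from itertools import product

# 16-run half-fraction of the full 2^5 factorial: four factors A-D take all
# +/-1 combinations and the fifth is generated as E = ABCD (defining relation
# I = ABCDE, resolution V).
design = []
for a, b, c, d in product((-1, 1), repeat=4):
    design.append((a, b, c, d, a * b * c * d))

# Each factor is balanced across the fraction...
assert all(sum(run[j] for run in design) == 0 for j in range(5))
# ...and the defining contrast ABCDE equals +1 in every run.
assert all(a * b * c * d * e == 1 for a, b, c, d, e in design)
print(f"{len(design)}-run resolution-V half fraction of 2^5 generated")
```

With resolution V, main effects are aliased only with four-factor interactions, so the influence of each design parameter on response time and stroke can be estimated cleanly at half the experimental cost.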
Precision Viticulture from Multitemporal, Multispectral Very High Resolution Satellite Data
NASA Astrophysics Data System (ADS)
Kandylakis, Z.; Karantzalos, K.
2016-06-01
To exploit very high resolution multispectral satellite data efficiently for precision agriculture applications, validated methodologies should be established that link the observed reflectance spectra with certain crop/plant/fruit biophysical and biochemical quality parameters. To this end, based on concurrent satellite and field campaigns during the veraison period, satellite and in-situ data were collected, along with several grape samples, at specific locations during the harvesting period. These data were collected over a period of three years in two viticultural areas in Northern Greece. After the required data pre-processing, canopy reflectance observations, through the combination of several vegetation indices, were correlated with the quantitative results of the grape/must analysis of the grape samples. Results appear quite promising, indicating that certain key quality parameters (such as Brix levels, total phenolic content, Brix to total acidity, and anthocyanin levels) which describe the oenological potential, phenolic composition and chromatic characteristics can be efficiently estimated from the satellite data.
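The index-to-quality correlation step can be sketched as follows. Everything here is synthetic and hypothetical: NDVI stands in for the combination of vegetation indices used, and the plot-level reflectances and Brix values are generated with a built-in trend purely to illustrate the workflow.

```python
import random

# Hypothetical sketch: correlate a satellite vegetation index (NDVI) with a
# grape quality parameter (Brix) measured at the sampled locations.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

random.seed(0)
plots = []
for _ in range(30):
    vigor = random.uniform(0.2, 0.8)                  # latent canopy vigor
    nir = 0.30 + 0.4 * vigor + random.gauss(0, 0.02)  # NIR reflectance
    red = 0.20 - 0.1 * vigor + random.gauss(0, 0.01)  # red reflectance
    brix = 18.0 + 6.0 * vigor + random.gauss(0, 0.5)  # field-measured Brix
    plots.append((ndvi(nir, red), brix))

r = pearson([p[0] for p in plots], [p[1] for p in plots])
print(f"NDVI vs Brix correlation over {len(plots)} plots: r = {r:.2f}")
```

A strong correlation of this kind is what justifies estimating the quality parameter map directly from the satellite imagery between field campaigns.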
Design of a Single Motor Based Leg Structure with the Consideration of Inherent Mechanical Stability
NASA Astrophysics Data System (ADS)
Taha Manzoor, Muhammad; Sohail, Umer; Noor-e-Mustafa; Nizami, Muhammad Hamza Asif; Ayaz, Yasar
2017-07-01
The fundamental aspect of designing a legged robot is constructing a leg design that is robust and presents a simple control problem. In this paper, we have successfully designed a robotic leg based on a unique four bar mechanism with only one motor per leg. The leg design parameters used in our platform are extracted from design principles used in biological systems, multiple iterations and previous research findings. These principles guide a robotic leg to have minimal mechanical passive impedance, low leg mass and inertia, a suitable foot trajectory utilizing a practical balance between leg kinematics and robot usage, and the resultant inherent mechanical stability. The designed platform also exhibits the key feature of self-locking. Theoretical tools and software iterations were used to derive these practical features and yield an intuitive sense of the required leg design parameters.
NASA Astrophysics Data System (ADS)
Peithmann, K.; Eversheim, P.-D.; Goetze, J.; Haaks, M.; Hattermann, H.; Haubrich, S.; Hinterberger, F.; Jentjens, L.; Mader, W.; Raeth, N. L.; Schmid, H.; Zamani-Meymian, M.-R.; Maier, K.
2011-10-01
Ferroelectric lithium niobate crystals offer great potential for applications in modern optics. To provide powerful optical components, tailoring of key material parameters, especially of the refractive index n and the ferroelectric domain landscape, is required. Irradiation of lithium niobate crystals with accelerated ions causes strong, structured modifications in the material. The effects induced by low-mass, high-energy ions (such as 3He at 41 MeV, which are not implanted but transmit through the entire crystal volume) are reviewed. Irradiation yields large changes of the refractive index Δn, improved domain engineering capability within the material along the ion track, and waveguiding structures. The periodic modification of Δn as well as the formation of periodically poled lithium niobate (PPLN), supported by radiation damage, is described. Two-step knock-on displacement processes, 3He→Nb and 3He→O, causing thermal spikes, are identified as the origin of the material modifications.
Numerical Simulation Of Cratering Effects In Adobe
2013-07-01
DEVELOPMENT OF MATERIAL PARAMETERS ... PROBLEM SETUP ... PARAMETER ADJUSTMENTS ... GLOSSARY ... dependent yield surface with the Geological Yield Surface (GEO) modeled in CTH using well characterized adobe. By identifying key parameters that
NASA Astrophysics Data System (ADS)
Haeffelin, Martial
2016-04-01
Radiation fog formation is largely influenced by the chemical composition, size and number concentration of cloud condensation nuclei and by heating/cooling and drying/moistening processes in a shallow mixing layer near the surface. Once a fog water layer is formed, its development and dissipation become predominantly controlled by radiative cooling/heating, turbulent mixing, sedimentation and deposition. Key processes occur in the atmospheric surface layer, directly in contact with the soil and vegetation, and throughout the atmospheric column. Recent publications provide detailed descriptions of these processes for idealized cases using very high-resolution models and proper representation of microphysical processes. Studying these processes in real fog situations requires atmospheric profiling capabilities to monitor the temporal evolution of key parameters at several heights (surface, inside the fog, fog top, free troposphere). This could be done with in-situ sensors flown on tethered balloons or drones during dedicated intensive field campaigns. In addition, Backscatter Lidars, Doppler Lidars, Microwave Radiometers and Cloud Doppler Radars can provide more continuous, yet precise, monitoring of key parameters throughout the fog life cycle. The presentation will describe how Backscatter Lidars can be used to study the height and kinetics of aerosol activation into fog droplets. Next we will show the potential of Cloud Doppler Radar measurements to characterize the temporal evolution of droplet size, liquid water content, sedimentation and deposition. Contributions from Doppler Lidars and Microwave Radiometers will be discussed. The presentation will conclude with the potential of Lidar and Radar remote sensing measurements to support operational fog nowcasting.
Rate-loss analysis of an efficient quantum repeater architecture
NASA Astrophysics Data System (ADS)
Guha, Saikat; Krovi, Hari; Fuchs, Christopher A.; Dutton, Zachary; Slater, Joshua A.; Simon, Christoph; Tittel, Wolfgang
2015-08-01
We analyze an entanglement-based quantum key distribution (QKD) architecture that uses a linear chain of quantum repeaters employing photon-pair sources, spectral multiplexing, linear-optic Bell-state measurements, multimode quantum memories, and classical-only error correction. Assuming perfect sources, we find an exact expression for the secret-key rate, and an analytical description of how errors propagate through the repeater chain, as a function of various loss-and-noise parameters of the devices. We show via an explicit analytical calculation, which separately addresses the effects of the principal nonidealities, that this scheme achieves a secret-key rate that surpasses the Takeoka-Guha-Wilde bound, a recently found fundamental limit to the rate-vs-loss scaling achievable by any QKD protocol over a direct optical link, thereby providing one of the first rigorous proofs of the efficacy of a repeater protocol. We explicitly calculate the end-to-end shared noisy quantum state generated by the repeater chain, which could be useful for analyzing the performance of other non-QKD quantum protocols that require establishing long-distance entanglement. We evaluate that shared state's fidelity and the achievable entanglement-distillation rate, as a function of the number of repeater nodes, total range, and various loss-and-noise parameters of the system. We extend our theoretical analysis to encompass sources with nonzero two-pair-emission probability, using an efficient exact numerical evaluation of the quantum state propagation and measurements. We expect our results to spur formal rate-loss analysis of other repeater protocols and also to provide useful abstractions to seed analyses of quantum networks of complex topologies.
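The rate-vs-loss crossover at the heart of this result can be illustrated numerically. The sketch below is an illustration only, not the paper's exact rate expression: it compares the Takeoka-Guha-Wilde direct-link bound with a toy repeater-chain rate that scales with the transmissivity of a single segment; the prefactor `c`, standing in for all device nonidealities, is an assumed placeholder.

```python
import math

def tgw_bound(eta):
    """Takeoka-Guha-Wilde bound on direct-link QKD rate (bits per mode):
    log2((1 + eta) / (1 - eta)), where eta is end-to-end transmissivity."""
    return math.log2((1 + eta) / (1 - eta))

def toy_repeater_rate(eta, n_nodes, c=0.01):
    """Toy repeater-chain rate: scales as the transmissivity of ONE of the
    n_nodes + 1 segments, eta ** (1 / (n_nodes + 1)). The prefactor c is an
    assumed stand-in for source, memory and detector nonidealities."""
    return c * eta ** (1.0 / (n_nodes + 1))

# At high loss (e.g. 200 km of fiber at 0.2 dB/km, eta = 1e-4), even a small
# repeater chain beats the direct-link bound; at low loss it does not.
eta_far, eta_near = 1e-4, 0.5
print(toy_repeater_rate(eta_far, 3) > tgw_bound(eta_far))    # repeaters win
print(toy_repeater_rate(eta_near, 3) > tgw_bound(eta_near))  # direct link wins
```

The qualitative point survives the crude prefactor: because the repeater rate decays with a fractional power of the loss, it must overtake any direct-transmission bound at sufficiently long range.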
NASA Astrophysics Data System (ADS)
Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin
2017-10-01
The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model for full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.
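A minimal sketch of the kind of key-rate model being optimized, assuming the standard asymptotic GLLP rate for a weak coherent source with ideal decoy estimation (i.e., without the source errors and finite statistics that this paper adds); all device parameters below are illustrative assumptions, not values from the paper.

```python
import math

def h2(x):
    """Binary entropy function."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def wcs_key_rate(mu, eta, y0=1e-5, e_det=0.01, f_ec=1.16):
    """Asymptotic GLLP key rate for a weak coherent source of intensity mu
    over a channel of transmissivity eta, assuming perfect decoy estimation.
    y0: dark-count yield; e_det: misalignment error; f_ec: error-correction
    inefficiency (all illustrative values)."""
    q_mu = y0 + 1 - math.exp(-eta * mu)                  # overall gain
    e_mu = (0.5 * y0 + e_det * (1 - math.exp(-eta * mu))) / q_mu
    y1 = y0 + eta                                        # single-photon yield
    q1 = y1 * mu * math.exp(-mu)                         # single-photon gain
    e1 = (0.5 * y0 + e_det * eta) / y1                   # single-photon QBER
    return max(0.0, q1 * (1 - h2(e1)) - q_mu * f_ec * h2(e_mu))

# Brute-force optimization of the signal intensity: the simplest instance of
# the "full parameter optimization" the paper performs over many parameters.
eta = 0.1
best_mu = max((m / 100 for m in range(1, 100)),
              key=lambda m: wcs_key_rate(m, eta))
print(best_mu, wcs_key_rate(best_mu, eta))
```

In the full problem the search runs jointly over intensities, basis-choice biases and decoy probabilities, but the structure (maximize a rate formula over source parameters) is the same.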
Gariano, John; Neifeld, Mark; Djordjevic, Ivan
2017-01-20
Here, we present engineering trade studies of a free-space optical communication system operating over a 30 km maritime channel for the months of January and July. The system under study follows the BB84 protocol with the following assumptions: a weak coherent source is used, Eve performs the intercept-resend and photon-number-splitting attacks, Eve's location is known a priori, and Eve is allowed to know a small percentage of the final key. In this system, we examine the effect of changing several parameters in the following areas: the implementation of the BB84 protocol over the public channel, the technology in the receiver, and our assumptions about Eve. For each parameter, we examine how different values impact the secure key rate for a constant brightness. Additionally, we optimize the brightness of the source for each parameter to study the improvement in the secure key rate.
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
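The minimax idea can be made concrete with a scalar toy problem, which is our own illustration rather than the paper's continuous-time quantum phase model: a random-walk phase with uncertain process-noise variance q, tracked by a fixed-gain filter. The guaranteed-cost gain minimizes the worst-case steady-state error variance over the q interval, so by construction it never does worse in the worst case than the Kalman gain designed for the interval's midpoint.

```python
def steady_state_var(L, q, r):
    """Steady-state error variance of the fixed-gain filter
    xhat_k = xhat_{k-1} + L * (y_k - xhat_{k-1}) for the random walk
    x_k = x_{k-1} + w_k (var q), y_k = x_k + v_k (var r):
    P = ((1-L)^2 * q + L^2 * r) / (1 - (1-L)^2)."""
    a = (1 - L) ** 2
    return (a * q + L ** 2 * r) / (1 - a)

def kalman_gain(q, r, iters=200):
    """Steady-state Kalman gain for the same model, by Riccati iteration."""
    p = q
    for _ in range(iters):
        p_pred = p + q
        gain = p_pred / (p_pred + r)
        p = (1 - gain) * p_pred
    return gain

r = 1.0
q_lo, q_hi = 0.01, 1.0                      # uncertain process-noise range
q_grid = [q_lo + (q_hi - q_lo) * i / 50 for i in range(51)]
gains = [i / 200 for i in range(1, 200)]    # candidate gains in (0, 1)

def worst_case(L):
    return max(steady_state_var(L, q, r) for q in q_grid)

L_nom = kalman_gain(0.5 * (q_lo + q_hi), r)   # Kalman for the midpoint model
L_rob = min(gains + [L_nom], key=worst_case)  # guaranteed-cost gain
print(worst_case(L_rob) <= worst_case(L_nom))  # True by construction
```

All numbers are illustrative; the paper's filter additionally handles the quantum measurement model and heterodyne benchmark, which this scalar sketch omits.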
NASA Astrophysics Data System (ADS)
Semenov, Mikhail A.; Stratonovitch, Pierre; Paul, Matthew J.
2017-04-01
Short periods of extreme weather, such as a spell of high temperature or drought during a sensitive stage of development, can result in substantial yield losses due to reductions in grain number and grain size. In a modelling study (Stratonovitch & Semenov 2015), heat tolerance around flowering in wheat was identified as a key trait for increased yield potential in Europe under climate change. Ji et al. (2010) demonstrated cultivar-specific responses of yield to drought stress around flowering in wheat. They hypothesised that carbohydrate supply to anthers may be key to maintaining pollen fertility and grain number in wheat. Nuccio et al. (2015) showed that genetically modified maize varieties with increased sucrose concentration in ear spikelets performed better under both non-drought and drought conditions in field experiments. The objective of this modelling study was to assess the potential benefits of tolerance to drought during reproductive development for wheat yield potential and yield stability across Europe. We used the Sirius wheat model to optimise wheat ideotypes for 2050 (HadGEM2, RCP8.5) climate scenarios at selected European sites. Eight cultivar parameters were optimised to maximise mean yields, including parameters controlling phenology, canopy growth and water limitation. At those sites where water could be limited, ideotypes sensitive to drought produced substantially lower mean yields and higher yield variability compared with tolerant ideotypes. Therefore, tolerance to drought during reproductive development is likely to be required for wheat cultivars optimised for the future climate in Europe in order to achieve high yield potential and yield stability.
Srivastava, Vineet Kumar; Tuteja, Narendra
2014-01-01
Forisome proteins belong to the SEO gene family and are unique to the Fabaceae family. These proteins are located in the sieve tubes of phloem and function to prevent the loss of nutrient-rich photoassimilates upon mechanical injury or wounding. Forisomes are also known as ATP-independent, mechanically active proteins. Despite this wealth of information, the role of forisomes in plants is not yet fully understood. Recent reports suggest that forisome proteins can serve as an ideal model for studying self-assembly mechanisms in the development of nanotechnological devices such as microfluidic systems for space exploration missions. Improved micro-instrumentation is in high demand and has been identified by NASA as a key technology for future space exploration missions. Based on their physical parameters, forisomes are ideal biomimetic materials for microfluidic systems because their conformational shifts can be replicated in vitro and are fully reversible over a large number of cycles. Recombinant forisome proteins can be tailored through protein engineering. Forisomes have received much attention owing to their unique ability to convert chemical energy into mechanical energy. Microfluidic systems will be a powerful technology for nanotechnological applications and for handling biomolecules such as DNA, RNA, proteins and whole cells. The discovery of new biomimetic smart materials has been a key factor in the development of space science and its requirements in such a challenging environment. The field of microfluidics, particularly the development of its components and the identification of new biomimetic smart materials, deserves more attention, and further biophysical investigation is required to characterize forisomes and make them better suited to the required performance parameters.
Decreasing Kd uncertainties through the application of thermodynamic sorption models.
Domènech, Cristina; García, David; Pękala, Marek
2015-09-15
Radionuclide retardation processes during transport are expected to play an important role in the safety assessment of subsurface disposal facilities for radioactive waste. The linear distribution coefficient (Kd) is often used to represent radionuclide retention, because analytical solutions to the classic advection-diffusion-retardation equation under simple boundary conditions are readily obtainable, and because numerical implementation of this approach is relatively straightforward. For these reasons, the Kd approach lends itself to the probabilistic calculations required by Performance Assessment (PA). However, it is widely recognised that Kd values derived from laboratory experiments generally have a narrow field of validity, and that the uncertainty of the Kd outside this field increases significantly. Mechanistic multicomponent geochemical simulators can be used to calculate Kd values under a wide range of conditions. This approach is powerful and flexible, but requires expert knowledge on the part of the user. The work presented in this paper aims to develop a simplified approach to estimating Kd values whose level of accuracy is comparable with that obtained by fully-fledged geochemical simulators. The proposed approach consists of deriving simplified algebraic expressions by combining the relevant mass action equations. This approach was applied to three distinct geochemical systems involving surface complexation and ion-exchange processes. Within the bounds imposed by model simplifications, the presented approach allows radionuclide Kd values to be estimated as a function of key system-controlling parameters, such as pH and mineralogy. This approach could be used by PA professionals to assess the impact of key geochemical parameters on the variability of radionuclide Kd values.
Moreover, the presented approach could be relatively easily implemented in existing codes to represent the influence of temporal and spatial changes in geochemistry on Kd values. Copyright © 2015 Elsevier B.V. All rights reserved.
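As an illustration of the kind of simplified algebraic expression meant here (our own one-site example, not one of the paper's three systems): for a surface-complexation reaction ≡SOH + M²⁺ ⇌ ≡SOM⁺ + H⁺ with constant K, far from site saturation the mass-action law collapses to a Kd that depends only on K, the site concentration and pH. The logK and site density below are hypothetical values, not fitted to any real mineral.

```python
def kd_one_site(pH, logK=-2.0, sites_mol_per_kg=1e-3):
    """Toy one-site surface-complexation estimate:
    Kd ~ K * sites / [H+],
    valid only when the sorbed metal is far below the site capacity.
    logK and the site density are hypothetical illustration values."""
    h_plus = 10.0 ** (-pH)
    return (10.0 ** logK) * sites_mol_per_kg / h_plus

# In this regime Kd rises one order of magnitude per pH unit: the kind of
# system-controlling dependence a PA analyst could propagate directly.
for ph in (4.0, 6.0, 8.0):
    print(ph, kd_one_site(ph))
```

The value of such closed forms is that the pH (or mineral-abundance) sensitivity of Kd becomes explicit and cheap to sample in probabilistic PA calculations, instead of being hidden inside a full geochemical-simulator run.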
Application of lab derived kinetic biodegradation parameters at the field scale
NASA Astrophysics Data System (ADS)
Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.
2003-04-01
Estimating the intrinsic remediation potential of an aquifer typically requires the accurate assessment of the biodegradation kinetics, the level of available electron acceptors and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are typically not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for forecasting plume development because they may be an oversimplification of the processes at the field scale and ignore several key processes, phenomena and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scale by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without having to calibrate the model. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to define the key controlling factors at the field scale. It is shown that the most uncertain input parameters have only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available and the Monod kinetic parameters have a significant influence on the simulated results. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation processes, if all controlling factors that are not necessarily observed at the lab scale are incorporated in the field-scale modelling.
In this way, no additional scale relationships are needed to link the laboratory and the field scale: provided that the processes, phenomena and characteristics relevant at the larger scale, namely a) advective and dispersive transport of one or more contaminants, b) advective and dispersive transport and availability of electron acceptors, c) mass transfer limitations and d) spatial heterogeneities, are accurately incorporated, applying well-defined lab-scale parameters should accurately describe field-scale processes.
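The mechanism behind the lab-to-field overprediction can be sketched with dual-Monod kinetics, where degradation is throttled by both substrate and electron-acceptor (oxygen) availability. All parameter values below are illustrative, not the BIO3D/Borden values.

```python
def simulate(s0=10.0, x0=0.1, o0=35.0, mu_max=0.5, ks=2.0, ko=0.2,
             y_x=0.5, y_o=3.0, dt=0.01, t_end=50.0):
    """Euler integration of dual-Monod biodegradation:
      growth = mu_max * S/(Ks+S) * O/(Ko+O) * X
      dS/dt = -growth / y_x        (substrate, e.g. dissolved gasoline)
      dO/dt = -y_o * growth / y_x  (electron acceptor, e.g. oxygen)
      dX/dt = growth               (microbial biomass)
    All parameter values are illustrative placeholders."""
    s, x, o = s0, x0, o0
    for _ in range(int(t_end / dt)):
        growth = mu_max * (s / (ks + s)) * (o / (ko + o)) * x
        s = max(0.0, s - growth / y_x * dt)
        o = max(0.0, o - y_o * growth / y_x * dt)
        x = x + growth * dt
    return s, x, o

s_end, x_end, o_end = simulate()          # ample oxygen: substrate consumed
s_low, _, _ = simulate(o0=3.0)            # oxygen-starved: degradation stalls
print(s_end, s_low)
```

With ample oxygen the substrate is removed; cap the available electron acceptor and removal stops far short of completion, which is exactly why a lab-derived rate (measured with oxygen in excess) overpredicts field-scale degradation.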
Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes
NASA Astrophysics Data System (ADS)
Hirsch, Damian; Gharib, Morteza
2016-11-01
Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (i.e., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (i.e., airplane incremental lift) demands a higher input fluidic requirement (i.e., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most widely used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature, two main approaches can be found, each with its own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model and can be applied to most jet designs (i.e., steady or sweeping jets). The model's assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight into the AFC technology and its physical limitations. Supported by Boeing.
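For reference, the conventional definition being simplified is Cμ = ṁ·U_jet / (q∞·S_ref). The sketch below just evaluates it; every number is an illustrative placeholder rather than data from this work.

```python
def momentum_coefficient(m_dot, u_jet, rho_inf, u_inf, s_ref):
    """Input momentum coefficient C_mu = (m_dot * u_jet) / (q_inf * S_ref),
    with freestream dynamic pressure q_inf = 0.5 * rho_inf * u_inf**2.
    The difficulty the abstract points to is measuring u_jet (or the
    equivalent jet momentum) reliably for steady or sweeping jets."""
    q_inf = 0.5 * rho_inf * u_inf ** 2
    return (m_dot * u_jet) / (q_inf * s_ref)

# Illustrative numbers: 0.05 kg/s ejected at 150 m/s over a 0.8 m^2
# reference area in a 40 m/s freestream at sea-level density.
c_mu = momentum_coefficient(0.05, 150.0, 1.225, 40.0, 0.8)
print(round(c_mu, 4))   # ~0.0096
```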
Paths to future growth in photovoltaics manufacturing
Basore, Paul A.
2016-03-01
The past decade has seen rapid growth in the photovoltaics industry, followed in the past few years by a period of much slower growth. A simple model that is consistent with this historical record can be used to predict the future evolution of the industry. Two key parameters are identified that determine the outcome. One is the annual global investment in manufacturing capacity normalized to the manufacturing capacity for the previous year (capacity-normalized capital investment rate, CapIR, units dollar/W). The other is how much capital investment is required for each watt of annual manufacturing capacity, normalized to the service life of the assets (capacity-normalized capital demand rate, CapDR, units dollar/W). If these two parameters remain unchanged from the values they have held for the past few years, global manufacturing capacity will peak in the next few years and then decline. However, it only takes a modest improvement in CapIR to ensure future growth in photovoltaics. Here, several approaches are presented that can enable the required improvement in CapIR. If, in addition, there is an accompanying improvement in CapDR, the rate of growth can be substantially accelerated.
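One hedged reading of such a model (our own reconstruction from the parameter definitions, not Basore's exact equations): each year the industry invests CapIR·C dollars, which buys CapIR·C/capex watts of new annual capacity, while C/life watts of old capacity retires. With CapDR defined as capex/life, capacity grows exactly when CapIR exceeds CapDR.

```python
def project_capacity(c0, cap_ir, capex_per_watt, life_years, years):
    """Toy capacity recurrence (a reconstruction, with illustrative numbers):
    additions = CapIR * C / capex, retirements = C / life.
    Since CapDR = capex / life, net growth requires CapIR > CapDR."""
    c = c0
    history = [c]
    for _ in range(years):
        c += cap_ir * c / capex_per_watt - c / life_years
        history.append(c)
    return history

capex, life = 0.5, 10.0            # $/W of annual capacity; asset life (yr)
cap_dr = capex / life              # = 0.05 $/W
growing = project_capacity(100.0, 0.06, capex, life, 10)    # CapIR > CapDR
shrinking = project_capacity(100.0, 0.04, capex, life, 10)  # CapIR < CapDR
print(growing[-1] > 100.0, shrinking[-1] < 100.0)
```

The recurrence makes the paper's headline point mechanical: a small shift of CapIR across the CapDR threshold flips the trajectory from decline to sustained growth.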
A Procedure to Measure the in-Situ Hygrothermal Behavior of Earth Walls
Chabriac, Pierre-Antoine; Fabbri, Antonin; Morel, Jean-Claude; Laurent, Jean-Paul; Blanc-Gonnet, Joachim
2014-01-01
Rammed earth is a sustainable material with low embodied energy. However, its development as a building material requires a better evaluation of its moisture-thermal buffering abilities and its mechanical behavior. Both of these properties are known to depend strongly on the amount of water contained in the wall pores and on its evolution. Thus, the aim of this paper is to present a procedure to measure this key parameter in rammed earth or cob walls using two types of probes operating on the Time Domain Reflectometry (TDR) principle. A calibration procedure for the probes, requiring only four parameters, is described. This calibration procedure is then used to monitor the hygrothermal behavior of a rammed earth wall (1.5 m × 1 m × 0.5 m), instrumented with six probes during its manufacture and subjected to insulated, natural convection and forced convection conditions. These measurements underline the robustness of the calibration procedure over a large range of water content, even when the wall is subjected to quite large temperature variations. They also emphasize the importance of gravity on water content heterogeneity when the saturation is high, as well as the role of liquid-to-vapor phase change in the thermal behavior. PMID:28788603
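The paper's own four-parameter probe calibration is not reproduced in the abstract; as a stand-in, the widely used Topp et al. (1980) polynomial illustrates how TDR converts an apparent dielectric permittivity Ka into volumetric water content.

```python
def topp_water_content(ka):
    """Topp et al. (1980) empirical calibration: volumetric water content
    theta (m^3/m^3) from the TDR apparent dielectric permittivity Ka.
    This is the generic mineral-soil calibration, NOT the four-parameter
    probe calibration developed in this paper."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka ** 2 + 4.3e-6 * ka ** 3

# Dry soil (Ka ~ 4) vs moist soil (Ka ~ 20):
print(round(topp_water_content(4.0), 3), round(topp_water_content(20.0), 3))
```

Earth-construction materials depart from the generic curve, which is precisely why a dedicated, temperature-robust calibration such as the one described here is needed.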
Strategies for Optimal MAC Parameters Tuning in IEEE 802.15.6 Wearable Wireless Sensor Networks.
Alam, Muhammad Mahtab; Ben Hamida, Elyes
2015-09-01
Wireless body area networks (WBANs) have contributed immensely to revolutionizing the classical health-care system. Recently, a number of WBAN applications have emerged that push the limits of existing solutions. In particular, the IEEE 802.15.6 standard has provided great flexibility, provisions and capabilities for dealing with emerging applications. In this paper, we investigate application-specific throughput by fine-tuning the physical (PHY) and medium access control (MAC) parameters of the IEEE 802.15.6 standard. Based on PHY characterizations in the narrow band, the carrier sense multiple access with collision avoidance (CSMA/CA) and scheduled access protocols are extensively analyzed at the MAC layer. It is concluded that the IEEE 802.15.6 standard can satisfy the throughput requirements of most WBAN applications, achieving a maximum of 680 kbps. However, for those emerging applications that require high-quality audio or video transmission, the standard is not able to meet their constraints. Moreover, delay, energy efficiency and successful packet reception are considered as key performance metrics for comparing the MAC protocols. The CSMA/CA protocol provides the best results for meeting the delay constraints of medical and non-medical WBAN applications, whereas the scheduled access approach performs very well in both energy efficiency and packet reception ratio.
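The throughput ceiling quoted in the abstract comes from protocol overhead eating into the PHY rate. A deliberately simplified saturation-throughput estimate is sketched below; the frame sizes, overhead and gap timing are illustrative assumptions, not values taken from the standard.

```python
def effective_throughput(phy_rate_bps, payload_bytes, overhead_bytes,
                         gap_s=0.0):
    """Payload bits delivered per unit time when each frame carries
    payload + overhead bytes and is followed by a fixed dead time
    (ACK turnaround, interframe spacing, back-off). Illustrative model."""
    frame_s = 8 * (payload_bytes + overhead_bytes) / phy_rate_bps
    return 8 * payload_bytes / (frame_s + gap_s)

phy = 971_400.0   # bps; assumed here as the highest narrowband PHY rate
full = effective_throughput(phy, 255, 9, gap_s=0.0005)
tiny = effective_throughput(phy, 20, 9, gap_s=0.0005)
print(round(full), round(tiny))
```

Large payloads amortize the per-frame overhead, but the effective rate always stays below the PHY rate, which is why even the fastest mode cannot serve high-quality audio/video applications.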
Jimenez, Julie; Gonidec, Estelle; Cacho Rivero, Jesús Andrés; Latrille, Eric; Vedrenne, Fabien; Steyer, Jean-Philippe
2014-03-01
Advanced dynamic anaerobic digestion models, such as ADM1, require both detailed organic matter characterisation and intimate knowledge of the metabolic pathways involved. In the current study, a methodology for municipal sludge characterisation is investigated to describe two key parameters: the biodegradability and the bioaccessibility of organic matter. The methodology is based on coupling sequential chemical extractions with 3D fluorescence spectroscopy. The use of increasingly strong solvents reveals different levels of organic matter accessibility, and the spectroscopy measurement leads to a detailed characterisation of the organic matter. The results obtained from testing 52 municipal sludge samples (primary, secondary, digested and thermally treated) showed a successful correlation with sludge biodegradability and bioaccessibility. The two parameters, traditionally obtained through biochemical methane potential (BMP) lab tests, are now obtained in only 5 days, compared to the 30-60 days usually required. Experimental data obtained from two different laboratory-scale reactors were used to validate the ADM1 model. The proposed approach shows strong application potential for reactor design and advanced control of anaerobic digestion processes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Innovative hyperchaotic encryption algorithm for compressed video
NASA Astrophysics Data System (ADS)
Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang
2002-12-01
It is accepted that stream cryptosystems can achieve good real-time performance and flexibility by encrypting only selected parts of the block data and header information of the compressed video stream. A chaotic random number generator, for example the Logistic map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the Logistic map with a linear congruential algorithm over the field Z(2^32 - 1) to strengthen the security of mono-chaotic cryptography while maintaining the real-time performance and flexibility of chaotic sequence cryptography. It also integrates dissymmetrical public-key cryptography and implements encryption and identity authentication of control parameters at the initialization phase. In accordance with the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In this innovative hyperchaotic cryptography, the value and updating frequency of the control parameters can be changed online to satisfy requirements on network quality, processor capability and security. The innovative hyperchaotic cryptography proves robust security by cryptanalysis, and shows good real-time performance and flexible implementation capability through arithmetic evaluation and testing.
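A toy version of the chaotic-plus-congruential idea, for intuition only: it is NOT cryptographically secure, and it uses modulus 2^32 rather than the paper's Z(2^32 - 1) field.

```python
def keystream(n_bytes, x0=0.3141592653, seed=123456789):
    """Hybrid keystream: a logistic map x -> 4x(1-x) (chaotic regime)
    XOR-combined byte-wise with a 32-bit linear congruential generator.
    Toy sketch only; real designs need far more care, and this paper's
    congruential step works over Z(2^32 - 1), not modulo 2^32 as here."""
    a, c, m = 1664525, 1013904223, 2 ** 32
    x, s = x0, seed
    out = bytearray()
    for _ in range(n_bytes):
        x = 4.0 * x * (1.0 - x)            # chaotic logistic-map step
        s = (a * s + c) % m                # linear congruential step
        out.append((int(x * 256) & 0xFF) ^ (s & 0xFF))
    return bytes(out)

def xor_layer(data, key_bytes):
    """One 'layer' of the selective encryption: XOR only the chosen bytes
    (e.g. headers and key coefficients of the compressed stream)."""
    return bytes(d ^ k for d, k in zip(data, key_bytes))

header = b"example-video-header"
ks = keystream(len(header))
enc = xor_layer(header, ks)
print(xor_layer(enc, ks) == header)   # XOR decryption recovers the plaintext
```

Combining the two generators is what breaks the simple phase-space reconstruction attacks that defeat a lone logistic map, which is the design motivation the abstract describes.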
Analysis and design of a genetic circuit for dynamic metabolic engineering.
Anesiadis, Nikolaos; Kobayashi, Hideki; Cluett, William R; Mahadevan, Radhakrishnan
2013-08-16
Recent advances in synthetic biology have equipped us with new tools for bioprocess optimization at the genetic level. Previously, we have presented an integrated in silico design for the dynamic control of gene expression based on a density-sensing unit and a genetic toggle switch. In the present paper, analysis of a serine-producing Escherichia coli mutant shows that an instantaneous ON-OFF switch leads to a maximum theoretical productivity improvement of 29.6% compared to the mutant. To further the design, global sensitivity analysis is applied here to a mathematical model of serine production in E. coli coupled with a genetic circuit. The model of the quorum sensing and the toggle switch involves 13 parameters of which 3 are identified as having a significant effect on serine concentration. Simulations conducted in this reduced parameter space further identified the optimal ranges for these 3 key parameters to achieve productivity values close to the maximum theoretical values. This analysis can now be used to guide the experimental implementation of a dynamic metabolic engineering strategy and reduce the time required to design the genetic circuit components.
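The genetic toggle switch underlying the design can be sketched with the standard two-repressor ODEs (a Gardner-type model); this is a generic illustration with assumed parameter values, not the paper's full serine-production model.

```python
def simulate_toggle(u0, v0, alpha1=10.0, alpha2=10.0, beta=2.0, gamma=2.0,
                    dt=0.01, steps=5000):
    """Euler integration of the mutual-repression toggle switch:
      du/dt = alpha1 / (1 + v**beta)  - u
      dv/dt = alpha2 / (1 + u**gamma) - v
    Parameter values are illustrative; bistability requires sufficiently
    strong, cooperative repression (beta, gamma > 1)."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha1 / (1 + v ** beta) - u
        dv = alpha2 / (1 + u ** gamma) - v
        u, v = u + du * dt, v + dv * dt
    return u, v

# The same circuit settles into either of two stable states depending on its
# initial condition -- the ON/OFF memory exploited for dynamic control.
u_hi, v_lo = simulate_toggle(5.0, 0.1)
u_lo, v_hi = simulate_toggle(0.1, 5.0)
print(u_hi > v_lo, v_hi > u_lo)
```

In the paper's design, a density-sensing input flips this switch at the right culture density, trading growth for serine production; the sensitivity analysis then asks which of the circuit's parameters actually move the product concentration.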
Martel, D; Guerra, A; Turek, P; Weiss, J; Vileno, B
2016-04-01
In the field of solar fuel cells, the development of efficient photo-converting semiconductors remains a major challenge. A rational analysis of experimental photocatalytic results obtained with materials in colloidal suspension is needed to access the fundamental knowledge required to improve the design and properties of new materials. In this study, a simple electron donor/nano-TiO2 system is considered and examined via spin-scavenging electron paramagnetic resonance as well as a panel of analytical techniques (composition, optical spectroscopy and dynamic light scattering) for selected types of nano-TiO2. Independent variables (pH, electron donor concentration and TiO2 amount) have been varied, and interdependent variables (aggregate size, aggregate surface vs. volume and acid/base group distribution) are discussed. This work shows that reliable understanding involves a thoughtful combination of interdependent parameters, whereas the specific surface area does not seem to be a pertinent parameter. The conclusion emphasizes the difficulty of identifying the key features of the mechanisms governing photocatalytic properties in nano-TiO2. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Viskontas, K.; Rusteika, N.
2016-09-01
The semiconductor saturable absorber mirror (SESAM) is a key component of many passively mode-locked ultrafast laser sources. A particular set of nonlinear parameters is required to achieve self-starting mode-locking or to avoid undesirable Q-switched mode-locking in ultra-short-pulse lasers. In this paper, we introduce a novel all-fiber, wavelength-tunable, picosecond-pulse-duration setup for measuring the nonlinear properties of saturable absorber mirrors around the 1 μm center wavelength. The main advantage of an all-fiber configuration is the simplicity of measuring fiber-integrated or fiber-pigtailed saturable absorbers. A tunable picosecond fiber laser enables investigation of the nonlinear parameters at different wavelengths in the ultrafast regime. To verify the capability of the setup, nonlinear parameters were measured for different SESAMs with low and high modulation depth. In the operating wavelength range 1020-1074 nm, <1% absolute nonlinear reflectivity accuracy was demonstrated. The achieved fluence range was from 100 nJ/cm2 to 2 mJ/cm2, with corresponding intensity from 10 kW/cm2 to 300 MW/cm2.
Performance of convolutional codes on fading channels typical of planetary entry missions
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.; Reale, T. J.
1974-01-01
The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift-keyed modulation and Viterbi maximum-likelihood decoding, and for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes, the bit error probability performance was investigated as a function of Eb/N0, parameterized by the fading channel parameters. For longer constraint length codes, the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effects of simple block interleaving in combatting the memory of the channel are explored, using either the analytic approach or digital computer simulation.
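The channel effect driving these results can be reproduced with an uncoded baseline: Monte-Carlo bit-error rate for BPSK over a flat Rayleigh fade versus a non-fading AWGN channel at the same Eb/N0. The Rayleigh assumption is a generic stand-in, not the paper's turbulence-derived channel model, and no coding is included.

```python
import math
import random

def bpsk_ber(ebn0_db, fading, n_bits=20000, seed=7):
    """Monte-Carlo bit-error rate for BPSK with coherent detection.
    fading=True draws a unit-mean-square Rayleigh amplitude per bit
    (slow flat fading, perfect channel knowledge at the receiver)."""
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2.0 * 10 ** (ebn0_db / 10.0)))  # noise std
    errors = 0
    for _ in range(n_bits):
        symbol = rng.choice((-1.0, 1.0))
        h = 1.0
        if fading:
            h = math.hypot(rng.gauss(0, math.sqrt(0.5)),
                           rng.gauss(0, math.sqrt(0.5)))
        received = h * symbol + rng.gauss(0, sigma)
        if (received > 0) != (symbol > 0):
            errors += 1
    return errors / n_bits

ber_awgn = bpsk_ber(10.0, fading=False)
ber_fade = bpsk_ber(10.0, fading=True)
print(ber_awgn, ber_fade)
```

At 10 dB the fade costs several orders of magnitude in error rate; coding plus interleaving (which breaks up the channel memory the abstract mentions) recovers much of that loss.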
On Using Intensity Interferometry for Feature Identification and Imaging of Remote Objects
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Strekalov, Dmitry V.; Yu, Nan
2013-01-01
We derive an approximation to the intensity covariance function of two scanning pinhole detectors, facing a distant source (e.g., a star) being occluded partially by an absorptive object (e.g., a planet). We focus on using this technique to identify or image an object that is in the line-of-sight between a well-characterized source and the detectors. We derive the observed perturbation to the intensity covariance map due to the object, showing that under some reasonable approximations it is proportional to the real part of the Fourier transform of the source's photon-flux density times the Fourier transform of the object's intensity absorption. We highlight the key parameters impacting its visibility and discuss the requirements for estimating object-related parameters, e.g., its size, velocity or shape. We consider an application of this result to determining the orbit inclination of an exoplanet orbiting a distant star. Finally, motivated by the intrinsically weak nature of the signature, we study its signal-to-noise ratio and determine the impact of system parameters.
A Cost-Effective Approach to Optimizing Microstructure and Magnetic Properties in Ce₁₇Fe₇₈B₆ Alloys.
Tan, Xiaohua; Li, Heyun; Xu, Hui; Han, Ke; Li, Weidan; Zhang, Fang
2017-07-28
Optimizing fabrication parameters for rapid solidification of Re-Fe-B (Re = rare earth) alloys can lead to nanocrystalline products with hard magnetic properties without any heat-treatment. In this work, we enhanced the magnetic properties of Ce₁₇Fe₇₈B₆ ribbons by engineering both the microstructure and the volume fraction of the Ce₂Fe₁₄B phase through optimization of the chamber pressure and the wheel speed used to quench the liquid. We explored the relationship between these two parameters (chamber pressure and wheel speed) and proposed an approach to identifying the experimental conditions most likely to yield a homogeneous microstructure and reproducible magnetic properties. Optimized experimental conditions resulted in a microstructure with homogeneously dispersed Ce₂Fe₁₄B and CeFe₂ nanocrystals. The best magnetic properties were obtained at a chamber pressure of 0.05 MPa and a wheel speed of 15 m·s⁻¹. Without the conventional heat-treatment that is usually required, key magnetic properties were maximized by optimizing the processing parameters during rapid solidification, making this a cost-effective route to magnetic materials.
Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick; Klein, Vladislav
2011-01-01
Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.
Implementation of a numerical holding furnace model in foundry and construction of a reduced model
NASA Astrophysics Data System (ADS)
Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane
2016-09-01
Vacuum holding induction furnaces are used for the manufacturing of turbine blades by the lost-wax foundry process. The control of solidification parameters is a key factor for manufacturing these parts in accordance with geometrical and structural expectations. The definition of a reduced heat transfer model, with experimental identification through an estimation of its parameters, is required here. In a further stage this model will be used to characterize heat exchanges from internal sensors through inverse techniques, in order to optimize the furnace control and its design. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite element code. A detailed model allows the calculation of the internal induction heat source as well as transient radiative transfer inside the furnace. A reduced lumped-body model has been defined to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body have been made using a Levenberg-Marquardt least squares minimization algorithm in Matlab, using two synthetic temperature signals, with a further validation test.
Hybrid parameter identification of a multi-modal underwater soft robot.
Giorgio-Serchi, F; Arienti, A; Corucci, F; Giorelli, M; Laschi, C
2017-02-28
We introduce an octopus-inspired, underwater, soft-bodied robot capable of performing waterborne pulsed-jet propulsion and benthic legged locomotion. Rubber-like materials account for as much as 80% of the vehicle's volume, so structural flexibility is exploited as a key element during both modes of locomotion. The high bodily softness, the unconventional morphology and the non-stationary nature of its propulsion mechanisms require the dynamic characterization of this robot to be dealt with by ad hoc techniques. We perform parameter identification by resorting to a hybrid optimization approach where the characterization of the dual ambulatory strategies of the robot is performed in a segregated fashion: a least squares-based method and a genetic algorithm-based method are employed for the swimming and the crawling phases, respectively. The outcomes bring evidence that compartmentalized parameter identification represents a viable protocol for multi-modal vehicle characterization. However, the use of static thrust recordings as the input signal in the dynamic characterization of shape-changing self-propelled vehicles is responsible for a critical underestimation of the quadratic drag coefficient.
NASA Astrophysics Data System (ADS)
Atmani, O.; Abbès, B.; Abbès, F.; Li, Y. M.; Batkam, S.
2018-05-01
Thermoforming of high-impact polystyrene (HIPS) sheets requires technical knowledge of material behavior, mold type, mold material, and process variables. Accurate thermoforming simulations are needed in the optimization process, and determining the behavior of the material under thermoforming conditions is one of the key requirements for an accurate simulation. The aim of this work is to identify the thermomechanical behavior of HIPS under thermoforming conditions. HIPS behavior is highly dependent on temperature and strain rate. In order to reproduce the behavior of such a material, a thermo-elasto-viscoplastic constitutive law was implemented in the finite element code ABAQUS. The proposed model parameters are considered temperature-dependent, and the strain-dependence effect is introduced using Prony series. Tensile tests were carried out at different temperatures and strain rates, and the material parameters were then identified using an NSGA-II algorithm. To validate the rheological model, experimental blowing tests were carried out on a thermoforming pilot machine. To compare the numerical results with the experimental ones, the thickness distribution and the bubble shape were investigated.
Cis-Lunar Reusable In-Space Transportation Architecture for the Evolvable Mars Campaign
NASA Technical Reports Server (NTRS)
McVay, Eric S.; Jones, Christopher A.; Merrill, Raymond G.
2016-01-01
Human exploration missions to Mars or other destinations in the solar system require large quantities of propellant to enable the transportation of required elements from Earth's sphere of influence to Mars. Current and proposed launch vehicles are incapable of launching all of the requisite mass on a single vehicle; hence, multiple launches and in-space aggregation are required to perform a Mars mission. This study examines the potential of reusable chemical propulsion stages based in cis-lunar space to meet the transportation objectives of the Evolvable Mars Campaign and identifies cis-lunar propellant supply requirements. These stages could be supplied with fuel and oxidizer delivered to cis-lunar space, either launched from Earth or other inner solar system sources such as the Moon or near Earth asteroids. The effects of uncertainty in the model parameters are evaluated through sensitivity analysis of key parameters including the liquid propellant combination, inert mass fraction of the vehicle, change in velocity margin, and change in payload masses. The outcomes of this research include a description of the transportation elements, the architecture that they enable, and an option for a campaign that meets the objectives of the Evolvable Mars Campaign. This provides a more complete understanding of the propellant requirements, as a function of time, that must be delivered to cis-lunar space. Over the selected sensitivity ranges for the current payload and schedule requirements of the 2016 point of departure of the Evolvable Mars Campaign destination systems, the resulting propellant delivery quantities are between 34 and 61 tonnes per year of hydrogen and oxygen propellant, or between 53 and 76 tonnes per year of methane and oxygen propellant, or between 74 and 92 tonnes per year of hypergolic propellant. These estimates can guide future propellant manufacture and/or delivery architectural analysis.
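The propellant-quantity sensitivities above follow from the rocket equation. A minimal sketch, with assumed masses, delta-v and specific impulses (not the study's values), shows why lower-Isp hypergolics require larger deliveries than hydrogen/oxygen for the same mission:

```python
import math

def propellant_mass(dry_mass_t, payload_t, delta_v_ms, isp_s, g0=9.80665):
    """Tsiolkovsky rocket equation: propellant (tonnes) for one burn."""
    mf = dry_mass_t + payload_t                  # final mass after the burn
    ratio = math.exp(delta_v_ms / (isp_s * g0))  # initial/final mass ratio
    return mf * (ratio - 1.0)

# Hypothetical stage: 10 t dry, 40 t payload, 4 km/s total delta-v budget.
# Isp values are typical textbook figures, not the study's assumptions.
lh2_lox = propellant_mass(dry_mass_t=10, payload_t=40, delta_v_ms=4000, isp_s=450)
hypergolic = propellant_mass(dry_mass_t=10, payload_t=40, delta_v_ms=4000, isp_s=320)
# the lower-Isp hypergolic combination needs markedly more propellant
```

The exponential dependence on delta-v and Isp is what makes the inert mass fraction and delta-v margin such sensitive parameters in the study's analysis.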
NASA Astrophysics Data System (ADS)
Nicholl, Matt; Guillochon, James; Berger, Edo
2017-11-01
We use the new Modular Open Source Fitter for Transients to model 38 hydrogen-poor superluminous supernovae (SLSNe). We fit their multicolor light curves with a magnetar spin-down model and present posterior distributions of magnetar and ejecta parameters. The color evolution can be fit with a simple absorbed blackbody. The medians (1σ ranges) for key parameters are spin period 2.4 ms (1.2-4 ms), magnetic field 0.8×10¹⁴ G (0.2-1.8×10¹⁴ G), ejecta mass 4.8 M⊙ (2.2-12.9 M⊙), and kinetic energy 3.9×10⁵¹ erg (1.9-9.8×10⁵¹ erg). This significantly narrows the parameter space compared to our uninformed priors, showing that although the magnetar model is flexible, the parameter space relevant to SLSNe is well constrained by existing data. The requirement that the instantaneous engine power is ~10⁴⁴ erg s⁻¹ at the light-curve peak necessitates either large rotational energy (P < 2 ms), or more commonly that the spin-down and diffusion timescales be well matched. We find no evidence for separate populations of fast- and slow-declining SLSNe, which instead form a continuum in light-curve widths and inferred parameters. Variations in the spectra are explained through differences in spin-down power and photospheric radii at maximum light. We find no significant correlations between model parameters and host galaxy properties. Comparing our posteriors to stellar evolution models, we show that SLSNe require rapidly rotating (the fastest 10%) massive stars (≳20 M⊙), which is consistent with their observed rate. High mass, low metallicity, and likely binary interaction all serve to maintain the rapid rotation essential for magnetar formation. By reproducing the full set of light curves, our posteriors can inform photometric searches for SLSNe in future surveys.
Optimum allocation of test resources and comparison of breeding strategies for hybrid wheat.
Longin, C Friedrich H; Mi, Xuefei; Melchinger, Albrecht E; Reif, Jochen C; Würschum, Tobias
2014-10-01
The use of a breeding strategy combining the evaluation of line per se with testcross performance maximizes annual selection gain for hybrid wheat breeding. Recent experimental studies confirmed a high commercial potential for hybrid wheat requiring the design of optimum breeding strategies. Our objectives were to (1) determine the optimum allocation of the type and number of testers, the number of test locations and the number of doubled haploid lines for different breeding strategies, (2) identify the best breeding strategy and (3) elaborate key parameters for an efficient hybrid wheat breeding program. We performed model calculations using the selection gain for grain yield as target variable to optimize the number of lines, testers and test locations in four different breeding strategies. A breeding strategy (BS2) combining the evaluation of line per se performance and general combining ability (GCA) had a far larger annual selection gain across all considered scenarios than a breeding strategy (BS1) focusing only on GCA. In the combined strategy, the production of testcross seed conducted in parallel with the first yield trial for line per se performance (BS2rapid) resulted in a further increase of the annual selection gain. For the current situation in hybrid wheat, this relative superiority of the strategy BS2rapid amounted to 67 % in annual selection gain compared to BS1. Varying a large number of parameters, we identified the high costs for hybrid seed production and the low variance of GCA in hybrid wheat breeding as key parameters limiting selection gain in BS2rapid.
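The kind of model calculation described, allocating a fixed number of test plots between candidate lines and test locations, can be sketched with textbook selection-gain formulas. The variance components, budget and gain formula below are illustrative assumptions, not the paper's values:

```python
# Toy optimum-allocation calculation: split a fixed plot budget between
# number of lines and number of test locations to maximize expected gain.
# Formula and variance components are illustrative, not the paper's.
from statistics import NormalDist

def selection_intensity(alpha):
    """i = pdf(z)/alpha for truncation selection of the top fraction alpha."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha)
    return nd.pdf(z) / alpha

def expected_gain(n_lines, n_locs, n_selected=10, var_g=1.0, var_e=4.0):
    """Gain ~ i * sqrt(heritability) * sigma_g (standard approximation)."""
    alpha = n_selected / n_lines
    h2 = var_g / (var_g + var_e / n_locs)   # more locations -> higher h2
    return selection_intensity(alpha) * (h2 ** 0.5) * (var_g ** 0.5)

def best_allocation(budget_plots=2000, n_selected=10):
    """Grid search: more locations raise heritability but shrink the
    number of lines, lowering selection intensity."""
    best = None
    for n_locs in range(1, 21):
        n_lines = budget_plots // n_locs
        if n_lines <= n_selected:
            continue
        g = expected_gain(n_lines, n_locs, n_selected)
        if best is None or g > best[0]:
            best = (g, n_lines, n_locs)
    return best
```

The trade-off captured here (test capacity versus selection intensity) is the same one the paper optimizes over testers, locations and doubled haploid lines.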
Danon-Schaffer, Monica N; Mahecha-Botero, Andrés; Grace, John R; Ikonomou, Michael
2013-09-01
Previous research on brominated flame retardants (BFRs), including polybrominated diphenyl ethers (PBDEs), has largely focussed on their concentrations in the environment and their adverse effects on human health. This paper explores their transfer from waste streams to water and soil. A comprehensive mass balance model is developed to track PBDEs, originating from e-waste and non-e-waste solids, leaching from a landfill. Stepwise debromination is assumed to occur in three sub-systems (e-waste, aqueous leachate phase, and non-e-waste solids). Analysis of landfill samples and laboratory results from a solid-liquid contacting chamber are used to estimate model parameters to simulate an urban landfill system, for past and future scenarios. Sensitivity tests to key model parameters were conducted. Lower-brominated BDE congeners require more time to disappear than high-molecular-weight PBDEs, since debromination takes place in a stepwise manner, according to the simplified reaction scheme. Interphase mass transfer causes the decay pattern to be similar in all three sub-systems. The aqueous phase is predicted to be the first sub-system to eliminate PBDEs if their input to the landfill were to be stopped. The non-e-waste solids would be next, followed by the e-waste sub-system. The model shows that mass transfer is not rate-limiting, but the evolution over time depends on the kinetic degradation parameters. Experimental scatter makes model testing difficult. Nevertheless, the model provides qualitative understanding of the influence of key variables. Copyright © 2013 Elsevier B.V. All rights reserved.
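The stepwise debromination assumption can be sketched as a first-order reaction chain. With illustrative rate constants (not the fitted ones) it reproduces the qualitative result that lower-brominated congeners peak later and persist longer than the fully brominated parent:

```python
# Stepwise first-order debromination chain (illustrative rates, not the
# study's): each congener degrades to the next-lower one; the terminal
# species degrades out of the PBDE pool entirely.
def simulate_chain(n_species=5, k=0.05, dt=0.1, t_end=200.0):
    """Euler integration; k is the per-year debromination rate constant."""
    c = [1.0] + [0.0] * (n_species - 1)   # all mass starts fully brominated
    history = [c[:]]
    for _ in range(int(t_end / dt)):
        new = c[:]
        for i in range(n_species):
            loss = k * c[i] * dt
            new[i] -= loss
            if i + 1 < n_species:
                new[i + 1] += loss        # feeds the next-lower congener
        c = new
        history.append(c[:])
    return history
```

Because each lower congener is continuously produced by the one above it, its concentration peaks later in the chain, which is the mechanism behind the slower disappearance of lower BDEs noted above.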
Design for Natural Breast Augmentation: The ICE Principle.
Mallucci, Patrick; Branford, Olivier Alexandre
2016-06-01
The authors' published studies have helped define breast beauty in outlining key parameters that contribute to breast attractiveness. The "ICE" principle puts design into practice. It is a simplified formula for inframammary fold incision planning as part of the process for determining implant selection and placement to reproduce the 45:55 ratio previously described as fundamental to natural breast appearance. The formula is as follows: implant dimensions (I) - capacity of the breast (C) = excess tissue required (E). The aim of this study was to test the accuracy of the ICE principle for producing consistent natural beautiful results in breast augmentation. A prospective analysis of 50 consecutive women undergoing primary breast augmentation by means of an inframammary fold incision with anatomical or round implants was performed. The ICE principle was applied to all cases to determine implant selection, placement, and incision position. Changes in parameters between preoperative and postoperative digital clinical photographs were analyzed. The mean upper pole-to-lower pole ratio changed from 52:48 preoperatively to 45:55 postoperatively (p < 0.0001). Mean nipple angulation was also statistically significantly elevated from 11 degrees to 19 degrees skyward (p ≤ 0.0005). Accuracy of incision placement in the fold was 99.7 percent on the right and 99.6 percent on the left, with a standard error of only 0.2 percent. There was a reduction in variability for all key parameters. The authors have shown using the simple ICE principle for surgical planning in breast augmentation that attractive natural breasts may be achieved consistently and with precision. Therapeutic, IV.
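The ICE formula itself is simple arithmetic; a minimal sketch with hypothetical example values (dimensions in cm, following the stated I - C = E relation):

```python
# The ICE planning relation: implant dimensions (I) minus capacity of
# the breast (C) = excess tissue required (E). Example values are
# hypothetical, for illustration only.
def excess_tissue_required(implant_dimension, breast_capacity):
    """E = I - C: tissue that must be recruited to accommodate the implant."""
    return implant_dimension - breast_capacity

E = excess_tissue_required(13.0, 8.5)   # hypothetical implant and breast
```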
NASA Astrophysics Data System (ADS)
Jackson-Blake, L.
2014-12-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. 
Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.
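The core finding, that denser sampling tightens parameter estimates, can be illustrated outside INCA-P with the standard error of a regression slope fitted to 18 months of daily versus fortnightly synthetic data (a toy linear model, not the catchment model):

```python
# Toy illustration: the same record sampled daily vs. fortnightly.
# The slope's standard error shrinks as more observations constrain it.
import math, random

def slope_and_stderr(xs, ys):
    """Ordinary least squares slope and its standard error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    resid = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return b, math.sqrt(resid / (n - 2) / sxx)

random.seed(1)
days = list(range(540))                        # ~18 months of daily samples
daily = [(t, 0.5 * t + random.gauss(0, 20)) for t in days]
fortnight = daily[::14]                        # same record thinned out

_, se_daily = slope_and_stderr([t for t, _ in daily], [y for _, y in daily])
_, se_fort = slope_and_stderr([t for t, _ in fortnight], [y for _, y in fortnight])
# the daily record constrains the parameter far more tightly
```

The same effect, applied to the many correlated parameters of a process-based model, is what shrank the 95% credible interval from 26 to 6 μg/l in the study.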
21 CFR 1311.30 - Requirements for storing and using a private key for digitally signing orders.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Requirements for storing and using a private key... Digital Certificates for Electronic Orders § 1311.30 Requirements for storing and using a private key for... and private key. (b) The certificate holder must provide FIPS-approved secure storage for the private...
Field spectrometer (S191H) preprocessor tape quality test program design document
NASA Technical Reports Server (NTRS)
Campbell, H. M.
1976-01-01
Program QA191H performs quality assurance tests on field spectrometer data recorded on 9-track magnetic tape. The quality testing involves the comparison of key housekeeping and data parameters with historic and predetermined tolerance limits. Samples of key parameters are processed during the calibration period and the wavelength calibration period, and the results are printed out and recorded on a historical file tape.
Forecast of the general aviation air traffic control environment for the 1980's
NASA Technical Reports Server (NTRS)
Hoffman, W. C.; Hollister, W. M.
1976-01-01
The critical information required for the design of a reliable, low-cost, advanced avionics system which would enhance the safety and utility of general aviation is stipulated. Sufficient data are accumulated upon which industry can base the design of a reasonably priced system having the capability required by general aviation in and beyond the 1980's. The key features of the Air Traffic Control (ATC) system are: a discrete address beacon system, a separation assurance system, area navigation, a microwave landing system, upgraded ATC automation, airport surface traffic control, a wake vortex avoidance system, flight service stations, and aeronautical satellites. The critical parameters that are necessary for component design are identified. The four primary functions of ATC (control, surveillance, navigation, and communication) and their impact on the onboard avionics system design are assessed.
Mascharak, Shamik; Benitez, Patrick L.; Proctor, Amy C.; Madl, Christopher M.; Hu, Kenneth H.; Dewi, Ruby E.; Butte, Manish J.; Heilshorn, Sarah C.
2017-01-01
Native vascular extracellular matrices (vECM) consist of elastic fibers that impart varied topographical properties, yet most in vitro models designed to study the effects of topography on cell behavior are not representative of native architecture. Here, we engineer an electrospun elastin-like protein (ELP) system with independently tunable, vECM-mimetic topography and demonstrate that increasing topographical variation causes loss of endothelial cell-cell junction organization. This loss of VE-cadherin signaling and increased cytoskeletal contractility on more topographically varied ELP substrates in turn promote YAP activation and nuclear translocation, resulting in significantly increased endothelial cell migration and proliferation. Our findings identify YAP as a required signaling factor through which fibrous substrate topography influences cell behavior and highlights topography as a key design parameter for engineered biomaterials. PMID:27889666
Application of target costing in machining
NASA Astrophysics Data System (ADS)
Gopalakrishnan, Bhaskaran; Kokatnur, Ameet; Gupta, Deepak P.
2004-11-01
In today's intensely competitive and highly volatile business environment, consistent development of low cost and high quality products meeting the functionality requirements is a key to a company's survival. Companies continuously strive to reduce the costs while still producing quality products to stay ahead in the competition. Many companies have turned to target costing to achieve this objective. Target costing is a structured approach to determine the cost at which a proposed product, meeting the quality and functionality requirements, must be produced in order to generate the desired profits. It subtracts the desired profit margin from the company's selling price to establish the manufacturing cost of the product. Extensive literature review revealed that companies in automotive, electronic and process industries have reaped the benefits of target costing. However target costing approach has not been applied in the machining industry, but other techniques based on Geometric Programming, Goal Programming, and Lagrange Multiplier have been proposed for application in this industry. These models follow a forward approach, by first selecting a set of machining parameters, and then determining the machining cost. Hence in this study we have developed an algorithm to apply the concepts of target costing, which is a backward approach that selects the machining parameters based on the required machining costs, and is therefore more suitable for practical applications in process improvement and cost reduction. A target costing model was developed for turning operation and was successfully validated using practical data.
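The backward approach described can be sketched for turning: fix the allowable machining cost from price and profit first, then search cutting speed and feed for combinations whose predicted unit cost meets it. The cost model (machining time plus Taylor tool-life tool wear) and all constants are illustrative assumptions, not the paper's data:

```python
# Target costing for turning, backward style: derive the target cost,
# then find machining parameters that satisfy it. Constants are
# illustrative assumptions.
import math

def unit_cost(v, f, D=50e-3, L=0.3, passes=1,
              labor_rate=1.0, tool_cost=5.0, taylor_C=300.0, taylor_n=0.25):
    """Cost per part: machining time plus prorated tool wear.
    v: cutting speed (m/min), f: feed (m/rev), D, L: workpiece dia/length (m)."""
    t_m = (math.pi * D * L * passes) / (v * f)    # machining time, minutes
    tool_life = (taylor_C / v) ** (1 / taylor_n)  # Taylor's law: v * T^n = C
    return labor_rate * t_m + tool_cost * t_m / tool_life

def feasible_parameters(selling_price, profit_margin, other_costs):
    """Backward step: target machining cost = price - profit - other costs,
    then grid-search parameters meeting that target."""
    target = selling_price - profit_margin - other_costs
    found = [(v, f)
             for v in range(60, 301, 10)                  # speed, m/min
             for f in (0.1e-3, 0.2e-3, 0.3e-3, 0.4e-3)    # feed, m/rev
             if unit_cost(v, f) <= target]
    return target, found
```

The contrast with the forward (Geometric Programming / Lagrange multiplier) models is that the cost constraint comes first and the parameter set is whatever satisfies it.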
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
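A toy version of this surrogate-guided calibration, with a stand-in "simulator" and a nearest-neighbour surrogate in place of the cancer model and neural network, shows the mechanism of evaluating only promising parameter combinations:

```python
# Active-learning-style calibration sketch: a cheap surrogate chooses
# which combinations the expensive simulator actually evaluates.
# Simulator, grid and surrogate are illustrative stand-ins.
import random

def simulator(p, q):
    """Stand-in for an expensive simulation run: calibration error score."""
    return (p - 0.62) ** 2 + (q - 0.31) ** 2

grid = [(i / 25, j / 25) for i in range(26) for j in range(26)]  # 676 combos

def calibrate(budget=150, n_seed=20, seed=0):
    rng = random.Random(seed)
    evaluated = {}
    # surrogate prediction per unevaluated combo:
    # (squared distance to nearest evaluated combo, that combo's score)
    pred = {c: (float("inf"), float("inf")) for c in grid}

    def evaluate(combo):
        evaluated[combo] = simulator(*combo)
        pred.pop(combo, None)
        for c in pred:                       # update nearest-neighbour preds
            d2 = (c[0] - combo[0]) ** 2 + (c[1] - combo[1]) ** 2
            if d2 < pred[c][0]:
                pred[c] = (d2, evaluated[combo])

    for combo in rng.sample(grid, n_seed):   # random initial design
        evaluate(combo)
    while len(evaluated) < budget:           # query most promising combo next
        evaluate(min(pred, key=lambda c: pred[c][1]))
    return evaluated
```

Here only a fraction of the grid is ever simulated, yet the low-error region is located, which mirrors the 5620-of-378,000 result, with the caveat that a real model needs a far richer surrogate than nearest-neighbour lookup.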
Using Active Learning for Speeding up Calibration in Simulation Models
Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2015-01-01
Background Most cancer simulation models include unobservable parameters that determine the disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality and their values are typically estimated via lengthy calibration procedure, which involves evaluating large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs, therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190
NASA Astrophysics Data System (ADS)
Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan
2014-12-01
Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in clinical diagnosis for various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements under the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
The Power of Proofs-of-Possession: Securing Multiparty Signatures against Rogue-Key Attacks
NASA Astrophysics Data System (ADS)
Ristenpart, Thomas; Yilek, Scott
Multiparty signature protocols need protection against rogue-key attacks, made possible whenever an adversary can choose its public key(s) arbitrarily. For many schemes, provable security has only been established under the knowledge of secret key (KOSK) assumption where the adversary is required to reveal the secret keys it utilizes. In practice, certifying authorities rarely require the strong proofs of knowledge of secret keys required to substantiate the KOSK assumption. Instead, proofs of possession (POPs) are required and can be as simple as just a signature over the certificate request message. We propose a general registered key model, within which we can model both the KOSK assumption and in-use POP protocols. We show that simple POP protocols yield provable security of Boldyreva's multisignature scheme [11], the LOSSW multisignature scheme [28], and a 2-user ring signature scheme due to Bender, Katz, and Morselli [10]. Our results are the first to provide formal evidence that POPs can stop rogue-key attacks.
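The rogue-key attack the paper defends against can be demonstrated in a toy Schnorr-style group: with naive key aggregation, an adversary who sees an honest public key can choose its own key so that it alone controls the aggregate. A Schnorr proof of knowledge of the secret key, one possible POP, is something the honest party can produce but the adversary cannot for the rogue key. Group parameters are tiny and purely illustrative, and the fixed nonce below would be random in any real protocol:

```python
# Toy rogue-key attack on naive public-key aggregation, plus a
# Schnorr-style proof of possession. Tiny group, illustration only.
import hashlib

p, q, g = 23, 11, 2          # g generates a prime-order-11 subgroup mod 23

def keygen(x):
    return x % q, pow(g, x, p)

def pok(x, pk, k=7):
    """Schnorr proof of knowledge of the secret key (a possible 'POP');
    k is a fixed nonce here for determinism, random in practice."""
    r = pow(g, k, p)
    c = int(hashlib.sha256(f"{pk}:{r}".encode()).hexdigest(), 16) % q
    return r, (k + c * x) % q

def pok_verify(pk, proof):
    r, s = proof
    c = int(hashlib.sha256(f"{pk}:{r}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (r * pow(pk, c, p)) % p

x1, pk1 = keygen(5)          # honest signer
# Rogue key: pk2 = g^t * pk1^(-1), so the naive aggregate pk1*pk2 = g^t,
# whose "secret key" t the adversary knows -- without knowing log(pk2).
t = 3
pk2 = (pow(g, t, p) * pow(pk1, -1, p)) % p
aggregate = (pk1 * pk2) % p
assert aggregate == pow(g, t, p)   # adversary fully controls the aggregate
assert pok_verify(pk1, pok(x1, pk1))  # honest key passes the POP check
```

Requiring a valid POP at certification time blocks the attack precisely because the adversary, not knowing the discrete log of pk2, cannot produce such a proof for it.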
Understanding identifiability as a crucial step in uncertainty assessment
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.
2016-12-01
The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
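A minimal example of the non-uniqueness described: in the model y = a·b·x only the product a·b is identifiable, so two initialisations of the same fit return wildly different parameter pairs with identical predictions (a toy sketch, not any specific environmental model):

```python
# Toy non-identifiability: only the product a*b is constrained by data
# generated from y = 6*x, so different starting points for `a` converge
# to different (a, b) pairs with the same fit.
def fit(xs, ys, a0, steps=20000, lr=1e-3):
    """Naive gradient descent on (a, b) for the model y = a*b*x."""
    a, b = a0, 1.0
    for _ in range(steps):
        ga = sum(2 * (a * b * x - y) * b * x for x, y in zip(xs, ys))
        gb = sum(2 * (a * b * x - y) * a * x for x, y in zip(xs, ys))
        a -= lr * ga
        b -= lr * gb
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [6.0 * x for x in xs]        # true product a*b = 6, individually unknown

a1, b1 = fit(xs, ys, a0=1.0)
a2, b2 = fit(xs, ys, a0=10.0)
# both runs fit the data (a*b near 6) yet disagree on the individual values
```

This is exactly the symptom listed above: the optimiser returns different parameter values for different initialisations even though predictions, and hence the objective function, are identical.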
Thornley, John H. M.
2011-01-01
Background and Aims Plant growth and respiration still have unresolved issues, examined here using a model. The aims of this work are: to compare the model's predictions with McCree's observation-based respiration equation, which led to the ‘growth respiration/maintenance respiration paradigm’ (GMRP), as required to give the model credibility; to clarify the nature of maintenance respiration (MR) using a model that does not represent MR explicitly; and to examine algebraic and numerical predictions for the respiration:photosynthesis ratio. Methods A two-state-variable growth model is constructed, with structure and substrate, applicable on plant to ecosystem scales. Four processes are represented: photosynthesis; growth with growth respiration (GR); senescence giving a flux towards litter; and recycling of some of this flux. There are four significant parameters: growth efficiency, rate constants for substrate utilization and structure senescence, and the fraction of structure returned to the substrate pool. Key Results The model can simulate McCree's data on respiration, providing an alternative interpretation to the GMRP. The model's parameters are related to the parameters used in this paradigm. MR is defined and calculated in terms of the model's parameters in two ways: first, during exponential growth at zero growth rate; and secondly, at equilibrium. The two approaches concur. The equilibrium respiration:photosynthesis ratio has a value of 0.4, depending only on growth efficiency and recycling fraction. Conclusions McCree's equation is an approximation that the model can describe; it is mistaken to interpret his second coefficient as a maintenance requirement. An MR rate is defined and extracted algebraically from the model. MR as a specific process is not required and may be replaced with an approach from which an MR rate emerges.
The model suggests that the respiration:photosynthesis ratio is conservative because it depends on only two parameters, whose values are likely to be similar across ecosystems. PMID:21948663
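The abstract does not reproduce the model's equations. Under one plausible reconstruction of the equilibrium balances (an assumption for illustration, not taken from the paper: substrate balance P + f·S = U, structure balance S = Y·U, growth respiration R = (1 − Y)·U), the equilibrium ratio depends only on growth efficiency Y and recycling fraction f:

```python
# Hypothetical reconstruction (not taken from the paper): eliminating the
# utilization flux U from the assumed balances gives
#     R / P = (1 - Y) / (1 - f * Y)
def respiration_photosynthesis_ratio(Y, f):
    """Equilibrium R:P ratio for growth efficiency Y and recycling fraction f."""
    return (1.0 - Y) / (1.0 - f * Y)

# Illustrative values: Y = 0.75, f = 0.5 reproduce the quoted ratio of 0.4.
print(respiration_photosynthesis_ratio(0.75, 0.5))  # 0.4
```

Under this reading, the ratio is conservative precisely because plausible ranges of Y and f are narrow across ecosystems.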
Channel-parameter estimation for satellite-to-submarine continuous-variable quantum key distribution
NASA Astrophysics Data System (ADS)
Guo, Ying; Xie, Cailang; Huang, Peng; Li, Jiawei; Zhang, Ling; Huang, Duan; Zeng, Guihua
2018-05-01
This paper deals with a channel-parameter estimation for continuous-variable quantum key distribution (CV-QKD) over a satellite-to-submarine link. In particular, we focus on the channel transmittances and the excess noise which are affected by atmospheric turbulence, surface roughness, zenith angle of the satellite, wind speed, submarine depth, etc. The estimation method is based on proposed algorithms and is applied to low-Earth orbits using the Monte Carlo approach. For light at 550 nm with a repetition frequency of 1 MHz, the effects of the estimated parameters on the performance of the CV-QKD system are assessed by a simulation by comparing the secret key bit rate in the daytime and at night. Our results show the feasibility of satellite-to-submarine CV-QKD, providing an unconditionally secure approach to achieve global networks for underwater communications.
Systems engineering implementation in the preliminary design phase of the Giant Magellan Telescope
NASA Astrophysics Data System (ADS)
Maiten, J.; Johns, M.; Trancho, G.; Sawyer, D.; Mady, P.
2012-09-01
Like many telescope projects today, the 24.5-meter Giant Magellan Telescope (GMT) is truly a complex system. The primary and secondary mirrors of the GMT are segmented and actuated to support two operating modes: natural seeing and adaptive optics. GMT is a general-purpose telescope supporting multiple science instruments operated in those modes. GMT is a large, diverse collaboration, and its development includes geographically distributed teams. Implementing good systems engineering processes for managing the development of systems like GMT is therefore imperative. Managing the flow-down of requirements from the science requirements to the component-level requirements is an inherently difficult task in itself. The interfaces must also be negotiated so that the interactions between subsystems and assemblies are well defined and controlled. This paper will provide an overview of the systems engineering processes and tools implemented for the GMT project during the preliminary design phase, including requirements management, documentation and configuration control, interface development, and technical risk management. Because of the complexity of the GMT system and the distributed team, using web-accessible tools for collaboration is vital. To accomplish this, GMTO has selected three tools: Cognition Cockpit, Xerox Docushare, and SolidWorks Enterprise Product Data Management (EPDM). Key to this is the use of Cockpit for managing and documenting the product tree, architecture, error budget, requirements, interfaces, and risks. Additionally, drawing management is accomplished using an EPDM vault. Docushare, a documentation and configuration management tool, is used to manage the workflow of documents and drawings for the GMT project. These tools electronically facilitate collaboration in real time, enabling the GMT team to track, trace and report on key project metrics and design parameters.
NASA Astrophysics Data System (ADS)
Malaguti, G.; Pareschi, G.; Ferrando, P.; Caroli, E.; Di Cocco, G.; Foschini, L.; Basso, S.; Del Sordo, S.; Fiore, F.; Bonati, A.; Lesci, G.; Poulsen, J. M.; Monzani, F.; Stevoli, A.; Negri, B.
2005-08-01
The 10-100 keV region of the electromagnetic spectrum contains the potential for a dramatic improvement in our understanding of a number of key problems in high energy astrophysics. A deep inspection of the universe in this band is, on the other hand, still lacking because of the demanding sensitivity (a fraction of a μCrab in the 20-40 keV band for a 1 Ms integration time) and imaging (≈ 15" angular resolution) requirements. The mission ideas currently being proposed are based on long focal length, grazing incidence, multi-layer optics, coupled with focal plane detectors with spatial resolution of a few hundred μm. The required large focal lengths, ranging between 8 and 50 m, can be realized by means of extendable optical benches (as foreseen, e.g., for the HEXITSAT, NEXT and NuSTAR missions) or formation-flight scenarios (e.g. Simbol-X and XEUS). While the final telescope design will require a detailed trade-off analysis between all the relevant parameters (focal length, plate scale value, angular resolution, field of view, detector size, and sensitivity degradation due to detector dead area and telescope vignetting), extreme attention must be dedicated to background minimization. In this respect, key issues are represented by the passive baffling system, which in the case of large focal lengths requires particular design assessments, and by the active/passive shielding geometries and materials. In this work, the results of a study of the expected background for a hard X-ray telescope are presented, and their implications for the required sensitivity, together with possible implementation design concepts for active and passive shielding in the framework of future satellite missions, are discussed.
Parameter as a Switch Between Dynamical States of a Network in Population Decoding.
Yu, Jiali; Mao, Hua; Yi, Zhang
2017-04-01
Population coding is a method to represent stimuli using the collective activities of a number of neurons. Nevertheless, it is difficult to extract information from these population codes with the noise inherent in neuronal responses. Moreover, it is a challenge to identify the right parameter of the decoding model, which plays a key role for convergence. To address the problem, a population decoding model is proposed for parameter selection. Our method successfully identified the key conditions for a nonzero continuous attractor. Both the theoretical analysis and the application studies demonstrate the correctness and effectiveness of this strategy.
Controlling Ethylene for Extended Preservation of Fresh Fruits and Vegetables
2008-12-01
into a process simulation to determine the effects of key design parameters on the overall performance of the system. Integrating process simulation... [table residue: ethylene production/sensitivity ratings by commodity — Asian pears: High/High, decay; avocados: High/High, decay; bananas: Moderate/High, decay; cantaloupe: High/Moderate, decay; cherimoya: Very High/High, ...] ...ozonolysis. Process simulation was subsequently used to understand the effect of key system parameters on EEU performance. Using this modeling work
Ba, Kamarel; Thiaw, Modou; Lazar, Najih; Sarr, Alassane; Brochier, Timothée; Ndiaye, Ismaïla; Faye, Alioune; Sadio, Oumar; Panfili, Jacques; Thiaw, Omar Thiom; Brehmer, Patrice
2016-01-01
The stock of the Senegalese flat sardinella, Sardinella maderensis, is highly exploited in Senegal, West Africa. Its growth and reproduction parameters are key biological indicators for improving fisheries management. This study reviewed these parameters using landing data from small-scale fisheries in Senegal and literature information dating back more than 25 years. Age was estimated using length-frequency data to calculate growth parameters and assess the growth performance index. With global climate change there has been an increase in the average sea surface temperature along the Senegalese coast, but the length-weight parameters, sex ratio, size at first sexual maturity, period of reproduction and condition factor of S. maderensis have not changed significantly. These parameters of S. maderensis have hardly changed, despite high exploitation and fluctuations in the environmental conditions that affect the early development phases of small pelagic fish in West Africa. This lack of plasticity of the species with regard to the biological parameters studied should be considered when drawing up fishery management plans.
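Standard fisheries relationships of the kind reviewed above can be sketched as follows; all numeric values are illustrative, not the estimates obtained for S. maderensis:

```python
import math

# Common fisheries quantities: the von Bertalanffy growth function,
# Pauly's growth performance index, and Fulton's condition factor.
# All parameter values below are illustrative.
def von_bertalanffy(t, L_inf, K, t0):
    """Length at age t: L(t) = L_inf * (1 - exp(-K * (t - t0)))."""
    return L_inf * (1.0 - math.exp(-K * (t - t0)))

def growth_performance_index(L_inf, K):
    """Pauly's phi-prime: phi' = log10(K) + 2 * log10(L_inf)."""
    return math.log10(K) + 2.0 * math.log10(L_inf)

def condition_factor(weight_g, length_cm):
    """Fulton's condition factor, 100 * W / L^3."""
    return 100.0 * weight_g / length_cm ** 3

phi = growth_performance_index(L_inf=30.0, K=0.6)
print(round(phi, 3))  # ~2.732 for these illustrative parameters
```

Comparing phi-prime across studies is the usual way to check whether growth estimates separated by decades, as here, are mutually consistent.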
Challenges of model transferability to data-scarce regions (Invited)
NASA Astrophysics Data System (ADS)
Samaniego, L. E.
2013-12-01
Developing the ability to globally predict the movement of water on the land surface at spatial scales from 1 to 5 km constitutes one of the grand challenges in land surface modelling. Coping with this grand challenge implies that land surface models (LSMs) should be able to make reliable predictions across locations and/or scales other than those used for parameter estimation. In addition, data scarcity and quality impose further difficulties in attaining reliable predictions of water and energy fluxes at the scales of interest. Current computational limitations also severely restrict exhaustive investigation of the parameter space of LSMs over large domains (e.g. greater than half a million square kilometers). Addressing these challenges requires holistic approaches that integrate the best techniques available for parameter estimation, field measurements and remotely sensed data at their native resolutions. An attempt to systematically address these issues is the multiscale parameter regionalisation (MPR) technique, which links high-resolution land surface characteristics with effective model parameters. This technique requires a number of pedo-transfer functions and far fewer global parameters (i.e. coefficients) to be inferred by calibration in gauged basins. The key advantage of this technique is the quasi-scale independence of the global parameters, which makes it possible to estimate global parameters at coarser spatial resolutions and then transfer them to (ungauged) areas and scales of interest. In this study we show the ability of this technique to reproduce the observed water fluxes and states over a wide range of climate and land surface conditions, ranging from humid to semiarid and from sparsely to densely forested regions. Results on the transferability of global model parameters in space (from humid to semi-arid basins) and across scales (from coarser to finer) clearly indicate the robustness of this technique. Simulations with coarse data sets (e.g. E-OBS forcing at 25x25 km2, FAO soil map 1:5000000) using parameters obtained with high-resolution information (REGNIE forcing at 1x1 km2, BUEK soil map 1:1000000) in different climatic regions indicate the potential of MPR for prediction in data-scarce regions. In this presentation, we will also discuss how the transferability of global model parameters across scales and locations helps to identify deficiencies in model structure and regionalization functions.
NASA Technical Reports Server (NTRS)
Lehtonen, Kenneth
1994-01-01
The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) International Solar-Terrestrial Physics (ISTP) Program is committed to the development of a comprehensive, multi-mission ground data system which will support a variety of national and international scientific missions in an effort to study the flow of energy from the sun through the Earth-space environment, known as the geospace. A major component of the ISTP ground data system is an ISTP-dedicated Central Data Handling Facility (CDHF). Acquisition, development, and operation of the ISTP CDHF were delegated by the ISTP Project Office within the Flight Projects Directorate to the Information Processing Division (IPD) within the Mission Operations and Data Systems Directorate (MO&DSD). The ISTP CDHF supports the receipt, storage, and electronic access of the full complement of ISTP Level-zero science data; serves as the linchpin for the centralized processing and long-term storage of all key parameters generated either by the ISTP CDHF itself or received from external, ISTP Program approved sources; and provides the required networking and 'science-friendly' interfaces for the ISTP investigators. Once connected to the ISTP CDHF, investigators can browse the online catalog of key parameters from their remote processing facilities for the immediate electronic receipt of selected key parameters using the NASA Science Internet (NSI), managed by NASA's Ames Research Center. The purpose of this paper is twofold: (1) to describe how the ISTP CDHF was successfully implemented and operated to support initially the Japanese Geomagnetic Tail (GEOTAIL) mission and correlative science investigations, and (2) to describe how the ISTP CDHF has been enhanced to support ongoing as well as future ISTP missions.
Emphasis will be placed on the project management approaches that proved highly effective in delivering an operational ISTP CDHF to the Project on schedule and within budget. Examples to be discussed include: the development of superior teams; the use of Defect Causal Analysis (DCA) concepts to improve the software development process in a pilot Total Quality Management (TQM) initiative; and the implementation of a robust architecture that will be able to support the anticipated growth in the ISTP Program science requirements with only incremental upgrades to the baseline system. Further examples include the use of automated data management software and the incorporation of Government and/or industry standards, whenever possible, into the hardware and software development life-cycle. Finally, the paper will also report on several new technologies (for example, the installation of a Fiber Data Distribution Interface network) that were successfully employed.
Textile Technologies and Tissue Engineering: A Path Towards Organ Weaving
Akbari, Mohsen; Tamayol, Ali; Bagherifard, Sara; Serex, Ludovic; Mostafalu, Pooria; Faramarzi, Negar; Mohammadi, Mohammad Hossein
2016-01-01
Textile technologies have recently attracted great attention as potential biofabrication tools for engineering tissue constructs. Using current textile technologies, fibrous structures can be designed and engineered to attain the properties demanded by different tissue engineering applications. Several key parameters, such as the physiochemical characteristics of the fibers, pore size, and the mechanical properties of the fabrics, play an important role in the effective use of textile technologies in tissue engineering. This review summarizes current advances in the manufacturing of biofunctional fibers. Different textile methods such as knitting, weaving, and braiding are discussed and their current applications in tissue engineering are highlighted. PMID:26924450
Operations analysis (study 2.1): Shuttle upper stage software requirements
NASA Technical Reports Server (NTRS)
Wolfe, R. R.
1974-01-01
An investigation of software costs related to space shuttle upper stage operations, with emphasis on the additional costs attributable to space servicing, was conducted. The questions and problem areas include the following: (1) the key parameters involved in software costs; (2) historical data for extrapolation of future costs; (3) the elements of the basic software development effort that are applicable to servicing functions; (4) the effect of multiple servicing on the complexity of the operation; and (5) whether recurring software costs are significant. The results address these questions and provide a foundation for estimating software costs based on the costs of similar programs and a series of empirical factors.
180 MW/180 KW pulse modulator for S-band klystron of LUE-200 linac of IREN installation of JINR
NASA Astrophysics Data System (ADS)
Su, Kim Dong; Sumbaev, A. P.; Shvetsov, V. N.
2014-09-01
A proposal is formulated for the development of a pulse modulator with 180 MW pulse power and 180 kW average power for the pulsed S-band klystrons of the LUE-200 linac of the IREN installation at the Laboratory of Neutron Physics (FLNP) at JINR. The main requirements, key parameters and component base of the modulator are presented. A variant of the basic scheme is considered, based on a 14- (or 11-) stage, two-parallel PFN with a thyratron switch (TGI2-10k/50) and six parallel high-voltage power supplies (CCPS).
Studies on possible propagation of microbial contamination in planetary clouds
NASA Technical Reports Server (NTRS)
Dimmick, R. L.; Chatigny, M. A.
1973-01-01
Current U.S. planetary quarantine standards, based on international agreements, require consideration of the probability of contamination (Pc) of the planets Venus, Jupiter, Saturn, etc. One of the key parameters in estimating the Pc of these planets is the probability of growth (Pg) of terrestrial microorganisms on or near them. For example, Jupiter and Saturn appear to have atmospheres in which some microbial species could metabolize and propagate. This study investigates the likelihood of metabolism and propagation of microbes suspended in dynamic atmospheres. It is directed toward providing the experimental information needed to aid in rational estimation of Pg for these outer planets.
Optimizing Oxygenation in the Mechanically Ventilated Patient: Nursing Practice Implications.
Barton, Glenn; Vanderspank-Wright, Brandi; Shea, Jacqueline
2016-12-01
Critical care nurses constitute front-line care provision for patients in the intensive care unit (ICU). Hypoxemic respiratory compromise/failure is a primary reason that patients require ICU admission and mechanical ventilation. Critical care nurses must possess advanced knowledge, skill, and judgment when caring for these patients to ensure that interventions aimed at optimizing oxygenation are both effective and safe. This article discusses fundamental aspects of respiratory physiology and clinical indices used to describe oxygenation status. Key nursing interventions including patient assessment, positioning, pharmacology, and managing hemodynamic parameters are discussed, emphasizing their effects toward mitigating ventilation-perfusion mismatch and optimizing oxygenation. Copyright © 2016 Elsevier Inc. All rights reserved.
Alternative Line Coding Scheme with Fixed Dimming for Visible Light Communication
NASA Astrophysics Data System (ADS)
Niaz, M. T.; Imdad, F.; Kim, H. S.
2017-01-01
An alternative line coding scheme called fixed-dimming on/off keying (FD-OOK) is proposed for visible-light communication (VLC). FD-OOK reduces the flickering caused by a VLC transmitter and can maintain a 50% dimming level. A simple encoder and decoder are proposed that generate codes in which the number of bits representing one equals the number of bits representing zero. By keeping the numbers of ones and zeros equal, the change in the brightness of the lighting is minimized and kept constant at 50%, thereby reducing flickering in VLC. The performance of FD-OOK is analysed in terms of two parameters: spectral efficiency and power requirement.
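The abstract does not specify the FD-OOK code construction; as an illustrative stand-in, a Manchester-like balanced mapping shows how forcing the number of ones to equal the number of zeros pins the dimming level at 50% regardless of the data, at the cost of halved spectral efficiency:

```python
# Illustrative balanced OOK encoder (not the paper's actual code table):
# each data bit maps to a two-chip balanced pair, 1 -> (1, 0), 0 -> (0, 1),
# so every codeword is exactly half ones -> constant 50% brightness.
def fd_ook_encode(bits):
    out = []
    for b in bits:
        out.extend((1, 0) if b else (0, 1))
    return out

def fd_ook_decode(chips):
    # The first chip of each pair carries the data bit.
    return [chips[i] for i in range(0, len(chips), 2)]

data = [1, 0, 1, 1, 0]
chips = fd_ook_encode(data)
assert sum(chips) == len(chips) // 2   # exactly 50% ones, independent of data
assert fd_ook_decode(chips) == data
print(chips)  # [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
```

The two-chips-per-bit mapping makes the spectral-efficiency/flicker trade-off mentioned in the abstract concrete.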
Bruzaud, Jérôme; Tarrade, Jeanne; Celia, Elena; Darmanin, Thierry; Taffin de Givenchy, Elisabeth; Guittard, Frédéric; Herry, Jean-Marie; Guilbaud, Morgan; Bellon-Fontaine, Marie-Noëlle
2017-04-01
Reducing bacterial adhesion on substrates is fundamental for various industries. In this work, new superhydrophobic surfaces are created by electrodeposition of hydrophobic polymers (PEDOT-F4 or PEDOT-H8) on stainless steel with controlled topographical features, especially at the nano-scale. Results show that anti-bioadhesive and anti-biofilm properties require control of the surface topographical features, associated with low adhesion of water onto the surface (Cassie-Baxter state) and limited crevice features at the scale of bacterial cells (nano-scale structures). Copyright © 2016. Published by Elsevier B.V.
Lu, Y; Zhang, M
2016-08-20
Objective: To study the applicability, most frequently used content, and feasibility of the standard GBZ 1-2010, and the issues that need to be solved, in order to provide technical evidence for the revision of GBZ 1. Methods: Data were collected from the literature and by questionnaire from June 2013 to June 2015. Two surveys were carried out, using questionnaires and specific interviews. Questionnaires were administered as paper copies by mail, as electronic copies by e-mail, and through an online survey; 111 questionnaires were collected. Results: In total, the applicability survey (the first survey) received 156 suggestions covering 76 items from 23 facilities, and 13 key technical issues were identified as priorities to be solved. In the application survey (the second survey), the three leading uses of GBZ 1-2010 were occupational-hazard evaluation of construction projects (82.0%), lecturing/training (65.8%), and occupational-hazard monitoring (64.9%). The most frequently used contents of GBZ 1-2010 were the sixth part, "basic hygienic requirements for the workplace" (90.1%); the fifth part, "site selection, overall layout and workshop design" (87.4%); and the seventh part, "basic hygienic requirements for welfare rooms" (85.6%). In the feasibility results, the scores of the fourth part ("general rules"), the fifth part ("site selection, overall layout and workshop design"), the sixth part ("basic hygienic requirements for the workplace"), the seventh part ("basic hygienic requirements for welfare rooms"), the eighth part ("emergency rescue"), annex A ("instructions for correct use") and annex B ("buffer zone standards for industrial enterprises") were 2.6, 3.1, 3.5, 3.8, 3.2, 3.3 and 2.6, respectively.
Among the 111 questionnaires, the parts most in need of modification were the fifth part, "site selection, overall layout and workshop design" (51.4%), and the sixth part, "basic hygienic requirements for the workplace" (51.4%). Regarding the key technical issues of GBZ 1-2010 to be modified, the contents most in need of addition were occupational prevention and control requirements for biological factors (51.4%), technical parameters for dust in the workplace (48.7%), technical parameters for hazardous agents in the workplace (46.9%), quality and quantity requirements for fresh air (46.0%), conditions for setting up emergency rescue stations (46.0%), hygienic design requirements for joint workshops and their evidence base (45.1%), and requirements for the equipment and qualification of medical emergency rescue personnel (45.1%). Conclusion: GBZ 1-2010 is feasible and practical, and is used mainly by occupational health technical service organizations for occupational-hazard evaluation of construction projects, lecturing/training, occupational-hazard monitoring, etc. GBZ 1 plays a directive role in government decision-making, control of construction projects from the outset, training and capacity building of occupational health professionals, and the prevention and treatment of occupational diseases in enterprises; its implementation needs to be strengthened. On the basis of the key technical issues identified above, international cooperation and exchange should be strengthened so that the standard is adapted to the development of the modern enterprise system.
NASA Technical Reports Server (NTRS)
Mueller, Carl H.; VanKeuls, Frederick W.; Romanofsky, Robert R.; Alterovitz, Samuel A.; Miranda, Felix A.
2003-01-01
One of the keys to successfully incorporating ferroelectric films into Ku-band (12 to 18 GHz) phase shifters is to establish the composition, microstructure, and thickness required to meet the tuning needs, and to tailor the film properties accordingly. Optimal performance is obtained when the film composition and device design are such that the device performance is limited by odd-mode dielectric losses, and these losses are minimized as much as possible while still maintaining adequate tunability. The parameters required to maintain device performance will vary slightly depending on composition, but we can conclude that the best tuning-to-loss figures of merit (K-factor) are obtained when there is minimal variation between the in-plane and out-of-plane lattice parameters, and the full-width half-maximum values of the BSTO (002) peaks are less than approximately 0.04 deg. We have observed that for phase shifters in which the ferroelectric crystalline quality and thickness are almost identical, higher losses occur in films with higher Ba:Sr ratios. The best performance was observed in phase shifters with Ba:Sr = 30:70. The superiority of this composition was attributed to several interacting factors: the Ba:Sr ratio was such that the Curie temperature (180 K) was far removed from room temperature, the crystalline quality of the film was excellent, and there was virtually no difference between the in-plane and out-of-plane lattice parameters of the film.
Parameters of Technological Growth
ERIC Educational Resources Information Center
Starr, Chauncey; Rudman, Richard
1973-01-01
Examines the factors involved in technological growth and identifies the key parameters as societal resources and societal expectations. Concludes that quality of life can only be maintained by reducing population growth, since this parameter is the product of material levels, overcrowding, food, and pollution. (JR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oubeidillah, Abdoul A; Kao, Shih-Chieh; Ashfaq, Moetasim
2014-01-01
To extend geographical coverage, refine spatial resolution, and improve modeling efficiency, a computation- and data-intensive effort was conducted to organize a comprehensive hydrologic dataset with post-calibrated model parameters for hydro-climate impact assessment. Several key inputs for hydrologic simulation, including meteorologic forcings, soil, land class, vegetation, and elevation, were collected from multiple best-available data sources and organized for 2107 hydrologic subbasins (8-digit hydrologic units, HUC8s) in the conterminous United States at a refined 1/24° (~4 km) spatial resolution. Using high-performance computing for intensive model calibration, a high-resolution parameter dataset was prepared for the macro-scale Variable Infiltration Capacity (VIC) hydrologic model. The VIC simulation was driven by DAYMET daily meteorological forcing and was calibrated against USGS WaterWatch monthly runoff observations for each HUC8. The results showed that this new parameter dataset may help reasonably simulate runoff at most US HUC8 subbasins. Based on this exhaustive calibration effort, it is now possible to accurately estimate the resources required for further model improvement across the entire conterminous United States. We anticipate that through this hydrologic parameter dataset, the repeated effort of fundamental data processing can be lessened, so that research efforts can emphasize the more challenging task of assessing climate change impacts. The pre-organized model parameter dataset will be provided to interested parties to support further hydro-climate impact assessment.
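The abstract does not state the calibration objective used against the monthly runoff observations; the Nash-Sutcliffe efficiency is a common choice for such hydrologic calibration and is sketched here as an illustration, with made-up runoff values:

```python
import numpy as np

# Nash-Sutcliffe efficiency (NSE): a standard skill score for hydrologic
# calibration.  1 is a perfect fit; values <= 0 mean the simulation is no
# better than predicting the observed mean.  Data below are illustrative.
def nse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([10.0, 20.0, 15.0, 30.0])       # observed monthly runoff
print(nse(obs, obs))                           # 1.0 for a perfect simulation
print(round(nse(obs - 2.0, obs), 3))           # a biased simulation scores lower
```

A calibration loop would adjust the VIC-style soil and routing parameters of each subbasin to maximize a score like this one.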
Edelman, Alison B; Cherala, Ganesh; Munar, Myrna Y.; McInnis, Martha; Stanczyk, Frank Z.; Jensen, Jeffrey T
2014-01-01
Objective To determine if increasing the hormone dose or eliminating the hormone-free interval improves key pharmacokinetic (PK) alterations caused by obesity during oral contraceptive (OC) use. Study design Obese (BMI ≥ 30 kg/m2), ovulatory, otherwise healthy, women received an OC containing 20 mcg ethinyl estradiol (EE)/100 mcg levonorgestrel (LNG) dosed cyclically (21 days active pills with 7-day placebo week) for two cycles and then were randomized for two additional cycles to: Continuous Cycling [CC, a dose neutral arm using the same OC with no hormone-free interval] or Increased Dose [ID, a dose escalation arm using an OC containing 30 mcg EE/150 mcg LNG cyclically]. During Cycle 2, 3, and 4, outpatient visits were performed to assess maximum serum concentration (Cmax), area under the curve (AUC0-∞), and time to steady state as well as pharmacodynamics. These key PK parameters were calculated and compared within groups between baseline and treatment cycles. Results A total of 31 women enrolled and completed the study (CC group n = 16; ID group n = 15). Demographics were similar between groups [mean BMI: CC 38kg/m2 (SD 5.1), ID 41kg/m2 (SD 7.6)]. At baseline, the key LNG PK parameters were no different between groups; average time to reach steady-state was 12 days in both groups; Cmax were CC: 3.82 ± 1.28 ng/mL and ID: 3.13 ± 0.87 ng/mL; and AUC0-∞ were CC: 267 ± 115 hr*ng/mL and ID: 199±75 hr*ng/mL. Following randomization, the CC group maintained steady-state serum levels whereas the ID group had a significantly higher Cmax (p< 0.001) but again required 12 days to achieve steady-state. However, AUC was not significantly different between CC (412 ± 255 hr*ng/mL) and ID (283 ± 130 hr*ng/mL). Forty-five percent (14/31) of the study population had evidence of an active follicle-like structure prior to randomization and afterwards this decreased to 9% (3/31). 
Conclusion Both increasing the OC dose and continuous dosing appear to counteract the impact of obesity on key OC PK parameters. PMID:25070547
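The key noncompartmental PK parameters compared above (Cmax, AUC0-∞) can be computed from concentration-time data as sketched below; the sampled values and terminal rate constant are illustrative, not the study's LNG measurements:

```python
import numpy as np

# Noncompartmental PK estimates: Cmax is the peak observed concentration;
# AUC(0-inf) is the trapezoidal AUC to the last sample plus the standard
# terminal extrapolation C_last / lambda_z.  Data below are illustrative.
def cmax(conc):
    return max(conc)

def auc_inf(t, conc, lam_z):
    auc_t = float(np.sum((conc[1:] + conc[:-1]) * np.diff(t)) / 2.0)
    return auc_t + conc[-1] / lam_z

t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 24.0])   # hours post-dose
c = np.array([0.0, 3.2, 2.8, 2.1, 1.2, 0.2])    # ng/mL
print(cmax(c))                                  # 3.2 ng/mL
print(round(auc_inf(t, c, lam_z=0.1), 2))       # 29.3 hr*ng/mL
```

Comparing these quantities within subjects before and after randomization is exactly the within-group contrast the study design describes.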
Modeling the dynamics of piano keys
NASA Astrophysics Data System (ADS)
Brenon, Celine; Boutillon, Xavier
2003-10-01
The models of piano keys available in the literature are crude: two degrees of freedom and very few dynamical or geometrical parameters. Experiments on different piano mechanisms (upright, grand, and one type of digital keyboard) exhibit strong differences in the two successive phases of the key motion that are controlled by the finger. Understanding the controllability of the escapement velocity (typically a few percent for professional pianists), the differences between upright and grand pianos, the rationale for the numerous independent adjustments made by technicians, and the feel experienced by the pianist requires sophisticated modeling. In addition to the inertia of the six independently moving parts of a grand piano mechanism, careful modeling of friction at the pivots and between the jack and the roller, of damping and nonlinearities in the felts, and of internal springs will be presented. Simulations will be compared with measurements of the motions of the different parts. Currently, the first phase of the motion and the transition to the second phase are well understood, while some progress must still be made to describe correctly the short but important phase before the escapement of the hammer. [Work done in part at the Laboratory for Musical Acoustics, Paris.]
Matschek, Janine; Bullinger, Eric; von Haeseler, Friedrich; Skalej, Martin; Findeisen, Rolf
2017-02-01
Radiofrequency ablation is a valuable tool in the treatment of many diseases, especially cancer. However, controlled heating up to apoptosis of the desired target tissue in complex situations, e.g. in the spine, is challenging and requires experienced interventionalists. For such challenging situations, a mathematical model of radiofrequency ablation makes it possible to understand, improve and optimise the outcome of the medical therapy. The main contribution of this work is the derivation of a tailored, yet expandable, mathematical model for the simulation, analysis, planning and control of radiofrequency ablation in complex situations. The dynamic model consists of partial differential equations that describe the potential and temperature distribution during the intervention. To account for multipolar operation, time-dependent boundary conditions are introduced. Spatially distributed parameters, like tissue conductivity and blood perfusion, make it possible to describe the complex 3D environment representing the diverse tissue types involved in the spine. To identify the key parameters affecting the prediction quality of the model, the influence of the parameters on the temperature distribution is investigated via a sensitivity analysis. Simulations underpin the quality of the derived model and the analysis approach. The proposed modelling and analysis schemes set the basis for intervention planning, state- and parameter estimation, and control. Copyright © 2016. Published by Elsevier Inc.
Compressed Sensing for Metrics Development
NASA Astrophysics Data System (ADS)
McGraw, R. L.; Giangrande, S. E.; Liu, Y.
2012-12-01
Models by their very nature tend to be sparse, in the sense that they are designed, with a few optimally selected key parameters, to provide simple yet faithful representations of a complex observational dataset or computer simulation output. This paper seeks to apply methods from compressed sensing (CS), a new area of applied mathematics currently undergoing very rapid development (see for example Candes et al., 2006), to FASTER needs for new approaches to model evaluation and metrics development. The CS approach will be illustrated for a time series generated using a few-parameter (i.e. sparse) model. A seemingly incomplete set of measurements, taken at just a few random sampling times, is then used to recover the hidden model parameters. Remarkably, there is a sharp transition in the number of required measurements, beyond which both the model parameters and time series are recovered exactly. Applications to data compression, data sampling/collection strategies, and the development of metrics for model evaluation by comparison with observation (e.g. evaluation of model predictions of cloud fraction using cloud radar observations) are presented and discussed in the context of the CS approach. Cited reference: Candes, E. J., Romberg, J., and Tao, T. (2006), Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509.
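The sparse-recovery experiment described above can be sketched as follows: a signal with a few nonzero parameters is sampled through a random measurement matrix and recovered from far fewer measurements than unknowns. Orthogonal Matching Pursuit is used here as a simple greedy stand-in for the L1-based reconstruction of Candes et al.; the dimensions, seed, and support locations are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x_s = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                      # unknowns, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 17, 40]] = [1.5, -2.0, 0.7]   # the few hidden "model parameters"
y = A @ x_true                           # incomplete measurements (m << n)

x_hat = omp(A, y, k)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

With enough measurements relative to the sparsity, recovery is exact, mirroring the sharp transition noted in the abstract.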
NASA Astrophysics Data System (ADS)
Petropoulos, George; Wooster, Martin J.; Carlson, Toby N.; Drake, Nick
2010-05-01
Accurate information on spatially explicit estimates of key land-atmosphere fluxes and related land surface parameters is of key importance in a range of disciplines including hydrology, meteorology, agriculture and ecology. Estimation of these parameters from remote sensing frequently employs the integration of such data with mathematical representations of the transfers of energy, mass and radiation within the soil-vegetation-atmosphere continuum, known as Soil Vegetation Atmosphere Transfer (SVAT) models. The ability of one such inversion modelling scheme to resolve key surface energy fluxes and soil surface moisture content is examined here using data from a multispectral high spatial resolution imaging instrument, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and the SimSphere one-dimensional SVAT model. The accuracy of the investigated methodology, the so-called "triangle" method, is verified using validated ground observations from selected days at nine CARBOEUROPE IP sites representing a variety of climatic, topographic and environmental conditions. Subsequently, a new framework is suggested for the retrieval of two additional parameters by the investigated method, namely the Evaporative (EF) and Non-Evaporative (NEF) Fractions. Results indicated close agreement between the inverted surface flux and surface moisture availability maps, as well as the EF and NEF parameters, and the observations, both spatially and temporally, with accuracies comparable to those obtained in similar experiments with high spatial resolution data. Regional inspection of the inverted surface flux maps showed an explainable distribution in the range of the inverted parameters in relation to the surface heterogeneity.
The overall performance of the "triangle" inversion methodology was found to be affected predominantly by how representative the SVAT model initialisation is of the test site environment, most importantly the atmospheric conditions supplied as the model's initial conditions. This study represents the first comprehensive evaluation of the performance of this particular methodological implementation in a European setting using the SimSphere SVAT model with ASTER data. The present work is also timely in that a variation of this specific inversion methodology has been proposed for the operational retrieval of soil surface moisture content by the National Polar-orbiting Operational Environmental Satellite System (NPOESS), a series of satellite platforms due to be launched over the 12 years starting from 2012. KEYWORDS: micrometeorology, surface heat fluxes, soil moisture content, ASTER, triangle method, SimSphere, CarboEurope IP
Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah
2017-03-24
Wireless Sensor Networks (WSNs) consist of lightweight devices that measure sensitive data and are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) face severe security and privacy issues because of the direct accessibility of devices connected to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored polynomial distribution-based key establishment schemes and identified an issue: the resultant polynomial value is either storage-intensive or infeasible to compute when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce this computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as the responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and the storage required. The security justification of the proposed scheme has been completed using Rubin logic, which guarantees that the protocol strongly attains mutual validation and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during the authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
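The symmetric-polynomial idea underlying such schemes can be illustrated with a classic Blundo-style construction: a trusted party chooses a bivariate polynomial with f(x, y) = f(y, x), each node stores the univariate share f(id, ·), and any two nodes derive the same pairwise key without further interaction. The sketch below is a conceptual illustration, not the EKM protocol itself; the prime, degree, and node IDs are arbitrary.

```python
import random

# Public parameters (illustrative): prime modulus and polynomial degree t,
# which is the collusion threshold of the scheme.
P = 2**31 - 1
t = 3
random.seed(42)

# Symmetric coefficient matrix: c[i][j] == c[j][i] ensures f(x, y) == f(y, x).
c = [[0] * (t + 1) for _ in range(t + 1)]
for i in range(t + 1):
    for j in range(i, t + 1):
        c[i][j] = c[j][i] = random.randrange(P)

def f(x, y):
    """Evaluate the bivariate polynomial f(x, y) mod P."""
    return sum(c[i][j] * pow(x, i, P) * pow(y, j, P)
               for i in range(t + 1) for j in range(t + 1)) % P

def share(node_id):
    """A node stores g(y) = f(node_id, y) as t+1 coefficients."""
    return [sum(c[i][j] * pow(node_id, i, P) for i in range(t + 1)) % P
            for j in range(t + 1)]

def pairwise_key(my_share, peer_id):
    """Evaluate the stored share at the peer's ID to derive the shared key."""
    return sum(a * pow(peer_id, j, P) for j, a in enumerate(my_share)) % P

alice, bob = 1001, 2002
k_ab = pairwise_key(share(alice), bob)
k_ba = pairwise_key(share(bob), alice)
print(k_ab == k_ba)   # True: both sides derive f(alice, bob) = f(bob, alice)
```

Each node stores only t+1 coefficients, and the scheme is secure against coalitions of up to t compromised nodes, which is the trade-off the storage/computation discussion above is about.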
Continuous Variable Quantum Key Distribution Using Polarized Coherent States
NASA Astrophysics Data System (ADS)
Vidiella-Barranco, A.; Borelli, L. F. M.
We discuss a continuous-variable method of quantum key distribution employing strongly polarized coherent states of light. The key encoding is performed using the variables known as Stokes parameters, rather than the field quadratures. Their quantum counterparts, the Stokes operators Ŝi (i = 1, 2, 3), constitute a set of non-commuting operators, with the precision of simultaneous measurements of any pair of them limited by an uncertainty-like relation. Alice transmits a conveniently modulated two-mode coherent state, and Bob randomly measures one of the Stokes parameters of the incoming beam. After performing reconciliation and privacy amplification procedures, it is possible to distill a secret common key. We also consider a non-ideal situation, in which coherent states with thermal noise, instead of pure coherent states, are used for encoding.
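For reference, the classical Stokes parameters that play the role of the key variables here can be computed from the two complex field amplitudes of the beam. The sketch below uses one common sign convention (conventions differ between texts) and checks the fully-polarized identity S0² = S1² + S2² + S3² on an example state.

```python
import cmath

def stokes(ex, ey):
    """Classical Stokes parameters from two complex field amplitudes
    (one common sign convention; conventions differ between texts)."""
    s0 = abs(ex) ** 2 + abs(ey) ** 2
    s1 = abs(ex) ** 2 - abs(ey) ** 2
    s2 = 2 * (ex * ey.conjugate()).real
    s3 = 2 * (ex * ey.conjugate()).imag
    return s0, s1, s2, s3

# Example: circular polarization (ey lags ex by 90 degrees)
ex, ey = 1.0 + 0j, cmath.exp(-1j * cmath.pi / 2)
s0, s1, s2, s3 = stokes(ex, ey)
print(s0, s1, s2, s3)   # approximately (2, 0, 0, 2)

# Fully polarized light satisfies S0^2 == S1^2 + S2^2 + S3^2
print(abs(s0**2 - (s1**2 + s2**2 + s3**2)) < 1e-9)   # True
```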
User's design handbook for a Standardized Control Module (SCM) for DC to DC Converters, volume 2
NASA Technical Reports Server (NTRS)
Lee, F. C.
1980-01-01
A unified design procedure is presented for selecting the key SCM control parameters for an arbitrarily given power stage configuration and parameter values, such that all regulator performance specifications can be met and optimized concurrently in a single design attempt. All key results and performance indices for buck, boost, and buck/boost switching regulators that are relevant to SCM design considerations are included to facilitate frequent reference.
Sensitivity of black carbon concentrations and climate impact to aging and scavenging in OsloCTM2-M7
NASA Astrophysics Data System (ADS)
Lund, Marianne T.; Berntsen, Terje K.; Samset, Bjørn H.
2017-05-01
Accurate representation of black carbon (BC) concentrations in climate models is a key prerequisite for understanding its net climate impact. BC aging and scavenging are treated very differently in current models. Here, we examine the sensitivity of three-dimensional (3-D), temporally resolved BC concentrations to perturbations of individual model processes in the chemistry transport model OsloCTM2-M7. The main goals are to identify processes related to aerosol aging and scavenging where additional observational constraints may most effectively improve model performance, in particular for BC vertical profiles, and to give an indication of how model uncertainties in the BC life cycle propagate into uncertainties in climate impacts. Coupling OsloCTM2 with the microphysical aerosol module M7 allows us to investigate aging processes in more detail than is possible with a simpler bulk parameterization. Here we include, for the first time in this model, a treatment of condensation of nitric acid on BC. Using kernels, we also estimate the range of radiative forcing and global surface temperature responses that may result from perturbations to key tunable parameters in the model. We find that BC concentrations in OsloCTM2-M7 are particularly sensitive to convective scavenging and the inclusion of condensation by nitric acid. The largest changes are found at higher altitudes around the Equator and at low altitudes over the Arctic. Convective scavenging of hydrophobic BC, and the amount of sulfate required for BC aging, are found to be key parameters, potentially reducing bias against HIAPER Pole-to-Pole Observations (HIPPO) flight-based measurements by 60 to 90%. Even with extensive tuning, however, the total impact on global-mean surface temperature is estimated to be less than 0.04 K. Similar results are found when nitric acid is allowed to condense on the BC aerosols.
We conclude, in line with previous studies, that a shorter atmospheric BC lifetime broadly improves the comparison with measurements over the Pacific. However, we also find that the model-measurement discrepancies cannot be uniquely attributed to uncertainties in a single process or parameter. Model development therefore needs to focus on improvements to individual processes, supported by a broad range of observational and experimental data, rather than tuning of individual, effective parameters such as the global BC lifetime.
NASA Astrophysics Data System (ADS)
Fuchs, Christian; Poulenard, Sylvain; Perlot, Nicolas; Riedi, Jerome; Perdigues, Josep
2017-02-01
Optical satellite communications play an increasingly important role in a number of space applications. However, if the system concept includes optical links to the surface of the Earth, the limited availability due to clouds and other atmospheric impacts needs to be considered to give a reliable estimate of the system performance. An optical ground station (OGS) network is required to increase the availability to acceptable figures. In order to realistically estimate the performance and achievable throughput in various scenarios, a simulation tool has been developed under ESA contract. The tool is based on a database of 5 years of cloud data with global coverage and can thus easily simulate different optical ground station network topologies for LEO- and GEO-to-ground links. Further effects, such as limited availability due to sun blinding and atmospheric turbulence, are considered as well. This paper gives an overview of the simulation tool, the cloud database, and the modelling behind the simulation scheme. Several scenarios have been investigated: LEO-to-ground links, GEO feeder links, and GEO relay links. The key results of the optical ground station network optimization and throughput estimations will be presented. The implications of key technical parameters, such as the memory size aboard the satellite, will be discussed. Finally, potential system designs for LEO- and GEO-systems will be presented.
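A back-of-envelope version of the network-availability question reads as follows: if cloud blockage at the sites were independent (a strong simplification; the whole point of the simulation tool above is to use correlated, measured cloud data), the combined availability is one minus the product of the per-site outage probabilities. The site values below are illustrative.

```python
def network_availability(site_avail):
    """Probability that at least one ground station has a clear line of sight,
    assuming (unrealistically) independent cloud blockage at each site."""
    p_all_blocked = 1.0
    for p in site_avail:
        p_all_blocked *= (1.0 - p)
    return 1.0 - p_all_blocked

# Illustrative per-site single-link availabilities
sites = [0.55, 0.60, 0.70]
print(round(network_availability(sites), 4))   # 0.946
```

Three mediocre sites already exceed 94% combined availability under the independence assumption, which is why site diversity is the standard mitigation; correlated weather systems reduce this figure in practice.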
Wang, Shirley V; Schneeweiss, Sebastian; Berger, Marc L; Brown, Jeffrey; de Vries, Frank; Douglas, Ian; Gagne, Joshua J; Gini, Rosa; Klungel, Olaf; Mullins, C Daniel; Nguyen, Michael D; Rassen, Jeremy A; Smeeth, Liam; Sturkenboom, Miriam
2017-09-01
Defining a study population and creating an analytic dataset from longitudinal healthcare databases involves many decisions. Our objective was to catalogue scientific decisions underpinning study execution that should be reported to facilitate replication and enable assessment of validity of studies conducted in large healthcare databases. We reviewed key investigator decisions required to operate a sample of macros and software tools designed to create and analyze analytic cohorts from longitudinal streams of healthcare data. A panel of academic, regulatory, and industry experts in healthcare database analytics discussed and added to this list. Evidence generated from large healthcare encounter and reimbursement databases is increasingly being sought by decision-makers. Varied terminology is used around the world for the same concepts. Agreeing on terminology and which parameters from a large catalogue are the most essential to report for replicable research would improve transparency and facilitate assessment of validity. At a minimum, reporting for a database study should provide clarity regarding operational definitions for key temporal anchors and their relation to each other when creating the analytic dataset, accompanied by an attrition table and a design diagram. A substantial improvement in reproducibility, rigor and confidence in real world evidence generated from healthcare databases could be achieved with greater transparency about operational study parameters used to create analytic datasets from longitudinal healthcare databases. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
Kathleen Bandt, S; Dacey, Ralph G
2017-09-01
The authors propose a novel bibliometric index, the reverberation index (r-index), as a comparative assessment tool for determining differential reverberation between scientific fields for a given scientific entity. Alternatively, it may allow comparison of 2 similar scientific entities within a single scientific field. This index is calculated using a relatively simple 3-step process. Briefly, Thomson Reuters' Web of Science is used to produce a citation report for a unique search parameter (this may be an author, journal article, or topical key word). From this citation report, a list of citing journals is retrieved, from which a weighted ratio of citation patterns across journals can be calculated. This r-index is then used to compare the reverberation of the original search parameter across different fields of study or wherever a comparison is required. The advantage of this novel tool is its ability to transcend a specific component of the scientific process. This affords application to a diverse range of entities, including an author, a journal article, or a topical key word, for effective comparison of that entity's reverberation within a scientific arena. The authors introduce the context for and applications of the r-index, emphasizing neurosurgical topics and journals for illustration purposes. It should be kept in mind, however, that the r-index is readily applicable across all fields of study.
Data acquisition, remote control and equipment monitoring for ISOLDE RILIS
NASA Astrophysics Data System (ADS)
Rossel, R. E.; Fedosseev, V. N.; Marsh, B. A.; Richter, D.; Rothe, S.; Wendt, K. D. A.
2013-12-01
With a steadily increasing on-line operation time up to a record 3000 h in the year 2012, the Resonance Ionization Laser Ion Source (RILIS) is one of the key components of the ISOLDE on-line isotope user facility at CERN. Ion beam production using the RILIS is essential for many experiments due to the unmatched combination of ionization efficiency and selectivity. To meet the reliability requirements the RILIS is currently operated in shift duty for continuous maintenance of crucial laser parameters such as wavelength, power, beam position and timing, as well as ensuring swift intervention in case of an equipment malfunction. A recent overhaul of the RILIS included the installation of new pump lasers, commercial dye lasers and a complementary, fully solid-state titanium:sapphire laser system. The framework of the upgrade also required the setup of a network-extended, LabVIEW-based system for data acquisition, remote control and equipment monitoring, to support RILIS operators as well as ISOLDE users. The system contributes to four key aspects of RILIS operation: equipment monitoring, machine protection, automated self-reliance, and collaborative data acquisition. The overall concept, technologies used, implementation status and recent applications during the 2012 on-line operation period will be presented along with a summary of future developments.
Neuert, Mark A C; Dunning, Cynthia E
2013-09-01
Strain energy-based adaptive material models are used to predict bone resorption resulting from stress shielding induced by prosthetic joint implants. Generally, such models are governed by two key parameters: a homeostatic strain-energy state (K) and a threshold deviation from this state required to initiate bone reformation (s). A refinement procedure has been performed to estimate these parameters in the femur and glenoid; this study investigates the specific influences of these parameters on resulting density distributions in the distal ulna. A finite element model of a human ulna was created using micro-computed tomography (µCT) data, initialized to a homogeneous density distribution, and subjected to approximate in vivo loading. Values for K and s were tested, and the resulting steady-state density distribution compared with values derived from µCT images. The sensitivity of these parameters to initial conditions was examined by altering the initial homogeneous density value. The refined model parameters selected were then applied to six additional human ulnae to determine their performance across individuals. Model accuracy using the refined parameters was found to be comparable with that found in previous studies of the glenoid and femur, and gross bone structures, such as the cortical shell and medullary canal, were reproduced. The model was found to be insensitive to initial conditions; however, a fair degree of variation was observed between the six specimens. This work represents an important contribution to the study of changes in load transfer in the distal ulna following the implementation of commercial orthopedic implants.
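A minimal sketch of the kind of update rule governed by the two key parameters K and s: remodeling is driven by the deviation of local strain energy from the homeostatic set point, with no change inside a dead zone of relative half-width s. The specific rule, constants, and clamping bounds below are illustrative assumptions, not the study's calibrated model.

```python
def density_update(rho, U, K, s, B=1.0, dt=1.0,
                   rho_min=0.01, rho_max=1.73):
    """One time step of strain-energy-based adaptive remodeling for one element.

    rho : current apparent density (g/cm^3)
    U   : strain energy density per unit mass at this element
    K   : homeostatic set point
    s   : relative half-width of the dead zone (no remodeling inside it)
    B   : remodeling rate constant (illustrative)
    """
    stimulus = U - K
    if abs(stimulus) <= s * K:        # inside the lazy zone: no change
        drho = 0.0
    else:
        drho = B * stimulus * dt      # resorb if understrained, form if overstrained
    return min(max(rho + drho, rho_min), rho_max)

# An understrained element (stress shielding) loses density...
print(density_update(rho=1.0, U=0.002, K=0.004, s=0.1))   # 0.998
# ...while an element inside the dead zone keeps it.
print(density_update(rho=1.0, U=0.0041, K=0.004, s=0.1))  # 1.0
```

Iterating this rule over all elements of a loaded finite element mesh until the density field stops changing yields the steady-state distributions compared against µCT data above.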
NASA Astrophysics Data System (ADS)
Price, M. A.; Murphy, A.; Butterfield, J.; McCool, R.; Fleck, R.
2011-05-01
The predictive methods currently used for material specification, component design and the development of manufacturing processes need to evolve beyond the current 'metal-centric' state of the art if advanced composites are to realise their potential in delivering sustainable transport solutions. There are, however, significant technical challenges associated with this process. Deteriorating environmental, political, economic and social conditions across the globe have resulted in unprecedented pressures to improve the operational efficiency of the manufacturing sector generally and to change perceptions regarding the environmental credentials of transport systems in particular. There is a need to apply new technologies and develop new capabilities to ensure commercial sustainability in the face of twenty-first-century economic and climatic conditions as well as transport market demands. A major technology gap exists between design, analysis and manufacturing processes in both the OEMs and the smaller companies that make up the SME-based supply chain. As regulatory requirements align with environmental needs, manufacturers are increasingly responsible for the broader lifecycle aspects of vehicle performance. These include not only manufacture and supply but also disposal and re-use or re-cycling. In order to make advances in the reduction of emissions coupled with improved economic efficiency through the provision of advanced lightweight vehicles, four key challenges are identified: material systems, manufacturing systems, integrated design methods using digital manufacturing tools, and validation systems. This paper presents a project designed to address these four key issues, using at its core a digital framework for the creation and management of key parameters related to the lifecycle performance of thermoplastic composite parts and structures. It aims to provide capability for the proposition, definition, evaluation and demonstration of advanced lightweight structures for new-generation vehicles in the context of whole-life performance parameters.
Solís-Dominguez, Fernando A; White, Scott A; Hutter, Travis Borrillo; Amistadi, Mary Kay; Root, Robert A; Chorover, Jon; Maier, Raina M
2012-01-17
Phytostabilization of mine tailings acts to mitigate both eolian dispersion and water erosion events which can disseminate barren tailings over large distances. This technology uses plants to establish a vegetative cover to permanently immobilize contaminants in the rooting zone, often requiring addition of an amendment to assist plant growth. Here we report the results of a greenhouse study that evaluated the ability of six native plant species to grow in extremely acidic (pH ∼ 2.5) metalliferous (As, Pb, Zn: 2000-3000 mg kg(-1)) mine tailings from Iron King Mine Humboldt Smelter Superfund site when amended with a range of compost concentrations. Results revealed that three of the six plant species tested (buffalo grass, mesquite, and catclaw acacia) are good candidates for phytostabilization at an optimum level of 15% compost (w/w) amendment showing good growth and minimal shoot accumulation of metal(loid)s. A fourth candidate, quailbush, also met all criteria except for exceeding the domestic animal toxicity limit for shoot accumulation of zinc. A key finding of this study was that the plant species that grew most successfully on these tailings significantly influenced key tailings parameters; direct correlations between plant biomass and both increased tailings pH and neutrophilic heterotrophic bacterial counts were observed. We also observed decreased iron oxidizer counts and decreased bioavailability of metal(loid)s mainly as a result of compost amendment. Taken together, these results suggest that the phytostabilization process reduced tailings toxicity as well as the potential for metal(loid) mobilization. This study provides practical information on plant and tailings characteristics that is critically needed for successful implementation of assisted phytostabilization on acidic, metalliferous mine tailings sites.
Core Problem: Does the CV Parent Body Magnetization require differentiation?
NASA Astrophysics Data System (ADS)
O'Brien, T.; Tarduno, J. A.; Smirnov, A. V.
2016-12-01
Evidence for the presence of past dynamos from magnetic studies of meteorites can provide key information on the nature and evolution of parent bodies. However, the suggestion of a past core dynamo for the CV parent body based on the study of the Allende meteorite has led to a paradox: a core dynamo requires differentiation, evidence for which is missing in the meteorite record. The key parameter used to distinguish core dynamo versus external field mechanisms is absolute field paleointensity, with high values (>>1 μT) favoring the former. Here we explore the fundamental requirements for absolute field intensity measurement in the Allende meteorite: single domain grains that are non-interacting. Magnetic hysteresis and directional data define strong magnetic interactions, negating a standard interpretation of paleointensity measurements in terms of absolute paleofield values. The Allende low field magnetic susceptibility is dominated by magnetite and FeNi grains, whereas the magnetic remanence is carried by an iron sulfide whose remanence-carrying capacity increases with laboratory cycling at constant field values, indicating reordering. The iron sulfide and FeNi grains are in close proximity, providing mineralogical context for interactions. We interpret the magnetization of Allende to record the intense early solar wind with metal-sulfide interactions amplifying the field, giving the false impression of a higher field value in some prior studies. An undifferentiated CV parent body is thus compatible with Allende's magnetization. Early solar wind magnetization should be the null hypothesis for evaluating the source of magnetization for chondrites and other meteorites.
Abraham, Sushil; Bain, David; Bowers, John; Larivee, Victor; Leira, Francisco; Xie, Jasmina
2015-01-01
The technology transfer of biological products is a complex process requiring control of multiple unit operations and parameters to ensure product quality and process performance. To achieve product commercialization, the technology transfer sending unit must successfully transfer knowledge about both the product and the process to the receiving unit. A key strategy for maximizing successful scale-up and transfer efforts is the effective use of engineering and shake-down runs to confirm operational performance and product quality prior to embarking on good manufacturing practice runs such as process performance qualification runs. We discuss the key factors to consider when deciding whether to perform shake-down or engineering runs. We also present industry benchmarking results on how engineering runs are used in drug substance technology transfers, alongside the main themes and best practices that have emerged. Our goal is to provide companies with a framework for ensuring "right first time" technology transfers with effective deployment of resources within increasingly aggressive timeline constraints. © PDA, Inc. 2015.
Stiff, light, strong and ductile: nano-structured High Modulus Steel.
Springer, H; Baron, C; Szczepaniak, A; Uhlenwinkel, V; Raabe, D
2017-06-05
Structural material development for lightweight applications aims at improving the key parameters strength, stiffness and ductility at low density, but these properties are typically mutually exclusive. Here we present how we overcome this trade-off with a new class of nano-structured steel-TiB2 composites synthesised in-situ via bulk metallurgical spray-forming. Owing to the nano-sized dispersion of the TiB2 particles of extreme stiffness and low density - obtained by the in-situ formation with rapid solidification kinetics - the new material has the mechanical performance of advanced high strength steels, and a 25% higher stiffness/density ratio than any of the currently used high strength steels, aluminium, magnesium and titanium alloys. This renders this High Modulus Steel the first density-reduced, high stiffness, high strength and yet ductile material which can be produced on an industrial scale. Also ideally suited for 3D printing technology, this material addresses all key requirements for high performance and cost effective lightweight design.
An Asymmetric Image Encryption Based on Phase Truncated Hybrid Transform
NASA Astrophysics Data System (ADS)
Khurana, Mehak; Singh, Hukum
2017-09-01
To enhance the security of the system and protect it from attackers, this paper proposes a new asymmetric cryptosystem based on a hybrid approach of Phase Truncated Fourier and Discrete Cosine Transform (PTFDCT), which adds nonlinearity by including cube and cube-root operations in the encryption and decryption paths, respectively. In this cryptosystem, random phase masks are used as encryption keys, the phase masks generated after the cube operation in the encryption process are reserved as decryption keys, and a cube-root operation is required to decrypt the image in the decryption process. The cube and cube-root operations introduced in the encryption and decryption paths make the system resistant against standard attacks. The robustness of the proposed cryptosystem has been analysed and verified on the basis of various parameters by simulation in MATLAB 7.9.0 (R2008a). The experimental results highlight the effectiveness and suitability of the proposed cryptosystem and show that the system is secure.
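The phase-truncation idea described in this abstract can be illustrated with a minimal one-dimensional sketch. It uses a plain Fourier transform rather than the paper's hybrid Fourier/DCT, and the mask values and array sizes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encrypt(img, rpm):
    # bind a random phase mask to the image, then Fourier transform
    spec = np.fft.fft(img * np.exp(1j * 2 * np.pi * rpm))
    amp, phase = np.abs(spec), np.exp(1j * np.angle(spec))
    # phase truncation keeps only the amplitude; the cube adds nonlinearity
    cipher = amp ** 3
    return cipher, phase          # truncated phase is reserved as the decryption key

def decrypt(cipher, phase, rpm):
    amp = cipher ** (1.0 / 3.0)   # cube-root operation on the decryption path
    img = np.fft.ifft(amp * phase) * np.exp(-1j * 2 * np.pi * rpm)
    return np.abs(img)

img = rng.random(64)              # placeholder 1-D "image"
rpm = rng.random(64)              # random phase mask (encryption key)
cipher, key = encrypt(img, rpm)
recovered = decrypt(cipher, key, rpm)
```

Because the cipher carries no phase information, only a holder of the reserved phase key can invert the cube and recover the image.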
Fourier-Mellin moment-based intertwining map for image encryption
NASA Astrophysics Data System (ADS)
Kaur, Manjit; Kumar, Vijay
2018-03-01
In this paper, a robust image encryption technique that utilizes Fourier-Mellin moments and an intertwining logistic map is proposed. The Fourier-Mellin moment-based intertwining logistic map has been designed to overcome the issue of low sensitivity to the input image. A Multi-objective Non-Dominated Sorting Genetic Algorithm (NSGA-II) based on Reinforcement Learning (MNSGA-RL) has been used to optimize the required parameters of the intertwining logistic map. Fourier-Mellin moments are used to make the secret keys more secure. Thereafter, permutation and diffusion operations are carried out on the input image using the secret keys. The performance of the proposed image encryption technique has been evaluated on five well-known benchmark images and compared with seven well-known existing encryption techniques. The experimental results reveal that the proposed technique outperforms the others in terms of entropy, correlation analysis, unified average changing intensity and number of changing pixel rate. The simulation results reveal that the proposed technique provides a high level of security and robustness against various types of attacks.
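The two difference metrics named in the abstract, the number of changing pixel rate (NPCR) and unified average changing intensity (UACI), have standard definitions that are easy to sketch; the random test images below are placeholders, not the paper's benchmark images:

```python
import numpy as np

def npcr(c1, c2):
    """Number of Changing Pixel Rate between two cipher images (percent)."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2):
    """Unified Average Changing Intensity for 8-bit images (percent)."""
    return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)

# two independent uniform-random 8-bit "cipher images" as placeholders;
# for such images the ideal values are roughly NPCR 99.61% and UACI 33.46%
rng = np.random.default_rng(1)
c1 = rng.integers(0, 256, (64, 64))
c2 = rng.integers(0, 256, (64, 64))
```

A good cipher should reach values near these ideals when the two ciphertexts come from plaintexts differing in a single pixel.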
NASA Astrophysics Data System (ADS)
Menichetti, Roberto; Kanekal, Kiran H.; Kremer, Kurt; Bereau, Tristan
2017-09-01
The partitioning of small molecules in cell membranes—a key parameter for pharmaceutical applications—typically relies on experimentally available bulk partitioning coefficients. Computer simulations provide a structural resolution of the insertion thermodynamics via the potential of mean force but require significant sampling at the atomistic level. Here, we introduce high-throughput coarse-grained molecular dynamics simulations to screen thermodynamic properties. This application of physics-based models in a large-scale study of small molecules establishes linear relationships between partitioning coefficients and key features of the potential of mean force. This allows us to predict the structure of the insertion from bulk experimental measurements for more than 400 000 compounds. The potential of mean force hereby becomes an easily accessible quantity—already recognized for its high predictability of certain properties, e.g., passive permeation. Further, we demonstrate how coarse graining helps reduce the size of chemical space, enabling a hierarchical approach to screening small molecules.
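The link between a bulk partitioning coefficient and the insertion thermodynamics (the depth of the potential of mean force) is the usual Boltzmann relation K = exp(-ΔG/RT); a minimal sketch, with the temperature and free-energy values chosen purely for illustration:

```python
import math

def partition_coefficient(delta_g_kj_per_mol, temperature_k=300.0):
    """Partitioning coefficient from the transfer free energy (PMF depth),
    via K = exp(-dG / RT). Negative dG means favourable membrane insertion."""
    R = 8.314e-3  # gas constant, kJ mol^-1 K^-1
    return math.exp(-delta_g_kj_per_mol / (R * temperature_k))

k_neutral = partition_coefficient(0.0)     # no preference -> K = 1
k_inserts = partition_coefficient(-5.0)    # favourable transfer -> K > 1
```

Linear relationships between log K and PMF features, as established in the paper, make it possible to run this mapping in reverse from bulk measurements.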
A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism
NASA Astrophysics Data System (ADS)
Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo
2015-03-01
In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on the permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in the permutation and diffusion stages respectively. Chaotic state variables produced at high computational cost are not sufficiently used. (2) The key stream solely depends on the secret key, and hence the cryptosystem is vulnerable to known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and improve the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out, and the results demonstrate the superior security and high efficiency of the scheme.
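A permutation-diffusion cipher of the kind criticized here can be sketched with a single logistic map. This is the generic fixed-parameter architecture the paper improves upon, not the proposed scheme itself, and the key values are illustrative:

```python
import numpy as np

def logistic_stream(x0, mu, n, skip=100):
    """Keystream from the logistic map x <- mu*x*(1-x), skipping the transient."""
    x = x0
    for _ in range(skip):
        x = mu * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        out[i] = x
    return out

def encrypt(img, key=(0.3567, 3.99)):
    flat = img.flatten()
    s = logistic_stream(key[0], key[1], flat.size)
    permuted = flat[np.argsort(s)]        # permutation stage: shuffle pixel positions
    ks = np.floor(s * 256).astype(int)
    cipher = np.empty_like(flat)
    prev = 0                              # diffusion stage: chain each pixel to the last
    for i in range(flat.size):
        prev = (int(permuted[i]) + ks[i] + prev) % 256
        cipher[i] = prev
    return cipher.reshape(img.shape)

def decrypt(cipher, key=(0.3567, 3.99)):
    flat = cipher.flatten()
    s = logistic_stream(key[0], key[1], flat.size)
    ks = np.floor(s * 256).astype(int)
    permuted = np.empty_like(flat)
    prev = 0
    for i in range(flat.size):
        permuted[i] = (int(flat[i]) - ks[i] - prev) % 256
        prev = int(flat[i])
    out = np.empty_like(flat)
    out[np.argsort(s)] = permuted         # undo the permutation
    return out.reshape(cipher.shape)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)  # placeholder image
cipher = encrypt(img)
```

Note how the keystream here depends only on the secret key, which is exactly the known/chosen-plaintext weakness the dynamic selection mechanism is designed to remove.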
Secure Service Invocation in a Peer-to-Peer Environment Using JXTA-SOAP
NASA Astrophysics Data System (ADS)
Laghi, Maria Chiara; Amoretti, Michele; Conte, Gianni
The effective convergence of service-oriented architectures (SOA) and peer-to-peer (P2P) is an urgent task, with many important applications ranging from e-business to ambient intelligence. A considerable standardization effort is being carried out by both the SOA and P2P communities, but a complete platform for the development of secure, distributed applications is still missing. In this context, the result of our research and development activity is JXTA-SOAP, an official extension for JXTA enabling Web Service sharing in peer-to-peer networks. Recently we focused on security aspects, providing JXTA-SOAP with a general security management system, and specialized policies that target both the J2SE and J2ME versions of the component. Among others, we implemented a policy based on Multimedia Internet KEYing (MIKEY), which can be used to create a key pair and all the required parameters for encryption and decryption of service messages in consumer and provider peers running on resource-constrained devices.
NASA Technical Reports Server (NTRS)
Cross, Cynthia D.; Lewis, John F.; Barido, Richard A.; Carrasquillo, Robyn; Rains, George E.
2011-01-01
Recent changes in the overall NASA vision have resulted in further cost and schedule challenges for the Orion program. As a result, additional scrutiny has been focused on the use of new developments for hardware in the environmental control and life support systems. This paper examines the Orion architecture as it is envisioned to support missions to the International Space Station and future exploration missions, and determines what, if any, functions can be satisfied through the use of existing, heritage hardware designs. An initial evaluation of each component is included, and where a heritage component was deemed likely, further details are examined. Key technical parameters, mass, volume and vibration loads are a few of the specific items that are evaluated. Where heritage hardware has been identified that may be substituted into the Orion architecture, a discussion of key requirement changes that may need to be made, as well as recommendations for further evaluating applicability, is noted.
NASA Astrophysics Data System (ADS)
Wang, Yi; Zhang, Ao; Ma, Jing
2017-07-01
Minimum-shift keying (MSK) has the advantages of constant envelope, continuous phase, and high spectral efficiency, and is applied in radio communication and optical fiber communication. MSK modulation with coherent detection is proposed for the ground-to-satellite laser communication system; in addition, considering the noise inherent to the uplink, such as intensity scintillation and beam wander, the communication performance of the MSK modulation system with coherent detection is studied for the ground-to-satellite laser uplink. Based on the gamma-gamma channel model, a closed form for the bit error rate (BER) of MSK modulation with coherent detection is derived. The BER performance of the MSK modulation system is simulated and analyzed in weak, medium, and strong turbulence. To meet the requirements of the ground-to-satellite coherent MSK system and to optimize the parameters and configuration of the transmitter and receiver, the influence of the beam divergence angle, the zenith angle, the transmitter beam radius, and the receiver diameter is studied.
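Lacking the paper's closed form, the average BER over a gamma-gamma channel can be approximated by Monte Carlo. The instantaneous-BER expression and the turbulence parameters below are generic textbook choices for a coherent binary system, not values taken from the paper:

```python
import math
import random

def gamma_gamma(alpha, beta, rng):
    """One gamma-gamma irradiance sample: the product of two independent
    unit-mean gamma variates (large- and small-scale turbulence cells)."""
    return rng.gammavariate(alpha, 1.0 / alpha) * rng.gammavariate(beta, 1.0 / beta)

def avg_ber(snr, alpha, beta, n=20_000, seed=7):
    """Monte-Carlo average of the instantaneous BER 0.5*erfc(sqrt(snr * I))
    over gamma-gamma fading (a sketch of the averaging, not the closed form)."""
    rng = random.Random(seed)
    return sum(0.5 * math.erfc(math.sqrt(snr * gamma_gamma(alpha, beta, rng)))
               for _ in range(n)) / n

ber_weak   = avg_ber(10.0, 11.6, 10.1)  # illustrative weak-turbulence parameters
ber_strong = avg_ber(10.0, 4.0, 1.9)    # illustrative strong-turbulence parameters
```

As expected, the average BER rises as the turbulence strengthens and falls as the mean SNR increases.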
El Allaki, Farouk; Harrington, Noel; Howden, Krista
2016-11-01
The objectives of this study were (1) to estimate the annual sensitivity of Canada's bTB surveillance system and its three system components (slaughter surveillance, export testing and disease investigation) using a scenario tree modelling approach, and (2) to identify key model parameters that influence the estimates of the surveillance system sensitivity (SSSe). To achieve these objectives, we designed stochastic scenario tree models for the three surveillance system components included in the analysis. Demographic data, slaughter data, export testing data, and disease investigation data from 2009 to 2013 were extracted for input into the scenario trees. Sensitivity analysis was conducted to identify the parameters most influential on SSSe estimates. The median annual SSSe estimates generated from the study were very high, ranging from 0.95 (95% probability interval [PI]: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). Median annual sensitivity estimates for the slaughter surveillance component ranged from 0.95 (95% PI: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). This shows slaughter surveillance to be the major contributor to overall surveillance system sensitivity, with a high probability of detecting M. bovis infection if present at a prevalence of 0.00028% or greater during the study period. The export testing and disease investigation components had extremely low component sensitivity estimates: the maximum median sensitivity estimates were 0.02 (95% PI: 0.014-0.023) and 0.0061 (95% PI: 0.0056-0.0066) respectively. The three most influential input parameters on the model's output (SSSe) were the probability of a granuloma being detected at slaughter inspection, the probability of a granuloma being present in older animals (≥12 months of age), and the probability of a granuloma sample being submitted to the laboratory.
Additional studies are required to reduce the levels of uncertainty and variability associated with these three parameters influencing the surveillance system sensitivity. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
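The scenario-tree combination of component sensitivities into an overall surveillance system sensitivity can be sketched as a small Monte Carlo; the beta distributions below are hypothetical stand-ins for the study's component estimates, not its fitted inputs:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
# hypothetical beta distributions standing in for component sensitivities:
cse_slaughter     = rng.beta(90, 4, n)    # high-sensitivity slaughter surveillance
cse_export        = rng.beta(2, 98, n)    # low-sensitivity export testing
cse_investigation = rng.beta(1, 160, n)   # low-sensitivity disease investigation
# assuming components detect independently, the system misses an infected
# population only if every component misses it:
sse = 1 - (1 - cse_slaughter) * (1 - cse_export) * (1 - cse_investigation)
median_sse = float(np.median(sse))
```

The structure makes the study's finding intuitive: a single high-sensitivity component dominates the product, so the overall SSSe tracks slaughter surveillance almost exactly.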
Review of Concrete Biodeterioration in Relation to Buried Nuclear Waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turick, C; Berry, C.
Long-term storage of low level radioactive material in below ground concrete disposal units (DUs) (Saltstone Disposal Facility) is a means of depositing wastes generated from nuclear operations of the U.S. Department of Energy. Based on the currently modeled degradation mechanisms, possible microbially induced effects on the structural integrity of buried low level wastes must be addressed. Previous international efforts related to microbial impacts on concrete structures that house low level radioactive waste showed that microbial activity can play a significant role in the process of concrete degradation and ultimately structural deterioration. This literature review examines the recent research in this field and is focused on specific parameters that are applicable to modeling and prediction of the fate of concrete vaults housing stored wastes and the wastes themselves. Rates of concrete biodegradation vary with the environmental conditions, illustrating a need to understand the bioavailability of key compounds involved in microbial activity. Specifically, microbial growth requires pH and osmotic pressure to be within a certain range, as well as the availability and abundance of energy sources such as the compounds involved in sulfur, iron and nitrogen oxidation. Carbon flow and availability are also factors to consider in predicting concrete biodegradation. The results of this review suggest that microbial activity in Saltstone (grouted low level radioactive waste) is unlikely due to very high pH and osmotic pressure. Biodegradation of the concrete vaults housing the radioactive waste, however, is a possibility. The rate and degree of concrete biodegradation is dependent on numerous physical, chemical and biological parameters. Results from this review point to parameters to focus on for modeling activities and also to possible options for mitigation that would minimize concrete biodegradation.
In addition, key chemical components that drive microbial activity on concrete surfaces are discussed.
Operation and design selection of high temperature superconducting magnetic bearings
NASA Astrophysics Data System (ADS)
Werfel, F. N.; Floegel-Delor, U.; Riedel, T.; Rothfeld, R.; Wippich, D.; Goebel, B.
2004-10-01
Axial and radial high temperature superconducting (HTS) magnetic bearings are evaluated by their parameters. Journal bearings possess advantages over thrust bearings. High magnetic gradients in a multi-pole permanent magnet (PM) configuration, the surrounding melt-textured YBCO stator and adequate designs are the key features for increasing the overall bearing stiffness. The gap distance between rotor and stator determines the specific forces and has a strong impact on the PM rotor design. We report on the design, construction and measurement of a 200 mm prototype 100 kg HTS bearing with an encapsulated and thermally insulated melt-textured YBCO ring stator. The encapsulation requires a magnetically large-gap (4-5 mm) operation but reduces the cryogenic effort substantially. The bearing requires 3 l of LN2 for cooling down, and about 0.2 l of LN2 per hour under operation. This is a dramatic improvement in the efficiency and practical usability of HTS magnetic bearings.
Use of a genetic algorithm to improve the rail profile on Stockholm underground
NASA Astrophysics Data System (ADS)
Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon
2010-12-01
In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile seems similar to measured rail profiles on the Stockholm underground network and although initial grinding is required, maintenance of the profile will probably not require further grinding.
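Roulette wheel selection, used above to assess each candidate profile, picks individuals with probability proportional to their fitness (here the inverted penalty index). A minimal sketch with hypothetical profile names and fitness values:

```python
import random

def roulette_select(population, fitnesses, rng):
    """Pick one individual with probability proportional to its fitness
    (for this paper, fitness is an inverted penalty index: higher is better)."""
    pick = rng.uniform(0.0, sum(fitnesses))
    running = 0.0
    for individual, fitness in zip(population, fitnesses):
        running += fitness
        if pick <= running:
            return individual
    return population[-1]  # guard against floating-point edge cases

# hypothetical rail-profile candidates and illustrative fitness values
rng = random.Random(0)
profiles = ["BV50", "UIC60", "optimised"]
fitness = [1.0, 1.5, 3.5]
counts = {p: 0 for p in profiles}
for _ in range(6000):
    counts[roulette_select(profiles, fitness, rng)] += 1
```

Over many draws the selection frequencies approach the fitness proportions, so fitter profiles contribute more offspring to the next generation without eliminating weaker ones outright.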
Fast emulation of track reconstruction in the CMS simulation
NASA Astrophysics Data System (ADS)
Komm, Matthias; CMS Collaboration
2017-10-01
Simulated samples of various physics processes are a key ingredient within analyses to unlock the physics behind LHC collision data. Samples with more and more statistics are required to keep up with the increasing amounts of recorded data. During sample generation, significant computing time is spent on the reconstruction of charged particle tracks from energy deposits which additionally scales with the pileup conditions. In CMS, the FastSimulation package is developed for providing a fast alternative to the standard simulation and reconstruction workflow. It employs various techniques to emulate track reconstruction effects in particle collision events. Several analysis groups in CMS are utilizing the package, in particular those requiring many samples to scan the parameter space of physics models (e.g. SUSY) or for the purpose of estimating systematic uncertainties. The strategies for and recent developments in this emulation are presented, including a novel, flexible implementation of tracking emulation while retaining a sufficient, tuneable accuracy.
Mascharak, Shamik; Benitez, Patrick L; Proctor, Amy C; Madl, Christopher M; Hu, Kenneth H; Dewi, Ruby E; Butte, Manish J; Heilshorn, Sarah C
2017-01-01
Native vascular extracellular matrices (vECM) consist of elastic fibers that impart varied topographical properties, yet most in vitro models designed to study the effects of topography on cell behavior are not representative of native architecture. Here, we engineer an electrospun elastin-like protein (ELP) system with independently tunable, vECM-mimetic topography and demonstrate that increasing topographical variation causes loss of endothelial cell-cell junction organization. This loss of VE-cadherin signaling and increased cytoskeletal contractility on more topographically varied ELP substrates in turn promote YAP activation and nuclear translocation, resulting in significantly increased endothelial cell migration and proliferation. Our findings identify YAP as a required signaling factor through which fibrous substrate topography influences cell behavior and highlights topography as a key design parameter for engineered biomaterials. Copyright © 2016 Elsevier Ltd. All rights reserved.
Aerodynamic design of electric and hybrid vehicles: A guidebook
NASA Technical Reports Server (NTRS)
Kurtz, D. W.
1980-01-01
A typical present-day subcompact electric hybrid vehicle (EHV), operating on an SAE J227a D driving cycle, consumes up to 35% of its road energy requirement overcoming aerodynamic resistance. The application of an integrated system design approach, where drag reduction is an important design parameter, can increase the cycle range by more than 15%. This guidebook highlights a logic strategy for including aerodynamic drag reduction in the design of electric and hybrid vehicles to the degree appropriate to the mission requirements. Backup information and procedures are included in order to implement the strategy. Elements of the procedure are based on extensive wind tunnel tests involving generic subscale models and full-scale prototype EHVs. The user need not have any previous aerodynamic background. By necessity, the procedure utilizes many generic approximations and assumptions resulting in various levels of uncertainty. Dealing with these uncertainties, however, is a key feature of the strategy.
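The contribution of aerodynamic drag to the road energy requirement follows from the standard drag equation; a sketch with hypothetical subcompact EHV figures (drag coefficient, frontal area, speed are illustrative, not from the guidebook):

```python
def drag_energy_kwh(cd, frontal_area_m2, speed_mps, distance_m, rho=1.2):
    """Energy spent overcoming aerodynamic drag at constant speed.
    Drag force F = 0.5 * rho * Cd * A * v^2; energy = F * distance (J -> kWh)."""
    drag_force = 0.5 * rho * cd * frontal_area_m2 * speed_mps ** 2  # newtons
    return drag_force * distance_m / 3.6e6

# hypothetical subcompact figures: Cd 0.45 baseline vs 0.40 after drag reduction
base = drag_energy_kwh(cd=0.45, frontal_area_m2=1.8, speed_mps=20.0, distance_m=100e3)
low  = drag_energy_kwh(cd=0.40, frontal_area_m2=1.8, speed_mps=20.0, distance_m=100e3)
```

Because drag energy scales linearly with Cd, a given percentage drag reduction cuts the aerodynamic share of road energy by the same percentage, which is what makes drag reduction an effective lever on cycle range.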
Performance and cost of materials for lithium-based rechargeable automotive batteries
NASA Astrophysics Data System (ADS)
Schmuch, Richard; Wagner, Ralf; Hörpel, Gerhard; Placke, Tobias; Winter, Martin
2018-04-01
It is widely accepted that for electric vehicles to be accepted by consumers and to achieve wide market penetration, ranges of at least 500 km at an affordable cost are required. Therefore, significant improvements to lithium-ion batteries (LIBs) in terms of energy density and cost along the battery value chain are required, while other key performance indicators, such as lifetime, safety, fast-charging ability and low-temperature performance, need to be enhanced or at least sustained. Here, we review advances and challenges in LIB materials for automotive applications, in particular with respect to cost and performance parameters. The production processes of anode and cathode materials are discussed, focusing on material abundance and cost. Advantages and challenges of different types of electrolyte for automotive batteries are examined. Finally, energy densities and costs of promising battery chemistries are critically evaluated along with an assessment of the potential to fulfil the ambitious targets of electric vehicle propulsion.
CIM's bridge from CADD to CAM: Data management requirements for manufacturing engineering
NASA Technical Reports Server (NTRS)
Ford, S. J.
1984-01-01
Manufacturing engineering represents the crossroads of technical data management in a Computer Integrated Manufacturing (CIM) environment. Process planning, numerical control programming and tool design are the key functions which translate information from "as engineered" to "as assembled". In order to transition data from engineering to manufacturing, it is necessary to introduce a series of product interpretations which contain an interim introduction of technical parameters. The current automation of the product definition and the production process places manufacturing engineering in the center of CAD/CAM, with the responsibility of communicating design data to the factory floor via a manufacturing model of the data. A close look at data management requirements for manufacturing engineering is necessary in order to establish the overall specifications for CADD output, CAM input, and CIM integration. The functions and issues associated with the orderly evolution of computer aided engineering and manufacturing are examined.
Comparison of attrition test methods: ASTM standard fluidized bed vs jet cup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, R.; Goodwin, J.G. Jr.; Jothimurugesan, K.
2000-05-01
Attrition resistance is one of the key design parameters for catalysts used in fluidized-bed and slurry phase types of reactors. The ASTM fluidized-bed test has been one of the most commonly used attrition resistance evaluation methods; however, it requires the use of 50 g samples--a large amount for catalyst development studies. Recently a test using the jet cup, requiring only 5 g samples, has been proposed. In the present study, two series of spray-dried iron catalysts were evaluated using both the ASTM fluidized-bed test and a test based on the jet cup to determine their comparability. It is shown that the two tests give comparable results. This paper, by reporting a comparison of the jet-cup test with the ASTM standard, provides a basis for utilizing the more efficient jet cup with confidence in catalyst attrition studies.
Wu, Alex Chi; Morell, Matthew K.; Gilbert, Robert G.
2013-01-01
A core set of genes involved in starch synthesis has been defined by genetic studies, but the complexity of starch biosynthesis has frustrated attempts to elucidate the precise functional roles of the enzymes encoded. The chain-length distribution (CLD) of amylopectin in cereal endosperm is modeled here on the basis that the CLD is produced by concerted actions of three enzyme types: starch synthases, branching and debranching enzymes, including their respective isoforms. The model, together with fitting to experiment, provides four key insights. (1) To generate crystalline starch, defined restrictions on particular ratios of enzymatic activities apply. (2) An independent confirmation of the conclusion, previously reached solely from genetic studies, of the absolute requirement for debranching enzyme in crystalline amylopectin synthesis. (3) The model provides a mechanistic basis for understanding how successive arrays of crystalline lamellae are formed, based on the identification of two independent types of long amylopectin chains, one type remaining in the amorphous lamella, while the other propagates into, and is integral to the formation of, an adjacent crystalline lamella. (4) The model provides a means by which a small number of key parameters defining the core enzymatic activities can be derived from the amylopectin CLD, providing the basis for focusing studies on the enzymatic requirements for generating starches of a particular structure. The modeling approach provides both a new tool to accelerate efforts to understand granular starch biosynthesis and a basis for focusing efforts to manipulate starch structure and functionality using a series of testable predictions based on a robust mechanistic framework. PMID:23762422
Elisa technology consolidation study overview
NASA Astrophysics Data System (ADS)
Fitzsimons, E. D.; Brandt, N.; Johann, U.; Kemble, S.; Schulte, H.-R.; Weise, D.; Ziegler, T.
2017-11-01
The eLISA (evolved Laser Interferometer Space Antenna) mission is an ESA L3 concept mission intended to detect and characterise gravitational radiation emitted from astrophysical sources [1]. Current designs for eLISA [2] are based on the ESA study conducted in 2011 to reformulate the original ESA/NASA LISA concept [3] into an ESA-only L1 candidate named NGO (New Gravitational Observatory) [4]. During this brief reformulation period, a number of significant changes were made to the baseline LISA design in order to create a more cost-effective mission. Some of the key changes implemented during this reformulation were:
• A reduction in the inter-satellite distance (the arm length) from 5 Gm to 1 Gm.
• A reduction in the diameter of the telescope from 40 cm to 20 cm.
• A reduction in the required laser power by approximately 40%.
• Implementation of only 2 laser arms instead of 3.
Many further simplifications were then enabled by these main design changes, including the elimination of payload items in the two spacecraft (S/C) with no laser-link between them (the daughter S/C), a reduction in the size and complexity of the optical bench, and the elimination of the Point Ahead Angle Mechanism (PAAM), which corrects for variations in the pointing direction to the far S/C caused by orbital dynamics [4] [5]. In the run-up to an L3 mission definition phase later in the decade, it is desirable to review these design choices and analyse the inter-dependencies and scaling between the key mission parameters, with the goal of better understanding the parameter space and ensuring that in the final selection of the eLISA mission parameters the optimal balance between cost, complexity and science return can be achieved.
El Toro Library Solar Heating and Cooling Demonstration Project. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report is divided into a number of essentially independent sections, each of which covers a specific topic. The sections, and the topics covered, are as follows. Section 1 provides a brief summary description of the solar energy heating and cooling system including the key final design parameters. Section 2 contains a copy of the final Acceptance Test Report. Section 3 consists of a reduced set of final updated as-built mechanical, electrical, control and instrumentation drawings of the solar energy heating and cooling system. Section 4 provides a summary of system maintenance requirements, in the form of a maintenance schedule which lists necessary maintenance tasks to be performed at monthly, quarterly, semi-annual, and annual intervals. Section 5 contains a series of photographs of the final solar energy system installation, including the collector field and the mechanical equipment room. Section 6 provides a concise summary of system operation and performance for the period of December 1981 through June 1982, as measured, computed and reported by Vitro Laboratories Division of Automation Industries, Inc., for the DOE National Solar Data Network. Section 7 provides a summary of key as-built design parameters, compared with the corresponding original design concept parameters. Section 8 provides a description of a series of significant problems encountered during construction, start-up and check-out of the solar energy heating and cooling system, together with the method employed to solve the problem at the time and/or recommendations for avoiding the problem in the future design of similar systems. Appendices A through H contain the installation, operation and maintenance submittals of the various manufacturers on the major items of equipment in the system. Reference CAPE-2823.
Site Characterization at a Tidal Energy Site in the East River, NY (usa)
NASA Astrophysics Data System (ADS)
Gunawan, B.; Neary, V. S.; Colby, J.
2012-12-01
A comprehensive tidal energy site characterization is performed using ADV measurements of instantaneous horizontal current magnitude and direction at the planned hub centerline of a tidal turbine over a two month period, and contributes to the growing database of tidal energy site hydrodynamic conditions. The temporal variation, mean current statistics, and turbulence of the key tidal hydrodynamic parameters are examined in detail, and compared to estimates from two tidal energy sites in Puget Sound. Tidal hydrodynamic conditions, including mean annual current (at hub height), the speed of extreme gusts (instantaneous horizontal currents acting normal to the rotor plane), and turbulence intensity (as proposed here, relative to a mean current of 2 m s-1) can vary greatly among tidal energy sites. Comparison of hydrodynamic conditions measured in the East River tidal strait in New York City with those reported for two tidal energy sites in Puget Sound indicates differences in mean annual current speeds, differences in the instantaneous current speeds of extreme gusts, and differences in turbulence intensities. Significant differences in these parameters among the tidal energy sites, and with the tidal resource assessment map, highlight the importance of conducting site resource characterization with ADV measurements at the machine scale. As with the wind industry, which adopted an International Electrotechnical Commission (IEC) wind class standard to aid in the selection of wind turbines for a particular site, it is recommended that the tidal energy industry adopt an appropriate standard for tidal current classes.
Such a standard requires a comprehensive field campaign at multiple tidal energy sites that can identify the key hydrodynamic parameters for tidal current site classification, select a list of tidal energy sites that exhibit the range of hydrodynamic conditions that will be encountered, and adopt consistent measurement practices (standards) for site classification.
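The turbulence intensity convention proposed above, referenced to a fixed 2 m/s mean current rather than to the local mean speed, can be sketched directly; the synthetic velocity record below is illustrative, not ADV data from the study:

```python
import numpy as np

def turbulence_intensity(u, u_ref=2.0):
    """Turbulence intensity as the standard deviation of the current speed
    relative to a fixed 2 m/s reference current (the convention proposed
    in the abstract), rather than relative to the local mean speed."""
    return float(np.std(u)) / u_ref

# synthetic velocity record: 2 m/s mean with 0.2 m/s fluctuations
rng = np.random.default_rng(5)
u = rng.normal(2.0, 0.2, 100_000)
ti = turbulence_intensity(u)
```

Fixing the reference speed makes turbulence intensities comparable across sites with different mean currents, which is the point of the proposed convention.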
NASA Astrophysics Data System (ADS)
Corbin, A. E.; Timmermans, J.; Hauser, L.; Bodegom, P. V.; Soudzilovskaia, N. A.
2017-12-01
There is a growing demand for accurate land surface parameterization from remote sensing (RS) observations. This demand has not been satisfied, because most estimation schemes 1) apply a single-sensor single-scale approach, and 2) require specific key variables to be 'guessed'. This is because of the amount of observational information required to accurately retrieve the parameters of interest. Consequently, many schemes assume specific variables to be constant or not present, subsequently leading to more uncertainty. In this context, the MULTIscale SENTINEL land surface information retrieval Platform (MULTIPLY) was created. MULTIPLY couples a variety of RS sources with Radiative Transfer Models (RTM) over varying spectral ranges using data assimilation to estimate geophysical parameters. In addition, MULTIPLY uses prior information about the land surface to constrain the retrieval problem. This research aims to improve the retrieval of plant biophysical parameters through the use of priors of biophysical parameters/plant traits. Of particular interest are traits (physical, morphological or chemical) affecting individual performance and fitness of species. Plant traits that can be retrieved via RS and with RTMs include leaf pigments, leaf water, LAI, phenols, C/N, etc. In-situ data for plant traits that are retrievable via RS techniques were collected for a meta-analysis from databases such as TRY, Ecosis, and individual collaborators. Of particular interest are the following traits: chlorophyll, carotenoids, anthocyanins, phenols, leaf water, and LAI. ANOVA statistics were generated for each trait according to species, plant functional group (such as evergreens, grasses, etc.), and the trait itself. Afterwards, traits were also compared using covariance matrices. Using these as priors, MULTIPLY was used to retrieve several plant traits at two validation sites in the Netherlands (Speulderbos) and in Finland (Sodankylä).
Initial comparisons show significantly improved results over non-a-priori-based retrievals.
Computational design analysis for deployment of cardiovascular stents
NASA Astrophysics Data System (ADS)
Tammareddi, Sriram; Sun, Guangyong; Li, Qing
2010-06-01
Cardiovascular disease has become a major global healthcare problem. As one of the relatively new medical devices, stents offer a minimally-invasive surgical strategy to improve the quality of life for numerous cardiovascular disease patients. One of the key associated issues has been to understand the effect of stent structure on deployment behaviour. This paper aims to develop a computational model for exploring the biomechanical responses to changes in stent geometrical parameters, namely the strut thickness and cross-link width of the Palmaz-Schatz stent. Explicit 3D dynamic finite element analysis was carried out to explore the sensitivity of these geometrical parameters on deployment performance, such as dog-boning, fore-shortening, and stent deformation over the load cycle. It has been found that an increase in stent thickness causes a sizeable rise in the load required to deform the stent to its target diameter, whilst reducing maximum dog-boning in the stent. An increase in the cross-link width left the load required to deform the stent to its target diameter unchanged and showed no apparent correlation with dog-boning, but produced increased fore-shortening. The computational modelling and analysis presented herein proves to be an effective way to refine or optimise the design of stent structures.
System implications of aperture-shade design for the SIRTF Observatory
NASA Technical Reports Server (NTRS)
Lee, J. H.; Brooks, W. F.; Maa, S.
1987-01-01
The 1-m-aperture Space Infrared Telescope Facility (SIRTF) will operate with a sensitivity limited only by the zodiacal background. This sensitivity requirement places severe restrictions on the amount of stray light which can reach the focal plane from off-axis sources such as the sun or earth limb. In addition, radiation from these sources can degrade the lifetime of the telescope and instrument cryogenic system, which is now planned for two years before the first servicing. Since the aperture of the telescope represents a break in the telescope insulation system and is effectively the first element in the optical train, the aperture shade is a key system component. The mass, length, and temperature of the shade should be minimized to reduce system cost while maximizing the telescope lifetime and stray light performance. The independent geometric parameters that characterize an asymmetrical shade for a 600 km, 28 deg orbit were identified, and the system sensitivity to the three most important shade parameters was explored. Despite the higher heat loads compared to previously studied polar orbit missions, the analysis determined that passive radiators of a reasonable size are sufficient to meet the system requirements. An optimized design for the SIRTF mission, based on the sensitivity analysis, is proposed.
Digitization of medical documents: an X-Windows application for fast scanning.
Muñoz, A; Salvador, C H; Gonzalez, M A; Dueñas, A
1992-01-01
This paper deals with digitization, using a commercial scanner, of medical documents as still images for introduction into a computer-based Information System. Document management involves storing, editing and transmission. This task has usually been approached from the perspective of the difficulties posed by radiologic images because of their indisputable qualitative and quantitative significance. However, healthcare activities require the management of many other types of documents and involve the requirements of numerous users. One key to document management will be the availability of a digitizer to deal with the greatest possible number of different types of documents. This paper describes the relevant aspects of documents and the technical specifications that digitizers must fulfill. The concept of document type is introduced as the ideal set of digitizing parameters for a given document. The use of document type parameters can drastically reduce the time the user spends in scanning sessions. Presentation is made of an application based on Unix, X-Windows and OSF/Motif, with a GPIB interface, implemented around the document type concept. Finally, the results of the evaluation of the application are presented, focusing on the user interface, as well as on the viewing of color images in an X-Windows environment and the use of lossy algorithms in the compression of medical images.
Thales SESO's hollow and massive corner cube solutions
NASA Astrophysics Data System (ADS)
Fappani, Denis; Dahan, Déborah; Costes, Vincent; Luitot, Clément
2017-11-01
For space activities, more and more corner cubes, used as a solution for retro-reflection of light (telemetry and positioning), are emerging worldwide in different projects. Depending on the application, they can be massive or hollow corner cubes. For corner cubes, as for any kind of space optics, the use of lightweight or lightened components is always a baseline for the purpose of payload mass reduction. But other parameters, such as system stability under severe environments, are also major issues, especially for corner cube systems, which generally require very tight angular accuracies. For the particular case of the hollow corner cube, an alternative solution to the usual cementing of the three reflective surfaces has been developed with success in collaboration with CNES to guarantee better stability and fulfill the weight requirements. Another important parameter is the dihedral angles, which have a great influence on the wavefront error. Two technologies can be considered: either a corner cube array assembled in a very stable housing, or the irreversible adherence technology used for assembling the three parts of a cube. This latter technology notably avoids the use of cement. The poster will point out the conceptual design, manufacturing, and control key aspects of such corner cube assemblies, as well as the technologies used for their assembly.
Stochastic Analysis of Orbital Lifetimes of Spacecraft
NASA Technical Reports Server (NTRS)
Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David
2008-01-01
A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC)--a Fortran computer program, consisting of a previously developed long-term orbit-propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for the number of user-specified cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are primary sources of variations in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.
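OLMC itself is a Fortran orbit propagator wrapped in a Monte Carlo engine, but the sampling-and-histogram structure it describes can be sketched briefly. Everything numeric below is invented for illustration: the toy exponential-atmosphere decay model, the insertion-altitude spread, and the flux distribution are assumptions, not OLMC's physics.

```python
import random

def orbital_lifetime(h0, flux, min_alt=200.0, dt=1.0, max_t=10000.0):
    """Toy decay model (illustrative only): altitude loss per step
    scales with solar-flux level and a crude exponential density proxy."""
    h, t = h0, 0.0
    while h > min_alt and t < max_t:
        density = 1e3 * 2.718281828 ** (-h / 60.0)
        h -= dt * flux * density
        t += dt
    return t

def monte_carlo_lifetimes(n_runs, seed=42):
    """Sample insertion altitude and solar-flux level from assumed
    distributions and collect predicted lifetimes for histogramming."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_runs):
        h0 = rng.gauss(400.0, 10.0)    # insertion altitude [km], assumed spread
        flux = rng.uniform(0.5, 1.5)   # normalized solar-flux level, assumed
        out.append(orbital_lifetime(h0, flux))
    return out

lifetimes = monte_carlo_lifetimes(1000)
# Fraction of cases meeting a hypothetical minimum-lifetime requirement:
p_meets_req = sum(t >= 100.0 for t in lifetimes) / len(lifetimes)
```

A histogram of `lifetimes` is what the document uses to read off the probability that a spacecraft satisfies its lifetime requirement.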
Wang, Xuan; Tandeo, Pierre; Fablet, Ronan; Husson, Romain; Guan, Lei; Chen, Ge
2016-01-01
The swell propagation model built on geometric optics is known to work well when simulating swells radiated from a far-away storm. Based on this simple approximation, satellites have acquired plenty of large samples of basin-traversing swells induced by fierce storms situated in mid-latitudes. How to routinely reconstruct swell fields from these irregularly sampled observations from space via the known swell propagation principle requires more examination. In this study, we apply 3-h interval pseudo SAR observations in the ensemble Kalman filter (EnKF) to reconstruct a swell field in an ocean basin, and compare it with buoy swell partitions and polynomial regression results. As validated against in situ measurements, EnKF works well in terms of spatial-temporal consistency in far-field swell propagation scenarios. Using this framework, we further address the influence of EnKF parameters, and perform a sensitivity analysis to evaluate estimations made under different sets of parameters. Such analysis is of key interest with respect to future multiple-source routinely recorded swell field data. Satellite-derived swell data can serve as a valuable complement to in situ or wave re-analysis datasets. PMID:27898005
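The EnKF analysis step at the heart of such a reconstruction can be sketched for a scalar state standing in for one grid point of the swell field. The "truth", prior bias, ensemble size, and noise levels below are all assumed for illustration; the perturbed-observation update is the standard textbook form, not the study's exact configuration.

```python
import random

def enkf_update(ensemble, obs, obs_std, rng):
    """One EnKF analysis step for a scalar state: the Kalman gain is
    built from the ensemble variance, and each member assimilates an
    independently perturbed copy of the observation."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_std ** 2)
    return [x + gain * (obs + rng.gauss(0.0, obs_std) - x) for x in ensemble]

rng = random.Random(0)
truth = 3.0                                           # "true" swell height [m], assumed
ensemble = [rng.gauss(1.0, 1.0) for _ in range(100)]  # biased prior ensemble
for _ in range(5):                                    # five 3-h observation cycles
    obs = truth + rng.gauss(0.0, 0.2)                 # noisy pseudo observation
    ensemble = enkf_update(ensemble, obs, 0.2, rng)
analysis_mean = sum(ensemble) / len(ensemble)
```

After a few cycles the analysis mean tracks the truth despite the biased prior, which is the behavior the study validates against buoy partitions.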
Automatic approach to deriving fuzzy slope positions
NASA Astrophysics Data System (ADS)
Zhu, Liang-Jun; Zhu, A.-Xing; Qin, Cheng-Zhi; Liu, Jun-Zhi
2018-03-01
Fuzzy characterization of slope positions is important for geographic modeling. Most of the existing fuzzy classification-based methods for fuzzy characterization require extensive user intervention in data preparation and parameter setting, which is tedious and time-consuming. This paper presents an automatic approach to overcoming these limitations in the prototype-based inference method for deriving fuzzy membership value (or similarity) to slope positions. The key contribution is a procedure for finding the typical locations and setting the fuzzy inference parameters for each slope position type. Instead of being determined totally by users in the prototype-based inference method, in the proposed approach the typical locations and fuzzy inference parameters for each slope position type are automatically determined by a rule set based on prior domain knowledge and the frequency distributions of topographic attributes. Furthermore, the preparation of topographic attributes (e.g., slope gradient, curvature, and relative position index) is automated, so the proposed automatic approach has only one necessary input, i.e., the gridded digital elevation model of the study area. All compute-intensive algorithms in the proposed approach were speeded up by parallel computing. Two study cases were provided to demonstrate that this approach can properly, conveniently and quickly derive the fuzzy slope positions.
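The prototype-based inference named above computes a similarity between a location's topographic attributes and a typical location for each slope-position type. A minimal sketch of that idea, using Gaussian per-attribute memberships combined by a minimum operator: the prototype values, attribute widths, and the "ridge" example are all hypothetical, not values from the paper's rule set.

```python
import math

def fuzzy_membership(attrs, prototype, widths):
    """Similarity of a location to a slope-position prototype as the
    minimum of per-attribute Gaussian memberships (one common choice
    in prototype-based fuzzy inference)."""
    sims = []
    for name, value in attrs.items():
        w = widths[name]
        sims.append(math.exp(-((value - prototype[name]) ** 2) / (2 * w * w)))
    return min(sims)

# Hypothetical "ridge" prototype over slope gradient, curvature, and a
# relative position index (rpi); all numbers are illustrative.
ridge_proto = {"slope": 5.0, "curvature": 0.8, "rpi": 1.0}
widths = {"slope": 10.0, "curvature": 0.5, "rpi": 0.3}
location = {"slope": 8.0, "curvature": 0.6, "rpi": 0.9}
mu = fuzzy_membership(location, ridge_proto, widths)
```

The automation contribution of the paper is precisely that `ridge_proto` and `widths` are derived from attribute frequency distributions rather than supplied by the user.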
Model predictions of ocular injury from 1315-nm laser light
NASA Astrophysics Data System (ADS)
Polhamus, Garrett D.; Zuclich, Joseph A.; Cain, Clarence P.; Thomas, Robert J.; Foltz, Michael
2003-06-01
With the advent of future weapons systems that employ high-energy lasers, the 1315 nm wavelength will present a new laser safety hazard to the armed forces. Experiments in non-human primates using this wavelength have demonstrated a range of ocular injuries, including corneal, lenticular and retinal lesions, as a function of pulse duration and spot size at the cornea. To improve our understanding of these phenomena, there is a need for a mathematical model that properly predicts these injuries and their dependence on the appropriate exposure parameters. This paper describes the use of a finite difference model of laser thermal injury in the cornea and retina. The model was originally developed for use with shorter-wavelength laser irradiation, and as such requires estimation of several key parameters used in the computations. The predictions from the model are compared to the experimental data, and conclusions are drawn regarding the ability of the model to properly follow the published observations at this wavelength.
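The core of such a finite difference thermal model is explicit diffusion with a laser deposition term. The sketch below is a 1-D toy with a Beer-Lambert source and fixed-temperature boundaries; the absorption coefficient, irradiance, and tissue properties are generic water-like values chosen for illustration, not the paper's fitted 1315-nm parameters, and no damage-rate integral is included.

```python
import math

def simulate_heating(n=50, dx=1e-5, dt=1e-4, steps=2000,
                     alpha=1.4e-7, mu_a=2e4, irradiance=1e5, rho_c=4.2e6):
    """Explicit 1-D finite-difference heat conduction with a Beer-Lambert
    laser source; boundaries held at body temperature. All tissue and
    beam values are illustrative assumptions."""
    assert alpha * dt / dx ** 2 < 0.5          # explicit-scheme stability limit
    # Volumetric heat source [W/m^3] decaying with depth into the tissue:
    src = [irradiance * mu_a * math.exp(-mu_a * i * dx) for i in range(n)]
    T = [37.0] * n                             # initial body temperature [C]
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = (T[i]
                     + alpha * dt / dx ** 2 * (T[i + 1] - 2 * T[i] + T[i - 1])
                     + dt * src[i] / rho_c)
        T = Tn
    return T

T = simulate_heating()
peak = max(T)                                  # hottest point after 0.2 s
```

A full injury model would add an Arrhenius-type damage integral over the temperature history at each node; only the conduction skeleton is shown here.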
Comparing modelling techniques when designing VPH gratings for BigBOSS
NASA Astrophysics Data System (ADS)
Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James
2012-09-01
BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Redshift-Space Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at 0.5<=z<=1.6 in addition to several hundred thousand QSOs at 0.5<=z<=3.5. When designing BigBOSS instrumentation, it is imperative to maximize throughput whilst maintaining a resolving power of between R=1500 and 4000 over a wavelength range of 360-980 nm. Volume Phase Holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which each is capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.
NASA Technical Reports Server (NTRS)
Selcuk, M. K.; Fujita, T.
1984-01-01
A simple graphical method was developed to undertake technical design trade-off studies for individual parabolic dish modules comprising a two-axis tracking parabolic dish with a cavity receiver and power conversion assembly at the focal point. The results of these technical studies are then used in performing the techno-economic analyses required for determining appropriate subsystem sizing. Selected graphs that characterize the performance of subsystems within the module were arranged in the form of a nomogram that enables an investigator to carry out several design trade-off studies. Key performance parameters encompassed in the nomogram include receiver losses, intercept factor, engine rating, and engine efficiency. Design and operation parameters, such as concentrator size, receiver type (open or windowed aperture), receiver aperture size, operating temperature of the receiver and engine, engine partial-load characteristics, concentrator slope error, and the type of reflector surface, are also included in the graphical solution. Cost considerations are not included.
Coen, Enrico; Rolland-Lagan, Anne-Gaëlle; Matthews, Mark; Bangham, J. Andrew; Prusinkiewicz, Przemyslaw
2004-01-01
Although much progress has been made in understanding how gene expression patterns are established during development, much less is known about how these patterns are related to the growth of biological shapes. Here we describe conceptual and experimental approaches to bridging this gap, with particular reference to plant development where lack of cell movement simplifies matters. Growth and shape change in plants can be fully described with four types of regional parameter: growth rate, anisotropy, direction, and rotation. A key requirement is to understand how these parameters both influence and respond to the action of genes. This can be addressed by using mechanistic models that capture interactions among three components: regional identities, regionalizing morphogens, and polarizing morphogens. By incorporating these interactions within a growing framework, it is possible to generate shape changes and associated gene expression patterns according to particular hypotheses. The results can be compared with experimental observations of growth of normal and mutant forms, allowing further hypotheses and experiments to be formulated. We illustrate these principles with a study of snapdragon petal growth. PMID:14960734
Optimisation study of a vehicle bumper subsystem with fuzzy parameters
NASA Astrophysics Data System (ADS)
Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.
2012-10-01
This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. The automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding the environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, this way facilitating the automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system level of failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).
Kokko, Marika; Epple, Stefanie; Gescher, Johannes; Kerzenmacher, Sven
2018-06-01
Over the last decade, there has been an ever-growing interest in bioelectrochemical systems (BES) as a sustainable technology enabling simultaneous wastewater treatment and biological production of, e.g. electricity, hydrogen, and further commodities. A key component of any BES degrading organic matter is the anode where electric current is biologically generated from the oxidation of organic compounds. The performance of BES depends on the interactions of the anodic microbial communities. To optimize the operational parameters and process design of BES a better comprehension of the microbial community dynamics and interactions at the anode is required. This paper reviews the abundance of different microorganisms in anodic biofilms and discusses their roles and possible side reactions with respect to their implications on the performance of BES utilizing wastewaters. The most important operational parameters affecting anodic microbial communities grown with wastewaters are highlighted and guidelines for controlling the composition of microbial communities are given. Copyright © 2018 Elsevier Ltd. All rights reserved.
Direct approach for bioprocess optimization in a continuous flat-bed photobioreactor system.
Kwon, Jong-Hee; Rögner, Matthias; Rexroth, Sascha
2012-11-30
Application of photosynthetic micro-organisms, such as cyanobacteria and green algae, for carbon-neutral energy production raises the need for cost-efficient photobiological processes. Optimization of these processes requires permanent control of many independent and mutually dependent parameters, for which a continuous cultivation approach has significant advantages. As central factors like the cell density can be kept constant by turbidostatic control, light intensity and iron content, both of which have a strong impact on productivity, can be optimized. Both are key parameters due to their strong influence on photosynthetic activity. Here we introduce an engineered low-cost 5 L flat-plate photobioreactor in combination with a simple and efficient optimization procedure for continuous photo-cultivation of microalgae. Based on direct determination of the growth rate at constant cell densities and the continuous measurement of O₂ evolution, stress conditions and their effect on the photosynthetic productivity can be directly observed. Copyright © 2012 Elsevier B.V. All rights reserved.
Economics of Future Growth in Photovoltaics Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basore, Paul A.; Chung, Donald; Buonassisi, Tonio
2015-06-14
The past decade's record of growth in the photovoltaics manufacturing industry indicates that global investment in manufacturing capacity for photovoltaic modules tends to increase in proportion to the size of the industry. The slope of this proportionality determines how fast the industry will grow in the future. Two key parameters determine this slope. One is the annual global investment in manufacturing capacity normalized to the manufacturing capacity for the previous year (capacity-normalized capital investment rate, CapIR, units $/W). The other is how much capital investment is required for each watt of annual manufacturing capacity, normalized to the service life of the assets (capacity-normalized capital demand rate, CapDR, units $/W). If these two parameters remain unchanged from the values they have held for the past few years, global manufacturing capacity will peak in the next few years and then decline. However, it only takes a small improvement in CapIR to ensure future growth in photovoltaics. Any accompanying improvement in CapDR will accelerate that growth.
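One simple reading of the two rates can be sketched as a year-by-year capacity model: investment CapIR·C buys new capacity at a capital cost of CapDR·life per watt, while 1/life of the installed fleet retires each year, which puts break-even at CapIR = CapDR. This model structure, the break-even reading, and all numbers below are assumptions of the sketch, not figures from the paper.

```python
def project_capacity(c0, capir, capdr, life=10, years=15):
    """Year-by-year manufacturing-capacity projection under constant
    CapIR and CapDR [$/W]; service life and starting capacity assumed."""
    capex_per_watt = capdr * life        # un-normalized capital cost [$/W]
    caps = [c0]
    for _ in range(years):
        c = caps[-1]
        added = capir * c / capex_per_watt   # capacity bought this year
        retired = c / life                   # assets reaching end of life
        caps.append(c + added - retired)
    return caps

growth = project_capacity(60.0, capir=0.12, capdr=0.10)   # CapIR > CapDR: grows
decline = project_capacity(60.0, capir=0.08, capdr=0.10)  # CapIR < CapDR: shrinks
```

In this toy model the annual growth factor is 1 + CapIR/(CapDR·life) - 1/life, so a small CapIR improvement above CapDR flips decline into growth, matching the paper's qualitative conclusion.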
Khan, Faisal Nadeem; Zhong, Kangping; Zhou, Xian; Al-Arashi, Waled Hussein; Yu, Changyuan; Lu, Chao; Lau, Alan Pak Tao
2017-07-24
We experimentally demonstrate the use of deep neural networks (DNNs) in combination with signals' amplitude histograms (AHs) for simultaneous optical signal-to-noise ratio (OSNR) monitoring and modulation format identification (MFI) in digital coherent receivers. The proposed technique automatically extracts OSNR and modulation format dependent features of AHs, obtained after constant modulus algorithm (CMA) equalization, and exploits them for the joint estimation of these parameters. Experimental results for 112 Gbps polarization-multiplexed (PM) quadrature phase-shift keying (QPSK), 112 Gbps PM 16 quadrature amplitude modulation (16-QAM), and 240 Gbps PM 64-QAM signals demonstrate OSNR monitoring with mean estimation errors of 1.2 dB, 0.4 dB, and 1 dB, respectively. Similarly, the results for MFI show 100% identification accuracy for all three modulation formats. The proposed technique applies deep machine learning algorithms inside standard digital coherent receiver and does not require any additional hardware. Therefore, it is attractive for cost-effective multi-parameter estimation in next-generation elastic optical networks (EONs).
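The amplitude-histogram feature extraction that feeds the DNN can be sketched directly; the bin count, the toy constant-modulus samples, and the omission of the CMA equalization stage are all simplifications of this sketch, not the paper's settings.

```python
def amplitude_histogram(samples, n_bins=64):
    """Normalized amplitude histogram of complex signal samples: the
    kind of feature vector a DNN would consume in this scheme."""
    mags = [abs(s) for s in samples]
    top = max(mags)
    hist = [0] * n_bins
    for m in mags:
        hist[min(int(m / top * n_bins), n_bins - 1)] += 1
    total = float(len(mags))
    return [h / total for h in hist]

# Toy QPSK-like samples: constant modulus after CMA-style equalization,
# so the histogram concentrates in one bin. 16-QAM or 64-QAM would
# spread over several amplitude rings, and noise broadens each ring,
# which is what lets a DNN infer both modulation format and OSNR.
qpsk = [complex(1, 1), complex(1, -1), complex(-1, 1), complex(-1, -1)] * 100
features = amplitude_histogram(qpsk)
```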
Methane storage in nanoporous material at supercritical temperature over a wide range of pressures
Wu, Keliu; Chen, Zhangxin; Li, Xiangfang; Dong, Xiaohu
2016-01-01
The methane storage behavior in nanoporous material is significantly different from that of a bulk phase, and has a fundamental role in methane extraction from shale and its storage for vehicular applications. Here we show that the behavior and mechanisms of methane storage are mainly dominated by the ratio of the interaction between methane molecules and nanopore walls to the methane intermolecular interaction, and by a geometric constraint. By linking the macroscopic properties of methane storage to the microscopic properties of a system of methane molecules and nanopore walls, we develop an equation of state for methane at supercritical temperature over a wide range of pressures. Molecular dynamics simulation data demonstrate that this equation relates the methane storage behavior very well to each of the key physical parameters, including pore size and shape and wall chemistry and roughness. Moreover, this equation requires only one fitted parameter, and is simple, reliable and powerful in application. PMID:27628747
Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection
NASA Astrophysics Data System (ADS)
Brasche, L. J. H.; Lopez, R.; Eisenmann, D.
2006-03-01
Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multiple-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d), usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness, recording of the UVA image, and in some cases formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2017-04-01
Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practices. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameters screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero indices may be obtained for the indices of non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered as non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations.
Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. Therefore, it can be denoted that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users to define a screening-threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
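The dummy-parameter screening rule can be illustrated with any numerical sensitivity estimator. The sketch below uses a crude binned Var(E[y|x])/Var(y) estimator as a stand-in for the Sobol'/PAWN estimators of the study; the toy model, sample size, and bin count are all assumptions.

```python
import random

def first_order_index(xs, ys, n_bins=20):
    """Crude first-order sensitivity estimate Var(E[y|x])/Var(y), with
    the conditional expectation approximated by binning sorted x."""
    n = len(ys)
    mean_y = sum(ys) / n
    var_y = sum((y - mean_y) ** 2 for y in ys) / n
    pairs = sorted(zip(xs, ys))
    size = n // n_bins
    cond = []
    for b in range(n_bins):
        chunk = [y for _, y in pairs[b * size:(b + 1) * size]]
        cond.append(sum(chunk) / len(chunk))
    var_cond = sum((m - mean_y) ** 2 for m in cond) / n_bins
    return var_cond / var_y

rng = random.Random(1)
n = 20000
x1 = [rng.uniform(0, 1) for _ in range(n)]        # influential parameter
x2 = [rng.uniform(0, 1) for _ in range(n)]        # weakly influential
dummy = [rng.uniform(0, 1) for _ in range(n)]     # no influence on y at all
y = [a ** 2 + 0.05 * b for a, b in zip(x1, x2)]   # toy model output

s1 = first_order_index(x1, y)
s_dummy = first_order_index(dummy, y)             # numerical-error floor
influential = s1 > s_dummy                        # the screening rule
```

The dummy's non-zero index quantifies the estimator's numerical noise, so any parameter whose index clears it is classified as influential, exactly as the abstract describes.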
NASA Astrophysics Data System (ADS)
Ohtsuka, N.; Shindo, Y.; Makita, A.
2010-06-01
An instrumented Charpy test was conducted on small-sized specimens of 2¼Cr-1Mo steel. In the test, the single-specimen key curve method was applied to determine the value of fracture toughness for the initiation of crack extension when hydrogen-free, KIC, and for hydrogen embrittlement cracking, KIH. The tearing modulus, as a parameter for resistance to crack extension, was also determined. The role of these parameters was discussed at an upper-shelf temperature and at a transition temperature. The key curve method combined with the instrumented Charpy test was thus shown to be applicable to the evaluation of not only temper embrittlement but also hydrogen embrittlement.
Device-independent secret-key-rate analysis for quantum repeaters
NASA Astrophysics Data System (ADS)
Holz, Timo; Kampermann, Hermann; Bruß, Dagmar
2018-01-01
The device-independent approach to quantum key distribution (QKD) aims to establish a secret key between two or more parties with untrusted devices, potentially under full control of a quantum adversary. The performance of a QKD protocol can be quantified by the secret key rate, which can be lower bounded via the violation of an appropriate Bell inequality in a setup with untrusted devices. We study secret key rates in the device-independent scenario for different quantum repeater setups and compare them to their device-dependent analog. The quantum repeater setups under consideration are the original protocol by Briegel et al. [Phys. Rev. Lett. 81, 5932 (1998), 10.1103/PhysRevLett.81.5932] and the hybrid quantum repeater protocol by van Loock et al. [Phys. Rev. Lett. 96, 240501 (2006), 10.1103/PhysRevLett.96.240501]. For a given repeater scheme and a given QKD protocol, the secret key rate depends on a variety of parameters, such as the gate quality or the detector efficiency. We systematically analyze the impact of these parameters and suggest optimized strategies.
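A standard way to lower bound a device-independent key rate from a Bell violation is the collective-attack bound of Acín et al. (2007), where the CHSH value S and the quantum bit error rate Q absorb imperfections such as gate quality and detector efficiency. The sketch below implements that published bound; treating S and Q as the only inputs is a simplification relative to the repeater analysis in the paper.

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def di_key_rate(S, Q):
    """Lower bound on the device-independent secret key rate against
    collective attacks: r >= 1 - h((1 + sqrt((S/2)^2 - 1))/2) - h(Q)."""
    if S <= 2.0:
        return 0.0            # no Bell violation, no DI security guarantee
    eve_info = h((1 + math.sqrt((S / 2) ** 2 - 1)) / 2)
    return max(0.0, 1.0 - eve_info - h(Q))

perfect = di_key_rate(2 * math.sqrt(2), 0.0)   # ideal singlet measurements
noisy = di_key_rate(2.6, 0.05)                 # degraded gates/detectors
```

Imperfect repeater components lower S toward the classical bound of 2 and raise Q, and the rate degrades accordingly, which is the trade-off the parameter study in the abstract maps out.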
Security of Color Image Data Designed by Public-Key Cryptosystem Associated with 2D-DWT
NASA Astrophysics Data System (ADS)
Mishra, D. C.; Sharma, R. K.; Kumar, Manish; Kumar, Kuldeep
2014-08-01
In present times the security of image data is a major issue. We have therefore proposed a novel technique for the security of color image data using a public-key (asymmetric) cryptosystem. In this technique, we secure color image data using the RSA (Rivest-Shamir-Adleman) cryptosystem combined with the two-dimensional discrete wavelet transform (2D-DWT). Earlier schemes for the security of color images were designed on the basis of keys alone, but this approach provides security of color images with the help of both keys and the correct arrangement of RSA parameters. If an attacker knows the exact keys but has no information about the exact arrangement of the RSA parameters, the original information cannot be recovered from the encrypted data. Computer simulations based on a standard example critically examine the behavior of the proposed technique. A security analysis and a detailed comparison between earlier schemes for the security of color images and the proposed technique are also presented to establish the robustness of the cryptosystem.
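The transform-then-encrypt pipeline can be illustrated end to end on one image row. This is a deliberately simplified stand-in for the paper's scheme: a one-level integer Haar transform replaces the full 2D-DWT, and the RSA key pair uses textbook tiny primes with no padding, which is insecure and for illustration only.

```python
def haar_1d(pixels):
    """One level of an integer Haar transform (pairwise sums and
    differences), standing in for the 2D-DWT stage."""
    approx = [pixels[i] + pixels[i + 1] for i in range(0, len(pixels), 2)]
    detail = [pixels[i] - pixels[i + 1] for i in range(0, len(pixels), 2)]
    return approx, detail

# Toy RSA key pair: n = 61 * 53 = 3233, e = 17, d = 2753
# (17 * 2753 = 46801 = 1 mod 3120, where 3120 = (61-1)*(53-1)).
n, e, d = 3233, 17, 2753
encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

row = [52, 55, 61, 59, 79, 61, 76, 61]            # one image row (0-255)
approx, detail = haar_1d(row)
cipher = [encrypt(m % n) for m in approx + detail]  # negatives map into Z_n
plain = [decrypt(c) for c in cipher]
# Map residues back to signed coefficients (valid while |coef| < n/2):
recovered = [p if p <= n // 2 else p - n for p in plain]
```

Decryption recovers the wavelet coefficients exactly, after which the inverse Haar step would reconstruct the pixel row.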
Numerical Modeling of Ophthalmic Response to Space
NASA Technical Reports Server (NTRS)
Nelson, E. S.; Myers, J. G.; Mulugeta, L.; Vera, J.; Raykin, J.; Feola, A.; Gleason, R.; Samuels, B.; Ethier, C. R.
2015-01-01
To investigate ophthalmic changes in spaceflight, we would like to predict the impact of blood dysregulation and elevated intracranial pressure (ICP) on Intraocular Pressure (IOP). Unlike other physiological systems, there are very few lumped parameter models of the eye. The eye model described here is novel in its inclusion of the human choroid and retrobulbar subarachnoid space (rSAS), which are key elements in investigating the impact of increased ICP and ocular blood volume. Some ingenuity was required in modeling the blood and rSAS compartments due to the lack of quantitative data on essential hydrodynamic quantities, such as net choroidal volume and blood flowrate, inlet and exit pressures, and material properties, such as compliances between compartments.
Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.
Ze Wang; Chi Man Wong; Feng Wan
2017-07-01
An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm in both cases: against the native PT and against the PT preceded by a denoising stage.
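The energy-domain detection idea can be sketched with the classic Pan-Tompkins-style pipeline that the abstract uses as its baseline (not the AFD itself): differentiate, square, moving-window integrate, then threshold with a refractory period. The window length, threshold fraction, and synthetic spike train below are illustrative choices.

```python
def detect_r_peaks(sig, fs=250, win=0.12, refractory=0.2):
    """Pan-Tompkins-style R-peak picker: derivative emphasizes the
    steep QRS slopes, squaring rectifies, moving-window integration
    smooths, and a threshold with a refractory period picks peaks."""
    diff = [sig[i + 1] - sig[i] for i in range(len(sig) - 1)]
    energy = [d * d for d in diff]
    w = max(1, int(win * fs))
    integ = [sum(energy[max(0, i - w):i + 1]) / w for i in range(len(energy))]
    thresh = 0.5 * max(integ)
    peaks, last = [], -10 ** 9
    for i, v in enumerate(integ):
        if v > thresh and (i - last) > refractory * fs:
            peaks.append(i)
            last = i
    return peaks

# Synthetic ECG-like trace: flat baseline with an idealized R wave
# (unit spike) once per second at 250 Hz sampling.
fs = 250
sig = [0.0] * (4 * fs)
for beat in (fs, 2 * fs, 3 * fs):
    sig[beat] = 1.0
peaks = detect_r_peaks(sig, fs)
```

The AFD method replaces the fixed derivative-and-integrate front end with adaptive decomposition components, which is what gives it its robustness to noise.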
Uniqueness and characterization theorems for generalized entropies
NASA Astrophysics Data System (ADS)
Enciso, Alberto; Tempesta, Piergiulio
2017-12-01
The requirement that an entropy function be composable is key: it means that the entropy of a compound system can be calculated in terms of the entropy of its independent components. We prove that, under mild regularity assumptions, the only composable generalized entropy in trace form is the Tsallis one-parameter family (which contains Boltzmann-Gibbs as a particular case). This result leads to the use of generalized entropies that are not of trace form, such as Rényi’s entropy, in the study of complex systems. In this direction, we also present a characterization theorem for a large class of composable non-trace-form entropy functions with features akin to those of Rényi’s entropy.
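The composition law that singles out the Tsallis family can be checked numerically. For the one-parameter Tsallis entropy S_q(p) = (1 - Σ_i p_i^q)/(q - 1), independent subsystems compose as S_q(A+B) = S_q(A) + S_q(B) + (1 - q) S_q(A) S_q(B); the distributions below are arbitrary examples.

```python
# Numerical check of the Tsallis composition law for independent systems.
def tsallis(p, q):
    return (1 - sum(pi ** q for pi in p)) / (q - 1)

pA, pB, q = [0.2, 0.8], [0.5, 0.3, 0.2], 1.5
joint = [a * b for a in pA for b in pB]   # independent joint distribution

lhs = tsallis(joint, q)
rhs = (tsallis(pA, q) + tsallis(pB, q)
       + (1 - q) * tsallis(pA, q) * tsallis(pB, q))
print(abs(lhs - rhs) < 1e-12)  # True
```

In the limit q → 1 the cross term vanishes and the law reduces to the additivity of Boltzmann-Gibbs entropy, consistent with Tsallis containing it as a particular case.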
Textile Technologies and Tissue Engineering: A Path Toward Organ Weaving.
Akbari, Mohsen; Tamayol, Ali; Bagherifard, Sara; Serex, Ludovic; Mostafalu, Pooria; Faramarzi, Negar; Mohammadi, Mohammad Hossein; Khademhosseini, Ali
2016-04-06
Textile technologies have recently attracted great attention as potential biofabrication tools for engineering tissue constructs. Using current textile technologies, fibrous structures can be designed and engineered to attain the required properties that are demanded by different tissue engineering applications. Several key parameters such as physiochemical characteristics of fibers, microarchitecture, and mechanical properties of the fabrics play important roles in the effective use of textile technologies in tissue engineering. This review summarizes the current advances in the manufacturing of biofunctional fibers. Different textile methods such as knitting, weaving, and braiding are discussed and their current applications in tissue engineering are highlighted. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Conduction band edge effective mass of La-doped BaSnO{sub 3}
DOE Office of Scientific and Technical Information (OSTI.GOV)
James Allen, S., E-mail: allen@itst.ucsb.edu; Law, Ka-Ming; Raghavan, Santosh
2016-06-20
BaSnO{sub 3} has attracted attention as a promising material for applications requiring wide band gap, high electron mobility semiconductors, and moreover possesses the same perovskite crystal structure as many functional oxides. A key parameter for these applications and for the interpretation of its properties is the conduction band effective mass. We measure the plasma frequency of La-doped BaSnO{sub 3} thin films by glancing incidence, parallel-polarized resonant reflectivity. Using the known optical dielectric constant and measured electron density, the resonant frequency determines the band edge electron mass to be 0.19 ± 0.01. The results allow for testing band structure calculations and transport models.
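The inversion from a measured plasma frequency to a band-edge mass follows the Drude relation ω_p² = n e²/(ε₀ ε_∞ m*). The sketch below uses illustrative values for the carrier density and optical dielectric constant, not the values measured in the paper, and demonstrates the inversion by a round trip from an assumed mass of 0.19 mₑ.

```python
import math

e = 1.602176634e-19      # elementary charge (C)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
m_e = 9.1093837015e-31   # free electron mass (kg)

def effective_mass(n, eps_inf, omega_p):
    """m*/m_e from carrier density n (m^-3), optical dielectric constant
    eps_inf, and plasma angular frequency omega_p (rad/s)."""
    return e ** 2 * n / (eps0 * eps_inf * omega_p ** 2) / m_e

# Round trip with illustrative numbers: assume m* = 0.19 m_e, compute the
# plasma frequency it implies, then invert back.
n, eps_inf = 1e26, 4.0
m_star = 0.19 * m_e
omega_p = math.sqrt(n * e ** 2 / (eps0 * eps_inf * m_star))
print(round(effective_mass(n, eps_inf, omega_p), 2))  # 0.19
```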
Heavy doping effects in high efficiency silicon solar cells
NASA Technical Reports Server (NTRS)
Lindholm, F. A.
1984-01-01
Several of the key parameters describing the heavily doped regions of silicon solar cells are examined. The experimentally determined energy gap narrowing and minority carrier diffusivity and mobility are key factors in the investigation.
Traveltime-based descriptions of transport and mixing in heterogeneous domains
NASA Astrophysics Data System (ADS)
Luo, Jian; Cirpka, Olaf A.
2008-09-01
Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters including mixing-related quantities such as dispersivities and kinetic mass transfer coefficients. In most applications, breakthrough curves (BTCs) of conservative and reactive compounds are measured at only a few locations and spatially explicit models are calibrated by matching these BTCs. A common difficulty in such applications is that the individual BTCs differ too strongly to justify the assumption of spatial homogeneity, whereas the number of observation points is too small to identify the spatial distribution of the decisive parameters. The key objective of the current study is to characterize physical transport by the analysis of conservative tracer BTCs and predict the macroscopic BTCs of compounds that react upon mixing from the interpretation of conservative tracer BTCs and reactive parameters determined in the laboratory. We do this in the framework of traveltime-based transport models which do not require spatially explicit, costly aquifer characterization. By considering BTCs of a conservative tracer measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the traveltime-based framework, the BTC of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct traveltime value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of traveltimes, which also determines the weights associated with each stream tube. 
Key issues in using the traveltime-based framework include the description of mixing mechanisms and the estimation of the traveltime distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach of determining the traveltime distribution, given a BTC integrated over an observation plane and estimated mixing parameters. The latter approach is superior to fitting parametric models in cases wherein the true traveltime distribution exhibits multiple peaks or long tails. It is demonstrated that multiple combinations of mixing parameters and traveltime distributions can fit conservative BTCs and describe the tailing. A reactive transport case of a dual Michaelis-Menten problem demonstrates that the reactive mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated by local BTCs.
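The streamtube picture above can be sketched numerically: the flux-averaged BTC is a weighted sum of advective-dispersive pulses, one per streamtube, where the traveltime distribution (the weights and mean arrivals) carries the spreading and the per-streamtube dispersion carries the mixing. The traveltimes, weights, and Peclet number below are illustrative, not values from the study.

```python
import math

def streamtube_btc(t, tau, Pe=50.0):
    """Advective-dispersive BTC of one streamtube with mean arrival tau:
    an inverse-Gaussian pulse parameterized by a Peclet number Pe."""
    if t <= 0:
        return 0.0
    return (math.sqrt(Pe * tau / (4 * math.pi * t ** 3))
            * math.exp(-Pe * (t - tau) ** 2 / (4 * t * tau)))

taus = [1.0, 2.0, 4.0]       # streamtube traveltimes (spreading)
weights = [0.5, 0.3, 0.2]    # flux weights, summing to 1
dt, T = 0.01, 20.0
times = [i * dt for i in range(1, int(T / dt))]
btc = [sum(w * streamtube_btc(t, tau) for w, tau in zip(weights, taus))
       for t in times]
mass = sum(c * dt for c in btc)   # zeroth temporal moment
print(round(mass, 2))  # ~1.0: the ensemble conserves mass
```

Sharpening each pulse (larger Pe) leaves the traveltime distribution, and hence the spreading, untouched while reducing the mixing each streamtube represents, which is exactly the degeneracy between mixing parameters and traveltime distributions noted above.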
NASA Astrophysics Data System (ADS)
Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato
2017-07-01
An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
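The flavor of such a finite-size estimate can be illustrated with a Hoeffding bound, a simpler (and generally looser) stand-in for the binomial and hypergeometric tail analyses discussed in the abstract: from k observed errors in n sampled bits, one upper-bounds the true error rate up to a chosen failure probability.

```python
import math

def error_rate_upper_bound(k, n, eps=1e-10):
    """Upper bound on the error rate from k errors in n sampled bits,
    valid except with probability eps (Hoeffding's inequality)."""
    return k / n + math.sqrt(math.log(1 / eps) / (2 * n))

# 50 errors in 10,000 sampled bits: observed rate 0.5%, bounded ~3.9%
print(round(error_rate_upper_bound(50, 10000), 4))
```

The gap between the observed 0.5% and the bound shrinks as n grows, which is why the finite-key rate improves with data size; reducing the number of separately estimated parameters, as the paper's Bernoulli-sampling method does, reduces how many such penalties the key rate must absorb.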
A GUI-based Tool for Bridging the Gap between Models and Process-Oriented Studies
NASA Astrophysics Data System (ADS)
Kornfeld, A.; Van der Tol, C.; Berry, J. A.
2014-12-01
Models used for simulation of photosynthesis and transpiration by canopies of terrestrial plants typically have subroutines such as STOMATA.F90, PHOSIB.F90 or BIOCHEM.m that solve for photosynthesis and associated processes. Key parameters such as the Vmax for Rubisco and temperature response parameters are required by these subroutines. These are often taken from the literature or determined by separate analysis of gas exchange experiments. It is useful to note, however, that these subroutines can be extracted and run as standalone models to simulate leaf responses collected in gas exchange experiments. Furthermore, there are excellent non-linear fitting tools that can be used to optimize the parameter values in these models to fit the observations. Ideally the Vmax fit in this way should be the same as that determined by a separate analysis, but it may not be, because of interactions with other kinetic constants and the temperature dependence of these in the full subroutine. We submit that it is more useful to fit the complete model to the calibration experiments rather than as disaggregated constants. We designed a graphical user interface (GUI) based tool that uses gas exchange photosynthesis data to directly estimate model parameters in the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model and, at the same time, allows researchers to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. We have also ported some of this functionality to an Excel spreadsheet, which could be used as a teaching tool to help integrate process-oriented and model-oriented studies.
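Fitting the complete rate expression rather than disaggregated constants can be sketched with a Rubisco-limited Farquhar-type rate A = Vcmax·(Ci - Γ*)/(Ci + Km). The kinetic constants, Ci values, and the grid-search fit below are illustrative assumptions, not SCOPE's actual subroutine or data.

```python
# Direct estimation of Vcmax by fitting the full rate expression to
# synthetic A-Ci "observations" via a simple grid search.
GAMMA, KM = 40.0, 700.0   # assumed kinetic constants (umol/mol)

def a_model(ci, vcmax):
    return vcmax * (ci - GAMMA) / (ci + KM)

ci_obs = [100, 200, 400, 600, 800]
vc_true = 60.0
a_obs = [a_model(ci, vc_true) for ci in ci_obs]   # synthetic data

best = min(
    (sum((a_model(ci, v) - a) ** 2 for ci, a in zip(ci_obs, a_obs)), v)
    for v in [x * 0.1 for x in range(200, 1200)]   # scan 20..120
)[1]
print(round(best, 1))  # 60.0: recovers the generating Vcmax
```

In practice a nonlinear optimizer replaces the grid search, and the point made in the abstract is precisely that the recovered Vcmax depends on the companion constants (here GAMMA and KM) baked into the full expression.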
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
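Null-space Monte Carlo can be shown in miniature: for a toy model with more parameters than observations, steps along the Jacobian's null space leave the calibrated fit unchanged, so they generate calibration-equivalent parameter sets without new inverse runs. Real NSMC builds the null space from an SVD of a large model's Jacobian; this toy's one-dimensional null space is found by inspection.

```python
import random

def model(a, b):
    return 3.0 * a + 3.0 * b        # one observation, two parameters

a0, b0 = 1.0, 2.0                   # a "calibrated" parameter set
y_cal = model(a0, b0)

# Jacobian J = [3, 3]; its null space is spanned by (1, -1)
random.seed(0)
samples = []
for _ in range(5):
    step = random.uniform(-1.0, 1.0)
    a, b = a0 + step, b0 - step     # move along the null-space direction
    samples.append((a, b))
    assert abs(model(a, b) - y_cal) < 1e-12
print(len(samples), "calibration-equivalent parameter sets generated")
```

For a nonlinear model the null space is only locally valid, which is why NSMC re-polishes each perturbed set with a few gradient steps; the spread of predictions across such sets is what feeds the predictive uncertainty analysis.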
Local overfishing may be avoided by examining parameters of a spatio-temporal model
Shackell, Nancy; Mills Flemming, Joanna
2017-01-01
Spatial erosion of stock structure through local overfishing can lead to stock collapse because fish often prefer certain locations, and fisheries tend to focus on those locations. Fishery managers are challenged to maintain the integrity of the entire stock and require scientific approaches that provide them with sound advice. Here we propose a Bayesian hierarchical spatio-temporal modelling framework for fish abundance data to estimate key parameters that define spatial stock structure: persistence (similarity of spatial structure over time), connectivity (coherence of temporal pattern over space), and spatial variance (variation across the seascape). The consideration of these spatial parameters in the stock assessment process can help identify the erosion of structure and assist in preventing local overfishing. We use Atlantic cod (Gadus morhua) in eastern Canada as a case study and examine the behaviour of these parameters from the height of the fishery through its collapse. We identify clear signals in parameter behaviour under circumstances of destructive stock erosion as well as for recovery of spatial structure even when combined with a non-recovery in abundance. Further, our model reveals that the spatial pattern of areas of high and low density persists over the 41 years of available data and identifies the remnant patches. Models of this sort are crucial to recovery plans if we are to identify and protect remaining sources of recolonization for Atlantic cod. Our method is immediately applicable to other exploited species. PMID:28886179
Local overfishing may be avoided by examining parameters of a spatio-temporal model.
Carson, Stuart; Shackell, Nancy; Mills Flemming, Joanna
2017-01-01
Spatial erosion of stock structure through local overfishing can lead to stock collapse because fish often prefer certain locations, and fisheries tend to focus on those locations. Fishery managers are challenged to maintain the integrity of the entire stock and require scientific approaches that provide them with sound advice. Here we propose a Bayesian hierarchical spatio-temporal modelling framework for fish abundance data to estimate key parameters that define spatial stock structure: persistence (similarity of spatial structure over time), connectivity (coherence of temporal pattern over space), and spatial variance (variation across the seascape). The consideration of these spatial parameters in the stock assessment process can help identify the erosion of structure and assist in preventing local overfishing. We use Atlantic cod (Gadus morhua) in eastern Canada as a case study and examine the behaviour of these parameters from the height of the fishery through its collapse. We identify clear signals in parameter behaviour under circumstances of destructive stock erosion as well as for recovery of spatial structure even when combined with a non-recovery in abundance. Further, our model reveals that the spatial pattern of areas of high and low density persists over the 41 years of available data and identifies the remnant patches. Models of this sort are crucial to recovery plans if we are to identify and protect remaining sources of recolonization for Atlantic cod. Our method is immediately applicable to other exploited species.
NASA Astrophysics Data System (ADS)
Ramanan, Natarajan; Kozman, Austin; Sims, James B.
2000-06-01
As the lithography industry moves toward finer features, specifications on temperature uniformity of the bake plates are expected to become more stringent. Consequently, aggressive improvements are needed to conventional bake station designs to make them perform significantly better than current market requirements. To this end, we have conducted a rigorous study that combines state-of-the-art simulation tools and experimental methods to predict the impact of the parameters that influence the uniformity of the wafer in proximity bake. The key observation from this detailed study is that the temperature uniformity of the wafer in proximity mode depends on a number of parameters in addition to the uniformity of the bake plate itself. These parameters include the lid design, the air flow distribution around the bake chamber, bake plate design and flatness of the bake plate and wafer. By performing careful experimental studies that were guided by extensive numerical simulations, we were able to understand the relative importance of each of these parameters. In an orderly fashion, we made appropriate design changes to curtail or eliminate the nonuniformity caused by each of these parameters. After implementing all these changes, we have now been able to match or improve the temperature uniformity of the wafer in proximity with that of a contact measurement on the bake plate. The wafer temperature uniformity is also very close to the theoretically predicted uniformity of the wafer.
NASA Astrophysics Data System (ADS)
Salvucci, Guido D.; Gentine, Pierre
2013-04-01
The ability to predict terrestrial evapotranspiration (E) is limited by the complexity of rate-limiting pathways as water moves through the soil, vegetation (roots, xylem, stomata), canopy air space, and the atmospheric boundary layer. The impossibility of specifying the numerous parameters required to model this process in full spatial detail has necessitated spatially upscaled models that depend on effective parameters such as the surface vapor conductance (Csurf). Csurf accounts for the biophysical and hydrological effects on diffusion through the soil and vegetation substrate. This approach, however, requires either site-specific calibration of Csurf to measured E, or further parameterization based on metrics such as leaf area, senescence state, stomatal conductance, soil texture, soil moisture, and water table depth. Here, we show that this key, rate-limiting, parameter can be estimated from an emergent relationship between the diurnal cycle of the relative humidity profile and E. The relation is that the vertical variance of the relative humidity profile is less than would occur for increased or decreased evaporation rates, suggesting that land-atmosphere feedback processes minimize this variance. It is found to hold over a wide range of climate conditions (arid-humid) and limiting factors (soil moisture, leaf area, energy). With this relation, estimates of E and Csurf can be obtained globally from widely available meteorological measurements, many of which have been archived since the early 1900s. In conjunction with precipitation and stream flow, long-term E estimates provide insights and empirical constraints on projected accelerations of the hydrologic cycle.
Salvucci, Guido D; Gentine, Pierre
2013-04-16
The ability to predict terrestrial evapotranspiration (E) is limited by the complexity of rate-limiting pathways as water moves through the soil, vegetation (roots, xylem, stomata), canopy air space, and the atmospheric boundary layer. The impossibility of specifying the numerous parameters required to model this process in full spatial detail has necessitated spatially upscaled models that depend on effective parameters such as the surface vapor conductance (C(surf)). C(surf) accounts for the biophysical and hydrological effects on diffusion through the soil and vegetation substrate. This approach, however, requires either site-specific calibration of C(surf) to measured E, or further parameterization based on metrics such as leaf area, senescence state, stomatal conductance, soil texture, soil moisture, and water table depth. Here, we show that this key, rate-limiting, parameter can be estimated from an emergent relationship between the diurnal cycle of the relative humidity profile and E. The relation is that the vertical variance of the relative humidity profile is less than would occur for increased or decreased evaporation rates, suggesting that land-atmosphere feedback processes minimize this variance. It is found to hold over a wide range of climate conditions (arid-humid) and limiting factors (soil moisture, leaf area, energy). With this relation, estimates of E and C(surf) can be obtained globally from widely available meteorological measurements, many of which have been archived since the early 1900s. In conjunction with precipitation and stream flow, long-term E estimates provide insights and empirical constraints on projected accelerations of the hydrologic cycle.
Co-evolving prisoner's dilemma: Performance indicators and analytic approaches
NASA Astrophysics Data System (ADS)
Zhang, W.; Choi, C. W.; Li, Y. S.; Xu, C.; Hui, P. M.
2017-02-01
Understanding the intrinsic relation between the dynamical processes in a co-evolving network and the necessary ingredients in formulating a reliable theory is an important question and a challenging task. Using two slightly different definitions of performance indicator in the context of a co-evolving prisoner's dilemma game, it is shown that very different cooperative levels result and theories of different complexity are required to understand the key features. When the payoff per opponent is used as the indicator (Case A), non-cooperative strategy has an edge and dominates in a large part of the parameter space formed by the cutting-and-rewiring probability and the strategy imitation probability. When the payoff from all opponents is used (Case B), cooperative strategy has an edge and dominates the parameter space. Two distinct phases, one homogeneous and dynamical and another inhomogeneous and static, emerge and the phase boundary in the parameter space is studied in detail. A simple theory assuming an average competing environment for cooperative agents and another for non-cooperative agents is shown to perform well in Case A. The same theory, however, fails badly for Case B. It is necessary to include more spatial correlation into a theory for Case B. We show that the local configuration approximation, which takes into account the different competing environments for agents with different strategies and degrees, is needed to give reliable results for Case B. The results illustrate that formulating a proper theory requires both a conceptual understanding of the effects of the adaptive processes in the problem and a delicate balance between simplicity and accuracy.
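The two indicators can be stated concretely for one agent in a networked prisoner's dilemma: Case A averages the payoff over the agent's opponents, Case B sums it. The payoff values below satisfy the usual ordering T > R > P > S but are otherwise arbitrary illustrative choices.

```python
# Case A (payoff per opponent) vs. Case B (total payoff) for one agent.
R, S, T, P = 1.0, 0.0, 1.5, 0.1   # illustrative PD payoffs, T > R > P > S

def payoff(my, other):
    return {('C', 'C'): R, ('C', 'D'): S,
            ('D', 'C'): T, ('D', 'D'): P}[(my, other)]

def indicators(my_strategy, neighbor_strategies):
    total = sum(payoff(my_strategy, s) for s in neighbor_strategies)
    return total / len(neighbor_strategies), total   # (Case A, Case B)

per_opponent, total = indicators('C', ['C', 'C', 'D', 'C'])
print(per_opponent, total)  # 0.75 3.0
```

The distinction matters because under Case B a well-connected cooperator accumulates payoff from many mutual-cooperation links, whereas Case A normalizes that advantage away, which is consistent with cooperation dominating the parameter space only in Case B.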
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Amidan, Brett G.; Hu, Rebecca
2011-11-28
This report summarizes previous laboratory studies to characterize the performance of methods for collecting, storing/transporting, processing, and analyzing samples from surfaces contaminated by Bacillus anthracis or related surrogates. The focus is on plate culture and count estimates of surface contamination for swab, wipe, and vacuum samples of porous and nonporous surfaces. Summaries of the previous studies and their results were assessed to identify gaps in information needed as inputs to calculate key parameters critical to risk management in biothreat incidents. One key parameter is the number of samples needed to make characterization or clearance decisions with specified statistical confidence. Other key parameters include the ability to calculate, following contamination incidents, (1) estimates of Bacillus anthracis contamination, as well as the bias and uncertainties in the estimates, and (2) confidence in characterization and clearance decisions for contaminated or decontaminated buildings. Gaps in knowledge and understanding identified during the summary of the studies are discussed and recommendations are given for future studies.
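The number-of-samples parameter has a simple closed form under strong assumptions: with simple random sampling and perfect sample recovery (a strong assumption given the recovery efficiencies such studies characterize), the number of samples needed so that contamination covering a fraction p of locations is detected with confidence 1 - α is n = ⌈ln α / ln(1 - p)⌉.

```python
import math

def samples_needed(p, alpha=0.05):
    """Samples required to see >= 1 positive with confidence 1 - alpha,
    if a fraction p of sampled locations is truly contaminated.
    Assumes simple random sampling and perfect recovery efficiency."""
    return math.ceil(math.log(alpha) / math.log(1 - p))

print(samples_needed(0.01))  # 299 samples for 1% contamination, 95% conf.
```

Imperfect recovery effectively shrinks p (a swab that recovers half the spores behaves like sampling a half-as-contaminated surface), which is why the collection-efficiency data summarized in the report feed directly into this calculation.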
Sweetapple, Christine; Fu, Guangtao; Butler, David
2013-09-01
This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher-order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed in one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher-order effects of these parameters are also shown to be a key source of uncertainty in effluent quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
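The one-factor-at-a-time screening step works like this: perturb each parameter around a baseline, hold the others fixed, and rank parameters by the resulting output change. The emission model below is a made-up GWP-weighted stand-in, not the wastewater plant model of the study; a variance-based method such as Sobol's would additionally vary parameters jointly to expose the interaction effects OAT misses.

```python
# One-factor-at-a-time (OAT) screening on a toy emissions model.
def emissions(params):
    ch4, n2o, energy = params['ch4'], params['n2o'], params['energy']
    return 25 * ch4 + 298 * n2o + 0.5 * energy   # GWP-weighted toy total

base = {'ch4': 2.0, 'n2o': 0.5, 'energy': 100.0}
effects = {}
for name in base:
    hi = dict(base, **{name: base[name] * 1.1})   # +10% perturbation
    lo = dict(base, **{name: base[name] * 0.9})   # -10% perturbation
    effects[name] = abs(emissions(hi) - emissions(lo))

most_sensitive = max(effects, key=effects.get)
print(most_sensitive)  # 'n2o' -- nitrous oxide dominates, as in the study
```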
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
It is hoped that this study provides new insight into developing more accurate and reliable biological models from limited and low-quality experimental data.
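The Akaike Information Criterion step used above for model selection can be sketched on its own: fit two nested candidate models by least squares and prefer the one with the lower AIC = n·ln(RSS/n) + 2k, so an extra parameter must buy enough fit to pay its penalty. The data below are synthetic, not from the biological models of the study.

```python
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [3.1, 4.9, 7.2, 8.9, 11.1, 12.8]   # roughly y = 2x + 1

# Model 1: y = a*x (1 parameter, forced through the origin)
a1 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
rss1 = sum((yi - a1 * xi) ** 2 for xi, yi in zip(x, y))

# Model 2: y = a*x + b (2 parameters, ordinary least squares)
n = len(x)
mx, my = sum(x) / n, sum(y) / n
a2 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
      / sum((xi - mx) ** 2 for xi in x))
b2 = my - a2 * mx
rss2 = sum((yi - (a2 * xi + b2)) ** 2 for xi, yi in zip(x, y))

def aic(rss, k):
    return n * math.log(rss / n) + 2 * k

print(aic(rss2, 2) < aic(rss1, 1))  # True: the intercept earns its penalty
```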
Dallmann, André; Ince, Ibrahim; Meyer, Michaela; Willmann, Stefan; Eissing, Thomas; Hempel, Georg
2017-11-01
In the past years, several repositories for anatomical and physiological parameters required for physiologically based pharmacokinetic modeling in pregnant women have been published. While providing a good basis, some important aspects can be further detailed. For example, they did not account for the variability associated with parameters or were lacking key parameters necessary for developing more detailed mechanistic pregnancy physiologically based pharmacokinetic models, such as the composition of pregnancy-specific tissues. The aim of this meta-analysis was to provide an updated and extended database of anatomical and physiological parameters in healthy pregnant women that also accounts for changes in the variability of a parameter throughout gestation and for the composition of pregnancy-specific tissues. A systematic literature search was carried out to collect study data on pregnancy-related changes of anatomical and physiological parameters. For each parameter, a set of mathematical functions was fitted to the data and to the standard deviation observed among the data. The best performing functions were selected based on numerical and visual diagnostics as well as based on physiological plausibility. The literature search yielded 473 studies, 302 of which met the criteria to be further analyzed and compiled in a database. In total, the database encompassed 7729 data. Although the availability of quantitative data for some parameters remained limited, mathematical functions could be generated for many important parameters. Gaps were filled based on qualitative knowledge and based on physiologically plausible assumptions. The presented results facilitate the integration of pregnancy-dependent changes in anatomy and physiology into mechanistic population physiologically based pharmacokinetic models. 
Such models can ultimately provide a valuable tool to investigate the pharmacokinetics during pregnancy in silico and support informed decision making regarding optimal dosing regimens in this vulnerable special population.
Recent advances in ultrafast-laser-based spectroscopy and imaging for reacting plasmas and flames
NASA Astrophysics Data System (ADS)
Patnaik, Anil K.; Adamovich, Igor; Gord, James R.; Roy, Sukesh
2017-10-01
Reacting flows and plasmas are prevalent in a wide array of systems involving defense, commercial, space, energy, medical, and consumer products. Understanding the complex physical and chemical processes involving reacting flows and plasmas requires measurements of key parameters, such as temperature, pressure, electric field, velocity, and number densities of chemical species. Time-resolved measurements of key chemical species and temperature are required to determine kinetics related to the chemical reactions and transient phenomena. Laser-based, noninvasive linear and nonlinear spectroscopic approaches have proved to be very valuable in providing key insights into the physico-chemical processes governing reacting flows and plasmas as well as validating numerical models. The advent of kilohertz rate amplified femtosecond lasers has expanded the multidimensional imaging of key atomic species such as H, O, and N in a significant way, providing unprecedented insight into preferential diffusion and production of these species under chemical reactions or electric-field driven processes. These lasers not only provide 2D imaging of chemical species but have the ability to perform measurements free of various interferences. Moreover, these lasers allow 1D and 2D temperature-field measurements, which were quite unimaginable only a few years ago. The rapid growth of the ultrafast-laser-based spectroscopic measurements has been fueled by the need to achieve the following when measurements are performed in reacting flows and plasmas. They are: (1) interference-free measurements (collision broadening, photolytic dissociation, Stark broadening, etc), (2) time-resolved single-shot measurements at a rate of 1-10 kHz, (3) spatially-resolved measurements, (4) higher dimensionality (line, planar, or volumetric), and (5) simultaneous detection of multiple species. 
The overarching goal of this article is to review the current state-of-the-art ultrafast-laser-based spectroscopic techniques and their remarkable development in the past two decades in meeting one or all of the above five goals for the spectroscopic measurement of temperature, number density of the atomic and molecular species, and electric field.
Essential amino acids: master regulators of nutrition and environmental footprint?
Tessari, Paolo; Lante, Anna; Mosca, Giuliano
2016-01-01
The environmental footprint of animal food production is considered several-fold greater than that of crop cultivation. Therefore, the choice between animal and vegetarian diets may have a relevant environmental impact. In such comparisons, however, an often neglected issue is the nutritional value of foods. Previous estimates of nutrients' environmental footprint had predominantly been based on either raw food weight or caloric content, not on human requirements. Essential amino acids (EAAs) are key parameters in food quality assessment. We re-evaluated here the environmental footprint (expressed both as land use for production and as greenhouse gas emission (GHGE)) of some animal and vegetal foods, titrated to provide EAA amounts matching human requirements. Production of high-quality animal proteins, in amounts sufficient to match the Recommended Daily Allowances of all the EAAs, would require a land use and a GHGE approximately equal to, greater than, or smaller than (by only ±1-fold) that necessary to produce vegetal proteins, except for soybeans, which exhibited the smallest footprint. This new analysis downsizes the common concept of a large advantage, with respect to environmental footprint, of crop vs. animal food production when human requirements of EAAs are used for reference. PMID:27221394
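The "titration" idea amounts to simple arithmetic: scale each food so it supplies the daily requirement of a limiting EAA, then compare footprints of the scaled amounts. All numbers below, including the lysine RDA and food compositions, are illustrative assumptions, not the paper's data.

```python
# Footprint per RDA-matched EAA supply, using lysine as the limiting EAA.
rda_lysine_g = 2.1          # g/day, illustrative adult requirement
foods = {
    # name:    (lysine g per kg food, land use m^2*yr per kg food)
    'beef':     (27.0, 160.0),
    'soybeans': (24.0,   9.0),
}
footprint = {name: rda_lysine_g / lys_per_kg * land_per_kg
             for name, (lys_per_kg, land_per_kg) in foods.items()}
print(footprint['soybeans'] < footprint['beef'])  # True
```

The same scaling applied to caloric content or raw weight instead of EAA supply gives different rankings, which is the paper's central methodological point.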
Edwards, Joel; Othman, Maazuza; Burn, Stewart; Crossin, Enda
2016-10-01
The collection of source separated kerbside municipal food waste (SSFW) is being incentivised in Australia; however, such a collection is likely to increase the fuel and time a collection truck fleet requires. Therefore, waste managers need to determine whether the incentives outweigh the cost. With literature scarcely describing the magnitude of the increase, and with local parameters playing a crucial role in accurately modelling kerbside collection, this paper develops a new general mathematical model that predicts the energy and time requirements of a collection regime whilst incorporating the unique variables of different jurisdictions. The model, Municipal solid waste collect (MSW-Collect), is validated and shown to be more accurate at predicting fuel consumption and trucks required than other common collection models. When predicting changes incurred for five different SSFW collection scenarios, results show that SSFW scenarios require an increase in fuel ranging from 1.38% to 57.59%. There is also a need for additional trucks across most SSFW scenarios tested. All SSFW scenarios are ranked and analysed with regard to fuel consumption; sensitivity analysis is conducted to test key assumptions. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
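The abstract does not give MSW-Collect's equations, so the following is only a minimal illustrative sketch of the kind of fleet model it describes; every parameter name and value here is hypothetical, not taken from the paper:

```python
import math

def collection_requirements(households, waste_per_household_kg, truck_capacity_kg,
                            stop_time_s, fuel_per_stop_l, travel_km, fuel_per_km_l,
                            shift_hours=8.0, avg_speed_kmh=25.0):
    """Rough estimate of trips, fuel, time, and trucks for one kerbside round.

    All parameters are illustrative stand-ins for the jurisdiction-specific
    variables the paper says such a model must incorporate.
    """
    total_waste = households * waste_per_household_kg
    trips = math.ceil(total_waste / truck_capacity_kg)          # loads to depot
    fuel = households * fuel_per_stop_l + travel_km * fuel_per_km_l
    time_h = (households * stop_time_s) / 3600.0 + travel_km / avg_speed_kmh
    trucks = math.ceil(time_h / shift_hours)                    # rough fleet size
    return {"trips": trips, "fuel_l": fuel, "time_h": time_h, "trucks": trucks}
```

A separate SSFW stream would be modelled as a second call with its own stop times and route lengths, making the fuel increase the paper quantifies simply the difference between the two scenarios.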
Essential amino acids: master regulators of nutrition and environmental footprint?
Tessari, Paolo; Lante, Anna; Mosca, Giuliano
2016-05-25
The environmental footprint of animal food production is considered several-fold greater than that of crop cultivation. Therefore, the choice between animal and vegetarian diets may have a relevant environmental impact. In such comparisons, however, an often neglected issue is the nutritional value of foods. Previous estimates of nutrients' environmental footprint had predominantly been based on either food raw weight or caloric content, not on human requirements. Essential amino acids (EAAs) are key parameters in food quality assessment. We re-evaluated here the environmental footprint (expressed both as land use for production and as greenhouse gas emissions (GHGE)) of some animal and vegetal foods, titrated to provide EAA amounts matching human requirements. Production of high-quality animal proteins, in amounts sufficient to match the Recommended Daily Allowances of all the EAAs, would require a land use and a GHGE approximately equal to, greater or smaller than (by only ±1-fold), that necessary to produce vegetal proteins, except for soybeans, which exhibited the smallest footprint. This new analysis downsizes the common concept of a large advantage, with respect to environmental footprint, of crop vs. animal food production, when human requirements of EAAs are used for reference.
Sunk cost and work ethic effects reflect suboptimal choice between different work requirements.
Magalhães, Paula; White, K Geoffrey
2013-03-01
We investigated suboptimal choice between different work requirements in pigeons (Columba livia), namely the sunk cost effect, an irrational tendency to persist with an initial investment, despite the availability of a better option. Pigeons chose between two keys, one with a fixed work requirement to food of 20 pecks (left key), and the other with a work requirement to food which varied across conditions (center key). On some trials within each session, such choices were preceded by an investment of 35 pecks on the center key, whereas on others they were not. On choice trials preceded by the investment, the pigeons tended to stay and complete the schedule associated with the center key, even when the number of pecks to obtain reward was greater than for the concurrently available left key. This result indicates that pigeons, like humans, commit the sunk cost effect. With higher work requirements, this preference was extended to trials where there was no initial investment, so an overall preference for the key associated with more work was evident, consistent with the work ethic effect. We conclude that a more general work ethic effect is amplified by the effect of the prior investment, that is, the sunk cost effect. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Farhadi, L.; Abdolghafoorian, A.
2015-12-01
The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which represents a heat exchange as latent heat flux. Accurate estimation of the fluxes of heat and moisture is of significant importance in many fields such as hydrology, climatology and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e. moisture and heat diffusion) and their uncertainty in land surface models. These equations are coupled through the flux of evaporation. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function consists of aggregated errors on the states (i.e. moisture and temperature) with respect to observations and on the parameter estimates with respect to prior values over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function is an approximation of the posterior uncertainty of the parameter estimates. Uncertainty of the estimated fluxes is obtained by propagating the parameter uncertainty through linear and nonlinear functions of the key parameters using the method of First Order Second Moment (FOSM). Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. Accuracy of the method is assessed at point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at the selected AmeriFlux stations. This method can be applied to diverse climates and land surface conditions with different spatial scales, using remotely sensed measurements of surface moisture and temperature states.
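The FOSM step named in the abstract linearizes the flux function about the parameter estimate and propagates the posterior parameter covariance through the Jacobian, Var(f) ≈ J Σ Jᵀ. A minimal sketch of that propagation for a scalar flux (the flux function and covariance here are placeholders, not the paper's model):

```python
import numpy as np

def fosm_variance(f, p, cov_p, eps=1e-6):
    """First Order Second Moment: approximate Var(f(p)) by pushing the
    parameter covariance cov_p through a finite-difference Jacobian of f."""
    p = np.asarray(p, dtype=float)
    f0 = f(p)
    # One-sided finite-difference gradient of the scalar flux f w.r.t. p
    jac = np.array([(f(p + eps * e) - f0) / eps for e in np.eye(len(p))])
    return jac @ cov_p @ jac  # J Sigma J^T for a scalar output
```

For a linear flux the result is exact; for the nonlinear flux models in the paper it is the first-order approximation the FOSM name implies.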
NASA Astrophysics Data System (ADS)
Bresnahan, Patricia A.; Pukinskis, Madeleine; Wiggins, Michael
1999-03-01
Image quality assessment systems differ greatly with respect to the number and types of images they need to evaluate, and their overall architectures. Managers of these systems, however, all need to be able to tune and evaluate system performance, requirements often overlooked or under-designed during project planning. Performance tuning tools allow users to define acceptable quality standards for image features and attributes by adjusting parameter settings. Performance analysis tools allow users to evaluate and/or predict how well a system performs in a given parameter state. While image assessment algorithms are becoming quite sophisticated, duplicating or surpassing the human decision making process in their speed and reliability, they often require a greater investment in 'training' or fine tuning of parameters in order to achieve optimum performance. This process may involve the analysis of hundreds or thousands of images, generating a large database of files and statistics that can be difficult to sort through and interpret. Compounding the difficulty is the fact that personnel charged with tuning and maintaining the production system may not have the statistical or analytical background required for the task. Meanwhile, hardware innovations have greatly increased the volume of images that can be handled in a given time frame, magnifying the consequences of running a production site with an inadequately tuned system. In this paper, some general requirements for a performance evaluation and tuning data visualization system are discussed. A custom engineered solution to the tuning and evaluation problem is then presented, developed within the context of a high volume image quality assessment, data entry, OCR, and image archival system. A key factor influencing the design of the system was the context-dependent definition of image quality, as perceived by a human interpreter.
This led to the development of a five-level, hierarchical approach to image quality evaluation. Lower-level pass-fail conditions and decision rules were coded into the system. Higher-level image quality states were defined by allowing the users to interactively adjust the system's sensitivity to various image attributes by manipulating graphical controls. Results were presented in easily interpreted bar graphs. These graphs were mouse-sensitive, allowing the user to more fully explore the subsets of data indicated by various color blocks. In order to simplify the performance evaluation and tuning process, users could choose to view the results of (1) the existing system parameter state, (2) the results of any arbitrary parameter values they chose, or (3) the results of a quasi-optimum parameter state, derived by applying a decision rule to a large set of possible parameter states. Giving managers easy-to-use tools for defining the more subjective aspects of quality resulted in a system that responded to contextual cues that are difficult to hard-code. It had the additional advantage of allowing the definition of quality to evolve over time, as users became more knowledgeable as to the strengths and limitations of an automated quality inspection system.
NASA Astrophysics Data System (ADS)
Lücking, Charlotte; Colombo, Camilla; McInnes, Colin R.
2012-08-01
The growing population of space debris poses a serious risk to the future of space flight. To effectively manage the increase of debris in orbit, end-of-life disposal has become a key requirement for future missions. This poses a challenge for Medium Earth Orbit (MEO) spacecraft, which require a large Δv to re-enter the atmosphere or reach the geostationary graveyard orbit. This paper further explores a passive strategy based on the joint effects of solar radiation pressure and the Earth's oblateness acting on a high area-to-mass-ratio object. The concept was previously presented as an analytical planar model. This paper uses a full 3D model to validate the analytical results numerically, first for equatorial circular orbits and then for higher inclinations. It is shown that for higher inclinations the initial position of the Sun and the right ascension of the ascending node become increasingly important. A region of very low required area-to-mass-ratio is identified in the parameter space of semi-major axis and inclination, which occurs for altitudes below 10,000 km.
NASA Technical Reports Server (NTRS)
Bloomfield, Harvey S.; Heller, Jack A.
1987-01-01
A preliminary feasibility assessment of the integration of reactor power system concepts with a projected growth space station architecture was conducted to address a variety of installation, operational, disposition, and safety issues. A previous NASA sponsored study, which showed the advantages of space station - attached concepts, served as the basis for this study. A study methodology was defined and implemented to assess compatible combinations of reactor power installation concepts, disposal destinations, and propulsion methods. Three installation concepts that met a set of integration criteria were characterized from a configuration and operational viewpoint, with end-of-life disposal mass identified. Disposal destinations that met current aerospace nuclear safety criteria were identified and characterized from an operational and energy requirements viewpoint, with delta-V energy requirement as a key parameter. Chemical propulsion methods that met current and near-term application criteria were identified and payload mass and delta-V capabilities were characterized. These capabilities were matched against concept disposal mass and destination delta-V requirements to assess the feasibility of each combination.
A feasibility assessment of nuclear reactor power system concepts for the NASA Growth Space Station
NASA Technical Reports Server (NTRS)
Bloomfield, H. S.; Heller, J. A.
1986-01-01
A preliminary feasibility assessment of the integration of reactor power system concepts with a projected growth Space Station architecture was conducted to address a variety of installation, operational, disposition, and safety issues. A previous NASA sponsored study, which showed the advantages of Space Station - attached concepts, served as the basis for this study. A study methodology was defined and implemented to assess compatible combinations of reactor power installation concepts, disposal destinations, and propulsion methods. Three installation concepts that met a set of integration criteria were characterized from a configuration and operational viewpoint, with end-of-life disposal mass identified. Disposal destinations that met current aerospace nuclear safety criteria were identified and characterized from an operational and energy requirements viewpoint, with delta-V energy requirement as a key parameter. Chemical propulsion methods that met current and near-term application criteria were identified and payload mass and delta-V capabilities were characterized. These capabilities were matched against concept disposal mass and destination delta-V requirements to assess the feasibility of each combination.
Shuttle S-band communications technical concepts
NASA Technical Reports Server (NTRS)
Seyl, J. W.; Seibert, W. W.; Porter, J. A.; Eggers, D. S.; Novosad, S. W.; Vang, H. A.; Lenett, S. D.; Lewton, W. A.; Pawlowski, J. F.
1985-01-01
Using the S-band communications system, the shuttle orbiter can communicate directly with the Earth via the Ground Spaceflight Tracking and Data Network (GSTDN) or via the Tracking and Data Relay Satellite System (TDRSS). The S-band frequencies provide the primary links for direct Earth and TDRSS communications during all launch and entry/landing phases of shuttle missions. On orbit, S-band links are used when TDRSS Ku-band is not available, when conditions require orbiter attitudes unfavorable to Ku-band communications, or when the payload bay doors are closed. The S-band communications functional requirements, the orbiter hardware configuration, and the NASA S-band communications network are described. The requirements and implementation concepts which resulted in techniques for shuttle S-band hardware development discussed include: (1) digital voice delta modulation; (2) convolutional coding/Viterbi decoding; (3) critical modulation index for phase modulation using a Costas loop (phase-shift keying) receiver; (4) optimum digital data modulation parameters for continuous-wave frequency modulation; (5) intermodulation effects of subcarrier ranging and time-division multiplexing data channels; (6) radiofrequency coverage; and (7) despreading techniques under poor signal-to-noise conditions. Channel performance is reviewed.
Reliability and performance evaluation of systems containing embedded rule-based expert systems
NASA Technical Reports Server (NTRS)
Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.
1989-01-01
A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
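The abstract names a Markov reliability model as the first stage but gives no states or rates, so the following is only a hypothetical sketch of such a model: three states (operational, degraded, failed-absorbing) with made-up per-step transition probabilities, and reliability read off as the probability of not having reached the failed state:

```python
import numpy as np

# Hypothetical 3-state discrete-time Markov reliability model.
# States: 0 = fully operational, 1 = degraded (expert system advice
# partially unreliable), 2 = failed (absorbing). Probabilities invented.
M = np.array([[0.990, 0.008, 0.002],
              [0.000, 0.950, 0.050],
              [0.000, 0.000, 1.000]])

def reliability(steps, p0=(1.0, 0.0, 0.0)):
    """P(system has not failed) after `steps` transitions from p0."""
    p = np.asarray(p0) @ np.linalg.matrix_power(M, steps)
    return 1.0 - p[2]
```

In the paper's third stage, parameters like the 0.008 degradation rate would come from the second-stage evaluation of the expert system rather than being fixed constants.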
The Inverse Optimal Control Problem for a Three-Loop Missile Autopilot
NASA Astrophysics Data System (ADS)
Hwang, Donghyeok; Tahk, Min-Jea
2018-04-01
The performance characteristics of the autopilot must provide a fast response to intercept a maneuvering target and reasonable robustness for system stability under the effect of un-modeled dynamics and noise. In the conventional approach, the three-loop autopilot design is handled through the time constant, damping factor and open-loop crossover frequency to achieve the desired performance requirements. Note that general optimal control theory can also be used to obtain the same gains as those obtained from the conventional approach. The key idea of using optimal control techniques for feedback gain design revolves around appropriate selection and interpretation of the performance index for which the control is optimal. This paper derives an explicit expression which relates the weight parameters appearing in the quadratic performance index to design parameters such as the open-loop crossover frequency, phase margin, damping factor, or time constant. Since not every choice of design parameters guarantees the existence of an optimal control law, explicit inequalities, named the optimality criteria for the three-loop autopilot (OC3L), are derived to find the set of design parameters for which the control law is optimal. Finally, based on OC3L, an efficient gain selection procedure is developed, where the time constant is set as the design objective and the open-loop crossover frequency and phase margin act as design constraints. The effectiveness of the proposed technique is illustrated through numerical simulations.
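The paper works the inverse direction (classical design parameters → quadratic weights); as a generic illustration of the forward direction it inverts, here is the standard LQR computation of a gain from given weights, on a stand-in double-integrator plant rather than the paper's autopilot model (all matrices hypothetical):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Stand-in plant: double integrator x1' = x2, x2' = u (not the missile model).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # hypothetical state weights of the quadratic index
R = np.array([[1.0]])      # hypothetical control weight

# Solve the continuous algebraic Riccati equation and form u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
```

Guaranteed by LQR theory, the closed loop A - BK is stable; the paper's OC3L inequalities characterize which classical design-parameter choices correspond to some such Q, R.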
Solar oxidation and removal of arsenic--Key parameters for continuous flow applications.
Gill, L W; O'Farrell, C
2015-12-01
Solar oxidation to remove arsenic from water has previously been investigated as a batch process. This research has investigated the kinetic parameters for the design of a continuous flow solar reactor to remove arsenic from contaminated groundwater supplies. Continuous flow recirculated batch experiments were carried out under artificial UV light to investigate the effect of different parameters on arsenic removal efficiency. Inlet water arsenic concentrations of up to 1000 μg/L were reduced to below 10 μg/L, requiring 12 mg/L iron after receiving 12 kJUV/L radiation. Citrate, however, was somewhat surprisingly found to have a detrimental effect on the removal process in the continuous flow reactor studies, contrary to results found in batch scale tests. The impact of other typical groundwater quality parameters (phosphate and silica) on the process, due to their competition with arsenic for photooxidation products, revealed a much higher sensitivity to phosphate ions compared to silicate. Other results showed no benefit from the addition of TiO2 photocatalyst but enhanced arsenic removal at higher temperatures up to 40 °C. Overall, these results have indicated the kinetic envelope from which a continuous flow SORAS single pass system could be more confidently designed for a full-scale community groundwater application at a village level. Copyright © 2015 Elsevier Ltd. All rights reserved.
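The abstract reports concentrations and a UV dose but not the rate law; a commonly used first-order dose-response sketch (not necessarily the kinetic form used in the paper, and with an illustrative rate constant) shows how such numbers relate:

```python
import math

def residual_arsenic(c0_ug_l, k_per_kj_l, dose_kj_l):
    """First-order removal with respect to delivered UV dose: C = C0 exp(-kD).
    k is an illustrative fitted constant, not a value from the paper."""
    return c0_ug_l * math.exp(-k_per_kj_l * dose_kj_l)

def dose_required(c0_ug_l, target_ug_l, k_per_kj_l):
    """UV dose (kJ/L) needed to reach a target concentration."""
    return math.log(c0_ug_l / target_ug_l) / k_per_kj_l
```

With a hypothetical k ≈ 0.4 L/kJ, reducing 1000 μg/L to 10 μg/L needs ln(100)/0.4 ≈ 11.5 kJ/L, the same order as the 12 kJUV/L reported.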
Improving information retrieval in functional analysis.
Rodriguez, Juan C; González, Germán A; Fresno, Cristóbal; Llera, Andrea S; Fernández, Elmer A
2016-12-01
Transcriptome analysis is essential to understand the mechanisms regulating key biological processes and functions. The first step usually consists of identifying candidate genes; to find out which pathways are affected by those genes, however, functional analysis (FA) is mandatory. The most frequently used strategies for this purpose are Gene Set and Singular Enrichment Analysis (GSEA and SEA) over Gene Ontology. Several statistical methods have been developed and compared in terms of computational efficiency and/or statistical appropriateness. However, whether their results are similar or complementary, the sensitivity to parameter settings, or possible bias in the analyzed terms has not been addressed so far. Here, two GSEA and four SEA methods and their parameter combinations were evaluated in six datasets by comparing two breast cancer subtypes with well-known differences in genetic background and patient outcomes. We show that GSEA and SEA lead to different results depending on the chosen statistic, model and/or parameters. Both approaches provide complementary results from a biological perspective. Hence, an Integrative Functional Analysis (IFA) tool is proposed to improve information retrieval in FA. It provides a common gene expression analytic framework that grants a comprehensive and coherent analysis. Only a minimal user parameter setting is required, since the best SEA/GSEA alternatives are integrated. IFA utility was demonstrated by evaluating four prostate cancer and the TCGA breast cancer microarray datasets, which showed its biological generalization capabilities. Copyright © 2016 Elsevier Ltd. All rights reserved.
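The abstract compares SEA and GSEA statistics without giving formulas; the core of a typical SEA method is a one-sided hypergeometric (Fisher-type) over-representation test, sketched here (gene counts in the example are invented):

```python
from scipy.stats import hypergeom

def sea_pvalue(n_genes, n_term, n_selected, n_overlap):
    """Singular enrichment p-value: probability of drawing at least
    n_overlap genes annotated to a GO term when n_selected genes are
    sampled from a universe of n_genes containing n_term term genes."""
    # sf(k-1) gives P(X >= k) for the hypergeometric distribution
    return hypergeom.sf(n_overlap - 1, n_genes, n_term, n_selected)
```

For example, 10 hits from a 100-gene term in a 500-gene list drawn from 20,000 genes (expected ≈ 2.5 by chance) yields a small p-value; GSEA methods instead rank all genes and test the distribution of the term across the ranking, which is why the two approaches can disagree.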
Thermodynamic Derivation of the Activation Energy for Ice Nucleation
NASA Technical Reports Server (NTRS)
Barahona, D.
2015-01-01
Cirrus clouds play a key role in the radiative and hydrological balance of the upper troposphere. Their correct representation in atmospheric models requires an understanding of the microscopic processes leading to ice nucleation. A key parameter in the theoretical description of ice nucleation is the activation energy, which controls the flux of water molecules from the bulk of the liquid to the solid during the early stages of ice formation. In most studies it is estimated by direct association with the bulk properties of water, typically viscosity and self-diffusivity. As the environment in the ice-liquid interface may differ from that of the bulk, this approach may introduce bias in calculated nucleation rates. In this work a theoretical model is proposed to describe the transfer of water molecules across the ice-liquid interface. Within this framework the activation energy naturally emerges from the combination of the energy required to break hydrogen bonds in the liquid, i.e., the bulk diffusion process, and the work dissipated from the molecular rearrangement of water molecules within the ice-liquid interface. The new expression is introduced into a generalized form of classical nucleation theory. Even though no nucleation rate measurements are used to fit any of the parameters of the theory, the predicted nucleation rate is in good agreement with experimental results, even at temperatures as low as 190 K, where it tends to be underestimated by most models. It is shown that the activation energy has a strong dependency on temperature and a weak dependency on water activity. Such dependencies are masked by thermodynamic effects at temperatures typical of homogeneous freezing of cloud droplets; however, they may affect the formation of ice in haze aerosol particles.
The new model provides an independent estimation of the activation energy and the homogeneous ice nucleation rate, and it may help to improve the interpretation of experimental results and the development of parameterizations for cloud formation.
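In classical nucleation theory the activation energy enters the rate alongside the nucleation work, J = A·exp(−(ΔG* + ΔG_act)/kT); a minimal numerical sketch of that form (prefactor and energies are illustrative placeholders, not the paper's derived values):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def cnt_rate(prefactor, delta_g_star, delta_g_act, temp_k):
    """Classical-nucleation-theory rate J = A exp(-(dG* + dG_act)/kT).
    The paper derives dG_act from interfacial thermodynamics instead of
    bulk viscosity/self-diffusivity; values here are placeholders."""
    return prefactor * math.exp(-(delta_g_star + delta_g_act) / (K_B * temp_k))
```

The sketch makes the abstract's point mechanical: any bias in the activation energy shifts the exponent directly, so an interface-based ΔG_act changes the predicted rate by orders of magnitude at low temperature.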
NASA Technical Reports Server (NTRS)
Abney, Morgan B.; Perry, Jay L.
2016-01-01
Over the last 55 years, NASA has evolved life support for crewed space exploration vehicles from simple resupply during Project Mercury to the complex and highly integrated system of systems aboard the International Space Station. As NASA targets exploration destinations farther from low Earth orbit and mission durations of 500 to 1000 days, life support systems must evolve to meet new requirements. In addition to having more robust, reliable, and maintainable hardware, limiting resupply becomes critical for managing mission logistics and cost. Supplying a crew with the basics of food, water, and oxygen becomes more challenging as the destination ventures further from Earth. Aboard the ISS the Atmosphere Revitalization Subsystem (ARS) supplies the crew's oxygen demand by electrolyzing water. This approach makes water a primary logistics commodity that must be managed carefully. Chemical reduction of metabolic carbon dioxide (CO2) provides a method of recycling oxygen, thereby reducing the net ARS water demand and therefore minimizing logistics needs. Multiple methods have been proposed to achieve this recovery and have been reported in the literature. However, depending on the architecture and the technology approach, "oxygen recovery" can be defined in various ways. This discontinuity makes it difficult to compare technologies directly. In an effort to clarify community discussions of oxygen recovery, we propose specific definitions and describe the methodology used to arrive at those definitions. Additionally, we discuss key performance parameters for oxygen recovery technology development, including challenges with comparisons to the state of the art.
Etanercept (Enbrel®) alternative storage at ambient temperature.
Shannon, Edel; Daffy, Joanne; Jones, Heather; Paulson, Andrea; Vicik, Steven M
2017-01-01
Biologic disease-modifying antirheumatic drugs, including tumor necrosis factor inhibitors such as etanercept (Enbrel ® ), have improved outcomes for patients with rheumatic and other inflammatory diseases, with sustained remission being the optimal goal for patients with rheumatoid arthritis. Flexible and convenient treatment options, compatible with modern lifestyle, are important in helping patients maintain treatment and manage their disease. Etanercept drug product (DP) is available in lyophilized powder (Lyo) for solution injection, prefilled syringe, and prefilled pen presentations and is typically stored under refrigerated conditions. We aimed to generate a comprehensive analytical data package from stability testing of key quality attributes, consistent with regulatory requirements, to determine whether the product profile of etanercept is maintained at ambient temperature. Test methods assessing key attributes of purity, quality, potency, and safety were performed over time, following storage of etanercept DP presentations under a range of conditions. Results and statistical analysis from stability testing (based on size exclusion high-performance liquid chromatography, hydrophobic interaction chromatography, and sodium dodecyl sulfate-polyacrylamide gel electrophoresis Coomassie) across all etanercept presentations (10 and 25 mg/vial Lyo DP; 25 and 50 mg prefilled syringe DP; 50 mg prefilled pen DP) showed key stability-indicating parameters were within acceptable limits through the alternative storage condition of 25°C±2°C for 1 month. Stability testing performed in line with regulatory requirements supports a single period of storage for etanercept DP at an alternative storage condition of 25°C±2°C for up to 1 month within the approved expiry of the product. This alternative storage condition represents further innovation in the etanercept product lifecycle, providing greater flexibility and enhanced overall convenience for patients.
Chahal, Manjit; Celler, George K; Jaluria, Yogesh; Jiang, Wei
2012-02-13
Employing a semi-analytic approach, we study the influence of key structural and optical parameters on the thermo-optic characteristics of photonic crystal waveguide (PCW) structures on a silicon-on-insulator (SOI) platform. The power consumption and spatial temperature profile of such structures are given as explicit functions of various structural, thermal and optical parameters, offering physical insight not available in finite-element simulations. Agreement with finite-element simulations and experiments is demonstrated. Thermal enhancement of the air-bridge structure is analyzed. The practical limit of thermo-optic switching power in slow light PCWs is discussed, and the scaling with key parameters is analyzed. Optical switching with sub-milliwatt power is shown viable.
Fast adaptive optical system for the high-power laser beam correction in atmosphere
NASA Astrophysics Data System (ADS)
Kudryashov, Alexis; Lylova, Anna; Samarkin, Vadim; Sheldakova, Julia; Alexandrov, Alexander
2017-09-01
Key elements of the fast adaptive optical system (AOS), having a correction frequency of 1400 Hz, for atmospheric turbulence compensation are described in this paper. A water-cooled bimorph deformable mirror with 46 electrodes, a stacked-actuator deformable mirror with 81 piezoactuators, and a 2000 Hz Shack-Hartmann wavefront sensor were considered for controlling the light beam. The parameters of the turbulence at the 1.2 km path of the light propagation were measured and analyzed. The key parameters for such an adaptive system were worked out.
Key aspects of cost effective collector and solar field design
NASA Astrophysics Data System (ADS)
von Reeken, Finn; Nicodemo, Dario; Keck, Thomas; Weinrebe, Gerhard; Balz, Markus
2016-05-01
A study has been performed where different key parameters influencing solar field cost are varied. By using levelised cost of energy as the figure of merit, it is shown that parameters like the GoToStow wind speed, heliostat stiffness or tower height should be adapted to the respective site conditions from an economic point of view. The benchmark site Redstone (Northern Cape Province, South Africa) has been compared to an alternate site close to Phoenix (AZ, USA) regarding site conditions and their effect on cost-effective collector and solar field design.
Security of Y-00 and Similar Quantum Cryptographic Protocols
2004-11-16
security of Y-00 type protocols is clarified. Key words: Quantum cryptography PACS: 03.67.Dd A new approach to quantum cryptography called KCQ (keyed ... classical-noise key generation [2] or the well-known BB84 quantum protocol [3]. A special case called αη (or Y-00 in Japan) has been experimentally in... quantum noise for typical operating parameters. It weakens both the data and key security, possibly information-theoretically and certainly
Fortier, Sylvie; Basset, Fabien A.; Mbourou, Ginette A.; Favérial, Jérôme; Teasdale, Normand
2005-01-01
The purpose of this study was twofold: (a) to examine if kinetic and kinematic parameters of the sprint start could differentiate elite from sub-elite sprinters and, (b) to investigate whether providing feedback (FB) about selected parameters could improve starting block performance of intermediate sprinters over a 6-week training period. Twelve male sprinters, assigned to an elite or a sub-elite group, participated in Experiment 1. Eight intermediate sprinters participated in Experiment 2. All athletes were required to perform three sprint starts at maximum intensity followed by a 10-m run. To detect differences between elite and sub-elite groups, comparisons were made using t-tests for independent samples. Parameters reaching a significant group difference were retained for the linear discriminant analysis (LDA). The LDA yielded four discriminative kinetic parameters. Feedback about these selected parameters was given to sprinters in Experiment 2. For this experiment, data acquisition was divided into three periods. The first six sessions were without specific FB, whereas the following six sessions were enriched by kinetic FB. Finally, athletes underwent a retention session (without FB) 4 weeks after the twelfth session. Even though differences were found in the time to front peak force, the time to rear peak force, and the front peak force in the retention session, the results of the present study showed that providing FB about selected kinetic parameters differentiating elite from sub-elite sprinters did not improve the starting block performance of intermediate sprinters. Key Points: The linear discriminant analysis allows the identification of starting block parameters differentiating elite from sub-elite athletes. Six weeks of feedback does not alter starting block performance in a training context. The present results failed to confirm previous studies, since feedback did not improve targeted kinetic parameters of the complex motor task in a real-world context.
PMID:24431969
Fortier, Sylvie; Basset, Fabien A; Mbourou, Ginette A; Favérial, Jérôme; Teasdale, Normand
2005-06-01
(a) to examine if kinetic and kinematic parameters of the sprint start could differentiate elite from sub-elite sprinters and (b) to investigate whether providing feedback (FB) about selected parameters could improve the starting block performance of intermediate sprinters over a 6-week training period. Twelve male sprinters, assigned to an elite or a sub-elite group, participated in Experiment 1. Eight intermediate sprinters participated in Experiment 2. All athletes were required to perform three sprint starts at maximum intensity followed by a 10-m run. To detect differences between elite and sub-elite groups, comparisons were made using t-tests for independent samples. Parameters reaching a significant group difference were retained for the linear discriminant analysis (LDA). The LDA yielded four discriminative kinetic parameters. Feedback about these selected parameters was given to sprinters in Experiment 2. For this experiment, data acquisition was divided into three periods. The first six sessions were without specific FB, whereas the following six sessions were enriched by kinetic FB. Finally, athletes underwent a retention session (without FB) 4 weeks after the twelfth session. Even though differences were found in the time to front peak force, the time to rear peak force, and the front peak force in the retention session, the results of the present study showed that providing FB about selected kinetic parameters differentiating elite from sub-elite sprinters did not improve the starting block performance of intermediate sprinters. Key Points: Linear discriminant analysis allows the identification of starting block parameters that differentiate elite from sub-elite athletes. Six weeks of feedback did not alter starting block performance in a training context. The present results failed to confirm previous studies, since feedback did not improve the targeted kinetic parameters of this complex motor task in a real-world context.
Advanced Integrated Display System V/STOL Program Performance Specification. Volume I.
1980-06-01
sensor inputs required before the sensor can be designated acceptable. The reactivation count of each sensor parameter which satisfies its veri... 3.5.2 AIDS Configuration Parameters ... 3.5.3 AIDS Throughput Requirements ... 4 QUALITY ASSURANCE ... lists the adaptation parameters of the AIDS software; these parameters include the throughput and memory requirements of the software. 3.2 SYSTEM
Research on Product Conceptual Design Based on Integrated of TRIZ and HOQ
NASA Astrophysics Data System (ADS)
Xie, Jianmin; Tang, Xiaowo; Shao, Yunfei
Conceptual design largely determines final product quality and market competitiveness. Determining the design parameters, and having an effective method to resolve the contradictions among them, are the keys to success. In this paper, the HOQ (House of Quality) concept is used to determine the product design parameters, and the TRIZ contradiction matrix and inventive principles are then applied to resolve the contradictions among those parameters. Practice has shown that this is an effective method for obtaining the product conceptual design parameters and resolving the contradictions among them.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bignan, G.; Gonnier, C.; Lyoussi, A.
2015-07-01
Research and development on fuel and material behaviour under irradiation is a key issue for sustainable nuclear energy in order to meet specific needs by keeping the best level of safety. These needs mainly deal with a constant improvement of performances and safety in order to optimize the fuel cycle and hence to reach nuclear energy sustainable objectives. A sustainable nuclear energy requires a high level of performances in order to meet specific needs such as: - Pursuing improvement of the performances and safety of present and coming water cooled reactor technologies. This will require a continuous R and D support following a long-term trend driven by the plant life management, safety demonstration, flexibility and economics improvement. Experimental irradiations of structure materials are necessary to anticipate these material behaviours and will contribute to their optimisation. - Upgrading continuously nuclear fuel technology in present and future nuclear power plants to achieve better performances and to optimise the fuel cycle keeping the best level of safety. Fuel evolution for generation II, III and III+ is a key stake requiring developments, qualification tests and safety experiments to ensure the competitiveness and safety: experimental tests exploring the full range of fuel behaviour determine fuel stability limits and safety margins, as a major input for the fuel reliability analysis. To perform such accurate and innovative progress and developments, specific and ad hoc instrumentation, irradiation devices, measurement methods are necessary to be set up inside or beside the material testing reactor (MTR) core. These experiments require beforehand in situ and on line sophisticated measurements to accurately determine different key parameters such as thermal and fast neutron fluxes and nuclear heating in order to precisely monitor and control the conducted assays.
The new Material Testing Reactor JHR (Jules Horowitz Reactor), currently under construction at the CEA Cadarache research centre in the south of France, will represent a major research infrastructure for scientific studies of material and fuel behavior under irradiation. It will also be devoted to medical isotope production. Hence, JHR will offer a real opportunity to perform R and D programs addressing the needs above and will crucially contribute to the selection, optimization and qualification of these innovative materials and fuels. The JHR reactor objectives, principles and main characteristics, together with the specific experimental devices and associated measurement techniques and methodologies, their performances, their limitations and fields of application, will be presented and discussed. (authors)
The role of series ankle elasticity in bipedal walking
Zelik, Karl E.; Huang, Tzu-Wei P.; Adamczyk, Peter G.; Kuo, Arthur D.
2014-01-01
The elastic stretch-shortening cycle of the Achilles tendon during walking can reduce the active work demands on the plantarflexor muscles in series. However, this does not explain why or when this ankle work, whether by muscle or tendon, needs to be performed during gait. We therefore employ a simple bipedal walking model to investigate how ankle work and series elasticity impact economical locomotion. Our model shows that ankle elasticity can use passive dynamics to aid push-off late in single support, redirecting the body's center-of-mass (COM) motion upward. An appropriately timed, elastic push-off helps to reduce dissipative collision losses at contralateral heelstrike, and therefore the positive work needed to offset those losses and power steady walking. Thus, the model demonstrates how elastic ankle work can reduce the total energetic demands of walking, including work required from more proximal knee and hip muscles. We found that the key requirement for using ankle elasticity to achieve economical gait is the proper ratio of ankle stiffness to foot length. Optimal combination of these parameters ensures proper timing of elastic energy release prior to contralateral heelstrike, and sufficient energy storage to redirect the COM velocity. In fact, there exist parameter combinations that theoretically yield collision-free walking, thus requiring zero active work, albeit with relatively high ankle torques. Ankle elasticity also allows the hip to power economical walking by contributing indirectly to push-off. Whether walking is powered by the ankle or hip, ankle elasticity may aid walking economy by reducing collision losses. PMID:24365635
The role of series ankle elasticity in bipedal walking.
Zelik, Karl E; Huang, Tzu-Wei P; Adamczyk, Peter G; Kuo, Arthur D
2014-04-07
The elastic stretch-shortening cycle of the Achilles tendon during walking can reduce the active work demands on the plantarflexor muscles in series. However, this does not explain why or when this ankle work, whether by muscle or tendon, needs to be performed during gait. We therefore employ a simple bipedal walking model to investigate how ankle work and series elasticity impact economical locomotion. Our model shows that ankle elasticity can use passive dynamics to aid push-off late in single support, redirecting the body's center-of-mass (COM) motion upward. An appropriately timed, elastic push-off helps to reduce dissipative collision losses at contralateral heelstrike, and therefore the positive work needed to offset those losses and power steady walking. Thus, the model demonstrates how elastic ankle work can reduce the total energetic demands of walking, including work required from more proximal knee and hip muscles. We found that the key requirement for using ankle elasticity to achieve economical gait is the proper ratio of ankle stiffness to foot length. Optimal combination of these parameters ensures proper timing of elastic energy release prior to contralateral heelstrike, and sufficient energy storage to redirect the COM velocity. In fact, there exist parameter combinations that theoretically yield collision-free walking, thus requiring zero active work, albeit with relatively high ankle torques. Ankle elasticity also allows the hip to power economical walking by contributing indirectly to push-off. Whether walking is powered by the ankle or hip, ankle elasticity may aid walking economy by reducing collision losses. Copyright © 2013 Elsevier Ltd. All rights reserved.
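The collision-loss argument above can be illustrated with the classic point-mass ("simplest walker") result: an impulsive push-off applied just before contralateral heelstrike splits the velocity redirection in two and, for small step angles, costs roughly a quarter of the energy lost when heelstrike alone does the redirecting. The toy calculation below is a sketch under those small-angle rimless-wheel assumptions, not the paper's full model (which additionally includes series ankle elasticity and finite foot length); the speed and step-angle values are illustrative.

```python
import math

def collision_loss(v, alpha):
    """Energy lost per unit mass if heelstrike alone redirects the COM
    velocity through the full step-to-step angle 2*alpha: only the
    component along the new leg survives, so loss = (1/2) v^2 sin^2(2*alpha)."""
    return 0.5 * v**2 * math.sin(2 * alpha)**2

def pushoff_work(v, alpha):
    """Positive work per unit mass for an impulsive push-off applied just
    before heelstrike, splitting the redirection into two halves:
    small-angle result, (1/2) v^2 tan^2(alpha)."""
    return 0.5 * v**2 * math.tan(alpha)**2

v, alpha = 1.25, 0.1          # COM speed (m/s) and step half-angle (rad), illustrative
w_col = collision_loss(v, alpha)
w_po = pushoff_work(v, alpha)
print(w_col / w_po)           # ~4: a well-timed push-off cuts the cost roughly fourfold
```

This factor-of-four advantage is why the timing of elastic energy release relative to contralateral heelstrike matters so much in the model.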
NASA Astrophysics Data System (ADS)
Almehmadi, Fares S.; Chatterjee, Monish R.
2014-12-01
Using intensity feedback, the closed-loop behavior of an acousto-optic hybrid device under profiled beam propagation has been recently shown to exhibit wider chaotic bands potentially leading to an increase in both the dynamic range and sensitivity to key parameters that characterize the encryption. In this work, a detailed examination is carried out vis-à-vis the robustness of the encryption/decryption process relative to parameter mismatch for both analog and pulse code modulation signals, and bit error rate (BER) curves are used to examine the impact of additive white noise. The simulations with profiled input beams are shown to produce a stronger encryption key (i.e., much lower parametric tolerance thresholds) relative to simulations with uniform plane wave input beams. In each case, it is shown that the tolerance for key parameters drops by factors ranging from 10 to 20 times below those for uniform plane wave propagation. Results are shown to be at consistently lower tolerances for secure transmission of analog and digital signals using parameter tolerance measures, as well as BER performance measures for digital signals. These results hold out the promise for considerably greater information transmission security for such a system.
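The BER analysis under additive white noise mentioned above can be sketched generically. Below is a minimal Monte-Carlo bit-error-rate estimate for antipodal binary signaling in additive white Gaussian noise, checked against the closed-form Q-function result; this is a textbook illustration of how such BER curves are produced, not the authors' acousto-optic simulation, and all numeric values are assumptions.

```python
import math
import random

def ber_monte_carlo(snr_amplitude, n_bits=20000, seed=1):
    """Empirical bit error rate for +/-1 signaling with additive Gaussian
    noise of standard deviation sigma = 1/snr_amplitude."""
    rng = random.Random(seed)
    sigma = 1.0 / snr_amplitude
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        received = bit + rng.gauss(0.0, sigma)
        if (received >= 0) != (bit > 0):   # threshold detector at zero
            errors += 1
    return errors / n_bits

def ber_theory(snr_amplitude):
    """Closed form for antipodal signaling: Q(snr) = 0.5*erfc(snr/sqrt(2))."""
    return 0.5 * math.erfc(snr_amplitude / math.sqrt(2))

print(ber_monte_carlo(2.0), ber_theory(2.0))  # both near 0.023
```

Sweeping `snr_amplitude` and plotting the two values against each other is exactly the kind of BER-versus-noise curve the abstract uses to compare encryption robustness.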
Prediction of Geomagnetic Activity and Key Parameters in High-Latitude Ionosphere-Basic Elements
NASA Technical Reports Server (NTRS)
Lyatsky, W.; Khazanov, G. V.
2007-01-01
Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the Space Weather program. Prediction reliability is dependent on the prediction method and elements included in the prediction scheme. Two main elements are a suitable geomagnetic activity index and coupling function -- the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity. The appropriate choice of these two elements is imperative for any reliable prediction model. The purpose of this work was to elaborate on these two elements -- the appropriate geomagnetic activity index and the coupling function -- and investigate the opportunity to improve the reliability of the prediction of geomagnetic activity and other events in the Earth's magnetosphere. The new polar magnetic index of geomagnetic activity and the new version of the coupling function lead to a significant increase in the reliability of predicting the geomagnetic activity and some key parameters, such as cross-polar cap voltage and total Joule heating in high-latitude ionosphere, which play a very important role in the development of geomagnetic and other activity in the Earth s magnetosphere, and are widely used as key input parameters in modeling magnetospheric, ionospheric, and thermospheric processes.
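The abstract does not spell out its new coupling function, so as a point of reference here is the widely used Newell et al. (2007) merging-rate function, dPhi/dt proportional to v^(4/3) * B_T^(2/3) * sin^(8/3)(theta_c/2), where B_T is the transverse IMF magnitude and theta_c the IMF clock angle. This is an illustration of what a coupling function computes from upstream solar wind data, not the version proposed in the study; the input values are made up.

```python
import math

def newell_coupling(v, by, bz):
    """Newell et al. (2007) solar wind-magnetosphere coupling function
    (arbitrary units): v in km/s, IMF By and Bz in nT (GSM)."""
    bt = math.hypot(by, bz)        # transverse IMF magnitude
    theta = math.atan2(by, bz)     # IMF clock angle
    return v**(4 / 3) * bt**(2 / 3) * abs(math.sin(theta / 2))**(8 / 3)

# Southward IMF (Bz < 0) drives strong coupling; purely northward, almost none.
print(newell_coupling(450, 0, -5))   # strong driving
print(newell_coupling(450, 0, +5))   # zero
```

A prediction scheme of the kind described correlates a time history of such a quantity against the chosen geomagnetic activity index.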
40 CFR 761.389 - Testing parameter requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Testing parameter requirements. 761.389 Section 761.389 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC... Under § 761.79(d)(4) § 761.389 Testing parameter requirements. There are no restrictions on the...
40 CFR 761.389 - Testing parameter requirements.

Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Testing parameter requirements. 761.389 Section 761.389 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC... Under § 761.79(d)(4) § 761.389 Testing parameter requirements. There are no restrictions on the...
NASA Astrophysics Data System (ADS)
Nicolis, John S.; Katsikas, Anastassis A.
Collective parameters such as Zipf's law-like statistics, the transinformation, the block entropy and the Markovian character are compared for natural, genetic, musical and artificially generated long texts from generating partitions (alphabets) on homogeneous as well as on multifractal chaotic maps. It appears that minimal requirements for a language at the syntactical level, such as memory, selectivity of a few keywords and broken symmetry in one dimension (polarity), are more or less met by dynamically iterating simple maps or flows, e.g. very simple chaotic hardware. The same selectivity is observed at the semantic level, where the aim is to partition a set of environmental impinging stimuli onto coexisting attractors-categories. Under the regime of pattern recognition and classification, a few key features of a pattern, or a few categories, claim the lion's share of the information stored in this pattern, and practically only these key features are persistently scanned by the cognitive processor. A multifractal attractor model can in principle explain this high selectivity, both at the syntactical and the semantic levels.
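A minimal sketch of the kind of experiment described: iterate a chaotic map, read off a symbol sequence from a binary generating partition, and compute block entropies. The map, partition, and lengths below are hypothetical choices for illustration; the paper compares several maps and alphabets.

```python
import math
from collections import Counter

def logistic_symbols(n, x0=0.4, r=4.0):
    """Binary symbol sequence from the logistic map x -> r*x*(1-x),
    using the generating partition at x = 1/2."""
    x, seq = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append('1' if x >= 0.5 else '0')
    return ''.join(seq)

def block_entropy(seq, n):
    """Shannon entropy (bits) of overlapping length-n blocks of seq."""
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

s = logistic_symbols(20000)
print(block_entropy(s, 1), block_entropy(s, 2))  # ~1 and ~2 bits
```

For the fully chaotic logistic map this partition yields an (approximately) fair Bernoulli sequence, so the block entropy grows by about one bit per symbol; natural-language texts, by contrast, show sub-linear growth because of memory and keyword selectivity.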
Residual stress evaluation of components produced via direct metal laser sintering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kemerling, Brandon; Lippold, John C.; Fancher, Christopher M.
Direct metal laser sintering is an additive manufacturing process which is capable of fabricating three-dimensional components using a laser energy source and metal powder particles. Despite the numerous benefits offered by this technology, the process maturity is low with respect to traditional subtractive manufacturing methods. Relationships between key processing parameters and final part properties are generally lacking and require further development. In this study, residual stresses were evaluated as a function of key process variables. The variables evaluated included laser scan strategy and build plate preheat temperature. Residual stresses were measured experimentally via neutron diffraction and computationally via finite element analysis. Good agreement was shown between the experimental and computational results. Results showed variations in the residual stress profile as a function of laser scan strategy. Compressive stresses were dominant along the build height (z) direction, and tensile stresses were dominant in the x and y directions. Build plate preheating was shown to be an effective method for alleviating residual stress due to the reduction in thermal gradient.
Residual stress evaluation of components produced via direct metal laser sintering
Kemerling, Brandon; Lippold, John C.; Fancher, Christopher M.; ...
2018-03-22
Direct metal laser sintering is an additive manufacturing process which is capable of fabricating three-dimensional components using a laser energy source and metal powder particles. Despite the numerous benefits offered by this technology, the process maturity is low with respect to traditional subtractive manufacturing methods. Relationships between key processing parameters and final part properties are generally lacking and require further development. In this study, residual stresses were evaluated as a function of key process variables. The variables evaluated included laser scan strategy and build plate preheat temperature. Residual stresses were measured experimentally via neutron diffraction and computationally via finite element analysis. Good agreement was shown between the experimental and computational results. Results showed variations in the residual stress profile as a function of laser scan strategy. Compressive stresses were dominant along the build height (z) direction, and tensile stresses were dominant in the x and y directions. Build plate preheating was shown to be an effective method for alleviating residual stress due to the reduction in thermal gradient.
Radiosounding in the planned mission to Phobos
NASA Astrophysics Data System (ADS)
Zakharov, A. V.; Eismont, N. A.; Gotlib, V. M.; Smirnov, V. M.; Yushkova, O. V.; Marchuk, V. N.
2017-09-01
The opportunities to study Phobos' internal structure provided by radio methods are considered in this paper. The necessity of these studies is related to the solution of the problem of the origin of the Martian moons. Radiosounding is one of the most efficient methods of analyzing the internal structure of small space objects and, in particular, that of Phobos. The new Boomerang project, planned under the Federal Space Program of Russia for 2016-2025 within the Expedition-M program aimed at the exploration of Phobos and the delivery of soil samples from its surface to the Earth, as well as the specifics of the ballistic scenario of this expedition, provide a unique opportunity to carry out radioscopy of this space object, to reveal the internal structure of Phobos, and to solve the key problem of its origin. The model of Phobos' internal structure, radiosounding ballistic conditions, analysis of the optimum frequency range of sounding, and key parameters of the device required for the experiment are considered in this paper. The significance of the proposed studies and opportunities for their implementation are discussed.
NASA Astrophysics Data System (ADS)
Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang
2017-10-01
Metering performance is the key parameter of an electronic voltage transformer (EVT) and requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for this key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physical correlation analysis. Exploiting the electrical and physical properties of a substation operating in three-phase symmetry, the principal component analysis method is used to separate the metering deviation caused by primary-side fluctuation from that caused by an EVT anomaly. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing changes in these statistics. The experimental results show that the method accurately monitors the metering deviation of a Class 0.2 EVT. The method demonstrates accurate on-line monitoring of the metering performance of an EVT without a standard voltage transformer.
Tool-assisted rhythmic drumming in palm cockatoos shares key elements of human instrumental music
Heinsohn, Robert; Zdenek, Christina N.; Cunningham, Ross B.; Endler, John A.; Langmore, Naomi E.
2017-01-01
All human societies have music with a rhythmic “beat,” typically produced with percussive instruments such as drums. The set of capacities that allows humans to produce and perceive music appears to be deeply rooted in human biology, but an understanding of its evolutionary origins requires cross-taxa comparisons. We show that drumming by palm cockatoos (Probosciger aterrimus) shares the key rudiments of human instrumental music, including manufacture of a sound tool, performance in a consistent context, regular beat production, repeated components, and individual styles. Over 131 drumming sequences produced by 18 males, the beats occurred at nonrandom, regular intervals, yet individual males differed significantly in the shape parameters describing the distribution of their beat patterns, indicating individual drumming styles. Autocorrelation analyses of the longest drumming sequences further showed that they were highly regular and predictable like human music. These discoveries provide a rare comparative perspective on the evolution of rhythmicity and instrumental music in our own species, and show that a preference for a regular beat can have other origins before being co-opted into group-based music and dance. PMID:28782005
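The regularity analysis reported there can be sketched with the coefficient of variation (CV) of inter-beat intervals: a regular, predictable beat has a low CV, while arrhythmic tapping does not. The synthetic interval data below is purely illustrative and is not the palm cockatoo dataset; the beat period, jitter, and sample sizes are assumptions.

```python
import random
import statistics

def cv(intervals):
    """Coefficient of variation of inter-beat intervals: low for a
    regular, predictable beat; high for arrhythmic tapping."""
    return statistics.stdev(intervals) / statistics.mean(intervals)

rng = random.Random(7)
# "Regular drummer": ~0.5 s beat with small timing jitter (synthetic).
regular = [0.5 + rng.gauss(0.0, 0.02) for _ in range(200)]
# Arrhythmic control: intervals drawn uniformly between 0.1 and 1.0 s.
arrhythmic = [rng.uniform(0.1, 1.0) for _ in range(200)]

print(cv(regular), cv(arrhythmic))  # small vs. large
```

The paper's stronger claim, individual styles, corresponds to different drummers occupying distinct regions of such distributional shape parameters while each remaining low-CV.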
Noy, Dror
2008-01-01
The vast structural and functional information database of photosynthetic enzymes includes, in addition to detailed kinetic records from decades of research on physical processes and chemical reaction-pathways, a variety of high and medium resolution crystal structures of key photosynthetic enzymes. Here, it is examined from an engineer's point of view with the long-term goal of reproducing the key features of natural photosystems in novel biological and non-biological solar-energy conversion systems. This survey reveals that the basic physics of the transfer processes, namely, the time constraints imposed by the rates of incoming photon flux and the various decay processes allow for a large degree of tolerance in the engineering parameters. Furthermore, the requirements to guarantee energy and electron transfer rates that yield high efficiency in natural photosystems are largely met by control of distance between chromophores and redox cofactors. This underlines a critical challenge for projected de novo designed constructions, that is, the control of spatial organization of cofactor molecules within dense array of different cofactors, some well within 1 nm from each other.
Flexible graphene transistors for recording cell action potentials
NASA Astrophysics Data System (ADS)
Blaschke, Benno M.; Lottner, Martin; Drieschner, Simon; Bonaccini Calia, Andrea; Stoiber, Karolina; Rousseau, Lionel; Lissourges, Gaëlle; Garrido, Jose A.
2016-06-01
Graphene solution-gated field-effect transistors (SGFETs) are a promising platform for the recording of cell action potentials due to the intrinsic high signal amplification of graphene transistors. In addition, graphene technology fulfills important key requirements for in-vivo applications, such as biocompability, mechanical flexibility, as well as ease of high density integration. In this paper we demonstrate the fabrication of flexible arrays of graphene SGFETs on polyimide, a biocompatible polymeric substrate. We investigate the transistor’s transconductance and intrinsic electronic noise which are key parameters for the device sensitivity, confirming that the obtained values are comparable to those of rigid graphene SGFETs. Furthermore, we show that the devices do not degrade during repeated bending and the transconductance, governed by the electronic properties of graphene, is unaffected by bending. After cell culture, we demonstrate the recording of cell action potentials from cardiomyocyte-like cells with a high signal-to-noise ratio that is higher or comparable to competing state of the art technologies. Our results highlight the great capabilities of flexible graphene SGFETs in bioelectronics, providing a solid foundation for in-vivo experiments and, eventually, for graphene-based neuroprosthetics.
NASA Technical Reports Server (NTRS)
1976-01-01
Additional design and analysis data are provided to supplement the results of the two parallel design study efforts. The key results of the three supplemental tasks investigated are: (1) The velocity duration profile has a significant effect in determining the optimum wind turbine design parameters and the energy generation cost. (2) Modest increases in capacity factor can be achieved with small increases in energy generation costs and capital costs. (3) Reinforced concrete towers that are esthetically attractive can be designed and built at a cost comparable to those for steel truss towers. The approach used, method of analysis, assumptions made, design requirements, and the results for each task are discussed in detail.
Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack
NASA Astrophysics Data System (ADS)
Nalegaev, S. S.; Petrov, N. V.
Known techniques for breaking Double Random Phase Encoding (DRPE), which bypass the resource-intensive brute-force method, require at least two conditions: the attacker knows the encryption algorithm, and there is access to pairs of source and encoded images. Our numerical results show that for accurate recovery by a numerical brute-force attack, one needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments with optical data encryption using DRPE with digital holography, we have proposed four simple criteria for guaranteed and accurate data recovery. These criteria can be applied if grayscale, binary (including QR-codes) or color images are used as a source.
NASA Astrophysics Data System (ADS)
Kishkovich, Oleg P.; Bolgov, Dennis; Goodwin, William
1999-06-01
In this paper, the authors discuss the requirements for chemical air filtration systems used in conjunction with modern DUV photolithography equipment. Among the topics addressed are the scope of pollutants, their respective internal and external sources, and an overview of different types of filtration technologies currently in use. Key filtration parameters, including removal efficiency, service life, and spill protection capacity, are discussed and supported by actual data reflecting the total molecular base concentration in operational IC manufacturing facilities. The authors also describe a time-accelerated testing procedure for comparing and evaluating different filtration technologies and designs, and demonstrate how this three-day test procedure can reliably predict an effective filter service life of up to ten years.
Key Tasks of Science in Improving Effectiveness of Hard Coal Production in Poland
NASA Astrophysics Data System (ADS)
Dubiński, Józef; Prusek, Stanisław; Turek, Marian
2017-09-01
The article presents an array of specific issues regarding the employed technology and operational efficiency of mining activities, which could and should become the subject of conducted scientific research. Given the circumstances of strong market competition and increasing requirements concerning environmental conditions, both in terms of conducted mining activities and produced coal quality parameters, it is imperative to develop and implement innovative solutions regarding the employed production technology, the safety of work conducted under the conditions of increasing natural hazards, as well as the mining enterprise management systems that enable its effective functioning. The article addresses the last group of issues in the most detail, particularly the possibility of rationally reducing the costs of conducted operations.
Aerobots as a Ubiquitous Part of Society
NASA Technical Reports Server (NTRS)
Young, Larry A.
2006-01-01
Small autonomous aerial robots (aerobots) have the potential to make significant positive contributions to modern society. Aerobots of various vehicle-types - CTOL, STOL, VTOL, and even possibly LTA - will be a part of a new paradigm for the distribution of goods and services. Aerobots as a class of vehicles may test the boundaries of aircraft design. New system analysis and design tools will be required in order to account for the new technologies and design parameters/constraints for such vehicles. The analysis tools also provide new approaches to defining/assessing technology goals and objectives and the technology portfolio necessary to accomplish those goals and objectives. Using the aerobot concept as an illustrative test case, key attributes of these analysis tools are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinthavali, Madhu Sudhan; Wang, Zhiqiang
This paper presents a detailed parametric sensitivity analysis for a wireless power transfer (WPT) system in electric vehicle application. Specifically, several key parameters for sensitivity analysis of a series-parallel (SP) WPT system are derived first based on analytical modeling approach, which includes the equivalent input impedance, active / reactive power, and DC voltage gain. Based on the derivation, the impact of primary side compensation capacitance, coupling coefficient, transformer leakage inductance, and different load conditions on the DC voltage gain curve and power curve are studied and analyzed. It is shown that the desired power can be achieved by just changing frequency or voltage depending on the design value of coupling coefficient. However, in some cases both have to be modified in order to achieve the required power transfer.
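The quantities the abstract says are derived (equivalent input impedance and voltage gain of a series-parallel compensated link, as functions of frequency and coupling) can be computed from a two-mesh phasor model. The sketch below uses made-up component values, since the paper's actual parameters are not given in the abstract; it is a generic SP-topology illustration, not the authors' model.

```python
import math

def sp_wpt_gain(f, L1=100e-6, L2=100e-6, k=0.2, R1=0.1, Rload=10.0, f0=85e3):
    """Voltage gain Vout/Vin and input impedance of a series-parallel (SP)
    compensated WPT link at frequency f, from a two-mesh phasor model.
    Component values are illustrative assumptions, both coils tuned at f0."""
    w, w0 = 2 * math.pi * f, 2 * math.pi * f0
    C1, C2 = 1 / (w0**2 * L1), 1 / (w0**2 * L2)   # compensation capacitors
    M = k * math.sqrt(L1 * L2)                     # mutual inductance
    Zc2 = 1 / (1j * w * C2)
    Zpar = Rload * Zc2 / (Rload + Zc2)             # C2 in parallel with the load
    Z1 = R1 + 1j * w * L1 + 1 / (1j * w * C1)      # primary mesh impedance
    Z2 = 1j * w * L2 + Zpar                        # secondary mesh impedance
    Zin = Z1 + (w * M)**2 / Z2                     # input impedance with reflection
    I1 = 1.0 / Zin                                 # primary current for Vin = 1 V
    I2 = 1j * w * M * I1 / Z2                      # induced secondary current
    return I2 * Zpar, Zin                          # (Vout phasor, Zin)

gain_res, _ = sp_wpt_gain(85e3)
gain_off, _ = sp_wpt_gain(40e3)
print(abs(gain_res), abs(gain_off))  # gain is much larger near the tuned frequency
```

Sweeping `f`, `k`, or `Rload` in this model reproduces the kind of gain-curve sensitivity study the paper describes.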
A review of failure models for unidirectional ceramic matrix composites under monotonic loads
NASA Technical Reports Server (NTRS)
Tripp, David E.; Hemann, John H.; Gyekenyesi, John P.
1989-01-01
Ceramic matrix composites offer significant potential for improving the performance of turbine engines. In order to achieve their potential, however, improvements in design methodology are needed. In the past most components using structural ceramic matrix composites were designed by trial and error since the emphasis of feasibility demonstration minimized the development of mathematical models. To understand the key parameters controlling response and the mechanics of failure, the development of structural failure models is required. A review of short term failure models with potential for ceramic matrix composite laminates under monotonic loads is presented. Phenomenological, semi-empirical, shear-lag, fracture mechanics, damage mechanics, and statistical models for the fast fracture analysis of continuous fiber unidirectional ceramic matrix composites under monotonic loads are surveyed.
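Of the model families surveyed, the statistical branch typically builds on two-parameter Weibull weakest-link statistics for fast-fracture reliability. A minimal sketch follows; the characteristic strength and Weibull modulus values are hypothetical, not values from the review.

```python
import math

def weibull_failure_probability(stress, sigma0, m):
    """Two-parameter Weibull weakest-link failure probability:
    Pf = 1 - exp(-(stress / sigma0)**m), where sigma0 is the
    characteristic strength and m is the Weibull modulus."""
    return 1.0 - math.exp(-(stress / sigma0) ** m)

# Hypothetical ceramic: characteristic strength 300 MPa, modulus m = 10.
for s in (150.0, 300.0, 450.0):
    print(s, weibull_failure_probability(s, 300.0, 10))
# At stress = sigma0, Pf = 1 - 1/e (about 0.632) by construction.
```

Higher Weibull modulus means a narrower strength distribution, which is one reason m is a key design parameter for ceramic components.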
Aeolus high energy UV Laser wavelength measurement and frequency stability analysis
NASA Astrophysics Data System (ADS)
Mondin, Linda; Bravetti, Paolo
2017-11-01
The Aeolus mission is part of ESA's Earth Explorer program. The goal of the mission is to determine the first global wind data set in near real time to improve numerical weather prediction models. The only instrument on board Aeolus, Aladin, is a backscatter wind LIDAR in the ultraviolet (UV) frequency domain. Aeolus is a frequency-limited mission, inasmuch as it relies on measuring the frequency shift of the backscattered signal in order to deduce the wind velocity. As such, the frequency stability of the LIDAR laser source is a key parameter for this mission. In the following, the characterization of the laser frequency stability, reproducibility and agility in vacuum is reported and compared to the mission requirements.
Post-processing procedure for industrial quantum key distribution systems
NASA Astrophysics Data System (ADS)
Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey
2016-08-01
We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
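As a toy illustration of the parameter-estimation step (not the industrial procedure described above), one can estimate the quantum bit error rate (QBER) from a disclosed sample and bound the asymptotic secret-key fraction with the textbook BB84 formula r = 1 - 2h(Q); real post-processing adds error-correction inefficiency and finite-key corrections.

```python
import math

# Sketch of the parameter-estimation step: given an estimated QBER Q,
# bound the asymptotic secret-key fraction with the textbook BB84 formula
# r = 1 - 2*h(Q). This is a simplification, not the industrial procedure.
def binary_entropy(q: float) -> float:
    """Binary Shannon entropy h(q) in bits."""
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def key_fraction(qber: float) -> float:
    """Asymptotic BB84 secret-key fraction, clamped at zero."""
    return max(0.0, 1.0 - 2.0 * binary_entropy(qber))
```

The fraction reaches zero near the well-known ~11% QBER threshold, which is why QBER estimation is a gating step of the post-processing chain.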
Mitochondria are key regulators of cellular energy homeostasis and may play a key role in the mechanisms of neurodegenerative disorders and chemical induced neurotoxicity. However, mitochondrial bioenergetic parameters have not been systematically evaluated within multiple brain ...
The ESA Nanosatellite Beacons for Space Weather Monitoring Study
NASA Astrophysics Data System (ADS)
Hapgood, M.; Eckersley, S.; Lundin, R.; Kluge, M.
2008-09-01
This paper will present final results from this ESA-funded study, which has investigated how current and emerging concepts for nanosats may be used to monitor space weather conditions and provide improved access to data needed for space weather services. The study has reviewed requirements developed in previous ESA space weather studies to establish a set of service and measurement requirements appropriate to nanosat solutions. The output is conveniently represented as a set of five distinct classes of nanosat constellations, each in a different orbit location and each addressing a specific group of measurement requirements. One example driving requirement for several of the constellations was the need for real-time data reception. Given this background, the study then iterated a set of instrument and spacecraft solutions to address each of the nanosat constellations derived from the requirements. Indeed, iteration has proved to be a critical aspect of the study. The instrument solutions have driven a refinement of requirements through assessment of whether or not the physical parameters to be measured dictate instrument components too large for a nanosat. In addition, the study has also reviewed miniaturization trends for instruments relevant to space weather monitoring by nanosats, looking at the near-, mid- and far-term timescales. Within the spacecraft solutions the study reviewed key technology trends relevant to space weather monitoring by nanosats: (a) micro- and nano-technology devices for spacecraft communications, navigation, propulsion and power, and (b) development and flight experience with nanosats for science and for engineering demonstration. These requirements and solutions were then subject to an iterative system and mission analysis including key mission design issues (e.g.
launch/transfer, mission geometry, instrument accommodation, numbers of spacecraft, communications architectures, de-orbit, nanosat reliability and constellation robustness) and the impact of nanosat fundamental limitations (e.g. mass, volume/size, power, communications). As a result, top-level Strawman mission concepts were developed for each constellation, and ROM costs were derived for programme development, operation and maintenance over a ten-year period. Nanosat reliability and constellation robustness were shown to be key drivers of mission costs. In parallel with the mission analysis, the study results have been reviewed to identify key issues that determine the prospects for a space weather nanosat programme and to make recommendations on measures to enable implementation of such a programme. As a follow-on to this study, a student MSc project was initiated by Astrium at Cranfield University to analyse a potential space weather precursor demonstration mission in GTO (one of the recommendations from this ESA study), comprising a reduced constellation of nanosats launched on ASAP or by some other low-cost method. The demonstration would include: 1/ low-cost multiple-manufacture techniques for a fully industrial nanosat constellation programme; 2/ real-time datalinks and a fully operational mission for space weather; 3/ miniaturised payloads that fit in a nanosat for space weather monitoring; 4/ other possible demonstrations of advanced technology. The aim was to comply with ESA demonstration mission (i.e. PROBA-type) requirements and to be representative on issues such as cost and risk.
Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin
2018-02-19
At present, most measurement-device-independent quantum key distribution (MDI-QKD) implementations are based on weak coherent sources (WCS) and are limited in transmission distance under realistic experimental conditions, e.g., when finite-size-key effects are considered. Hence, in this paper we propose a new biased decoy-state scheme using heralded single-photon sources (HSPS) for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter estimation techniques. Compared with former schemes with WCS or HSPS, after implementing full parameter optimizations, our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.
A simple method for simulating wind profiles in the boundary layer of tropical cyclones
Bryan, George H.; Worsnop, Rochelle P.; Lundquist, Julie K.; ...
2016-11-01
A method to simulate characteristics of wind speed in the boundary layer of tropical cyclones in an idealized manner is developed and evaluated. The method can be used in a single-column modelling set-up with a planetary boundary-layer parametrization, or within large-eddy simulations (LES). The key step is to include terms in the horizontal velocity equations representing advection and centrifugal acceleration in tropical cyclones that occur on scales larger than the domain size. Compared to other recently developed methods, which require two input parameters (a reference wind speed, and radius from the centre of a tropical cyclone), this new method also requires a third input parameter: the radial gradient of reference wind speed. With the new method, simulated wind profiles are similar to composite profiles from dropsonde observations; in contrast, a classic Ekman-type method tends to overpredict inflow-layer depth and magnitude, and two recently developed methods for tropical cyclone environments tend to overpredict near-surface wind speed. When used in LES, the new technique produces vertical profiles of total turbulent stress and estimated eddy viscosity that are similar to values determined from low-level aircraft flights in tropical cyclones. Lastly, temporal spectra from LES produce an inertial subrange for frequencies ≳0.1 Hz, but only when the horizontal grid spacing is ≲20 m.
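One way to see where the three input parameters enter is to write the single-column horizontal momentum equations for an axisymmetric vortex in gradient-wind balance. The following form is a hedged sketch consistent with slab boundary-layer models of tropical cyclones, not necessarily the authors' exact equations:

```latex
% Sketch only: axisymmetric vortex, gradient-wind-balanced environment.
\frac{\partial u}{\partial t} = f v + \frac{v^{2}}{R}
    - \left( f V + \frac{V^{2}}{R} \right) + F_u , \qquad
\frac{\partial v}{\partial t} = - u \left( f + \frac{v}{R}
    + \frac{\partial V}{\partial r} \right) + F_v .
```

Here V is the reference wind speed at radius R, the bracketed term in the first equation is the environmental radial pressure-gradient acceleration implied by gradient-wind balance, the radial gradient ∂V/∂r is the third input parameter, and F_u, F_v denote the turbulent tendencies supplied by the boundary-layer scheme or resolved by the LES.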
The value of swarm data for practical modeling of plasma devices
NASA Astrophysics Data System (ADS)
Napartovich, A. P.; Kochetov, I. V.
2011-04-01
The non-thermal plasma is a key component in gas lasers, waste gas cleaners, ozone generators, plasma igniters, flame holders, flow control in high-speed aerodynamics and other applications. The specific feature of non-thermal plasma is its high sensitivity to variations in governing parameters (gas composition, pressure, pulse duration, and the E/N parameter). The reactivity of the plasma is due to the appearance of atoms and chemical radicals. Efficient production of chemically active species requires a high average electron energy, which is controlled by the balance between gain from the electric field and loss in inelastic collisions. In weakly ionized plasma the electron energy distribution function is far from Maxwellian and must be found numerically for the specified conditions. Numerical modeling of processes in plasma technologies requires vast databases of electron scattering cross sections. The only reliable criterion for evaluating the validity of a set of cross sections for a particular species is correct prediction of the electron transport and kinetic coefficients measured in swarm experiments. This criterion is traditionally used to improve experimentally measured cross sections, as suggested earlier by Phelps. A set of cross sections subjected to this procedure is called a self-consistent set. Nowadays, such reliable self-consistent sets are known for many species. Problems encountered in implementing the fitting procedure, and examples of its successful application, are described in the paper.
Papantoniou Ir, Ioannis; Chai, Yoke Chin; Luyten, Frank P; Schrooten Ir, Jan
2013-08-01
The incorporation of Quality-by-Design (QbD) principles in tissue-engineering bioprocess development toward clinical use will ensure that manufactured constructs possess the prerequisite quality characteristics, addressing emerging regulatory requirements and ensuring functional in vivo behavior. In this work, the QbD principles were applied to a manufacturing process step for the in vitro production of osteogenic three-dimensional (3D) hybrid scaffolds that involves cell matrix deposition on a 3D titanium (Ti) alloy scaffold. An osteogenic cell source (human periosteum-derived cells) cultured in a bioinstructive medium was used to functionalize regular Ti scaffolds in a perfusion bioreactor, resulting in an osteogenic hybrid carrier. A two-level, three-factor fractional factorial design of experiments was employed to explore a range of production-relevant process conditions by simultaneously changing value levels of the following parameters: flow rate (0.5–2 mL/min), cell culture duration (7–21 days), and cell-seeding density (1.5×10³–3×10³ cells/cm²). This approach allowed evaluation of the individual impact of the aforementioned process parameters on key quality attributes of the produced hybrids, such as collagen production, mineralization level, and cell number. The use of a fractional factorial design helped create a design space in which hybrid scaffolds of predefined quality attributes may be robustly manufactured while minimizing the number of required experiments.
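A two-level, three-factor half-fraction design of the kind described can be sketched as follows. The generator C = AB and the assignment of factors to design columns are assumptions of this illustration, not the authors' published run table; only the level ranges come from the abstract.

```python
from itertools import product

# Illustrative 2^(3-1) fractional factorial design with generator C = A*B
# (defining relation I = ABC, so main effects are confounded with two-way
# interactions). Level values are the ranges quoted in the abstract; the
# factor-to-column assignment is an assumption of this sketch.
levels = {
    "flow_mL_min":    {-1: 0.5,   +1: 2.0},
    "duration_d":     {-1: 7,     +1: 21},
    "seed_cells_cm2": {-1: 1.5e3, +1: 3.0e3},
}

def half_fraction():
    """Return the four runs of the half-fraction design."""
    runs = []
    for a, b in product((-1, +1), repeat=2):
        c = a * b  # third factor set by the generator C = A*B
        runs.append({
            "flow_mL_min":    levels["flow_mL_min"][a],
            "duration_d":     levels["duration_d"][b],
            "seed_cells_cm2": levels["seed_cells_cm2"][c],
        })
    return runs
```

Four runs instead of the full factorial's eight is exactly the "minimizing the number of required experiments" trade-off the abstract mentions, at the cost of confounding main effects with two-way interactions.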
Bellasio, Chandra; Beerling, David J; Griffiths, Howard
2016-06-01
The higher photosynthetic potential of C4 plants has led to extensive research over the past 50 years, including studies of C4-dominated natural biomes and crops such as maize, and evaluation of the transfer of C4 traits into C3 lineages. Photosynthetic gas exchange can be measured in air or in a 2% oxygen mixture using readily available commercial gas exchange and modulated PSII fluorescence systems. Interpretation of these data, however, requires an understanding (or the development) of various modelling approaches, which limits their use by non-specialists. In this paper we present an accessible summary of the theory behind the analysis and derivation of C4 photosynthetic parameters, and provide a freely available Excel Fitting Tool (EFT), making rigorous C4 data analysis accessible to a broader audience. Outputs include those defining C4 photochemical and biochemical efficiency, the rate of photorespiration, bundle sheath conductance to CO2 diffusion, and the in vivo biochemical constants for PEP carboxylase. The EFT compares several methodological variants proposed by different investigators, allowing users to choose the level of complexity required to interpret data. We provide a complete analysis of gas exchange data on maize (as a model C4 organism and key global crop) to illustrate the approaches, their analysis and interpretation. © 2016 John Wiley & Sons Ltd.
A Novel Method for Precise Onboard Real-Time Orbit Determination with a Standalone GPS Receiver.
Wang, Fuhong; Gong, Xuewen; Sang, Jizhang; Zhang, Xiaohong
2015-12-04
Satellite remote sensing systems require accurate, autonomous and real-time orbit determination (RTOD) for geo-referencing. Onboard Global Positioning System (GPS) receivers have widely been used to undertake such tasks. In this paper, a novel RTOD method achieving decimeter precision using GPS carrier phases, required by China's HY2A and ZY3 missions, is presented. A key to the algorithm's success is the introduction of a new parameter, termed pseudo-ambiguity, which combines the phase ambiguity with the orbit and clock offset errors of the GPS broadcast ephemeris to absorb a large part of the combined error. Based on analysis of the characteristics of the orbit and clock offset errors, the pseudo-ambiguity can be modeled as a random walk and estimated in an extended Kalman filter. Experiments processing real data from HY2A and ZY3, simulating onboard operational scenarios of these two missions, are performed using the developed software SATODS. Results demonstrate that position and velocity accuracies (3D RMS) of 0.2–0.4 m and 0.2–0.4 mm/s, respectively, are achieved using dual-frequency carrier phases for HY2A, with slightly worse results for ZY3. These results show it is feasible to obtain orbit accuracy at the decimeter level of 3–5 dm for position and 0.3–0.5 mm/s for velocity with this RTOD method.
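The random-walk modeling idea can be illustrated with a minimal scalar Kalman filter. This is a sketch of the general technique only, not the SATODS implementation: the state here stands in for a slowly drifting parameter (as the pseudo-ambiguity does in the paper), and the noise values are made up.

```python
# Minimal sketch (not the SATODS implementation): a parameter modeled as a
# random walk, estimated by a scalar Kalman filter. In the paper's scheme
# the analogous state is the pseudo-ambiguity, which absorbs the phase
# ambiguity plus broadcast-ephemeris orbit/clock errors. q and r are
# illustrative noise variances, not values from the paper.
def kalman_random_walk(measurements, q=1e-4, r=0.01):
    x, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q               # predict: random walk adds process noise q
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with the measurement residual
        p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

The random-walk process noise q is the tuning knob: it lets the filter track the slow drift of the combined error without treating the parameter as a constant.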
A Simple Method for Simulating Wind Profiles in the Boundary Layer of Tropical Cyclones
NASA Astrophysics Data System (ADS)
Bryan, George H.; Worsnop, Rochelle P.; Lundquist, Julie K.; Zhang, Jun A.
2017-03-01
A method to simulate characteristics of wind speed in the boundary layer of tropical cyclones in an idealized manner is developed and evaluated. The method can be used in a single-column modelling set-up with a planetary boundary-layer parametrization, or within large-eddy simulations (LES). The key step is to include terms in the horizontal velocity equations representing advection and centrifugal acceleration in tropical cyclones that occur on scales larger than the domain size. Compared to other recently developed methods, which require two input parameters (a reference wind speed, and radius from the centre of a tropical cyclone), this new method also requires a third input parameter: the radial gradient of reference wind speed. With the new method, simulated wind profiles are similar to composite profiles from dropsonde observations; in contrast, a classic Ekman-type method tends to overpredict inflow-layer depth and magnitude, and two recently developed methods for tropical cyclone environments tend to overpredict near-surface wind speed. When used in LES, the new technique produces vertical profiles of total turbulent stress and estimated eddy viscosity that are similar to values determined from low-level aircraft flights in tropical cyclones. Temporal spectra from LES produce an inertial subrange for frequencies ≳ 0.1 Hz, but only when the horizontal grid spacing is ≲ 20 m.
Code of Federal Regulations, 2010 CFR
2010-07-01
... must I collect with my continuous parameter monitoring systems and is this requirement enforceable? 62... with my continuous parameter monitoring systems and is this requirement enforceable? (a) Where continuous parameter monitoring systems are used, obtain 1-hour arithmetic averages for three parameters: (1...
Code of Federal Regulations, 2011 CFR
2011-07-01
... must I collect with my continuous parameter monitoring systems and is this requirement enforceable? 62... with my continuous parameter monitoring systems and is this requirement enforceable? (a) Where continuous parameter monitoring systems are used, obtain 1-hour arithmetic averages for three parameters: (1...
FIELD MEASUREMENT OF DISSOLVED OXYGEN: A COMPARISON OF TECHNIQUES
The measurement and interpretation of geochemical redox parameters are key components of ground water remedial investigations. Dissolved oxygen (DO) is perhaps the most robust geochemical parameter in redox characterization; however, recent work has indicated a need for proper da...
Investigation of parameters affecting treatment time in MRI-guided transurethral ultrasound therapy
NASA Astrophysics Data System (ADS)
N'Djin, W. A.; Burtnyk, M.; Chopra, R.; Bronskill, M. J.
2010-03-01
MRI-guided transurethral ultrasound therapy shows promise for minimally invasive treatment of localized prostate cancer. Real-time MR temperature feedback enables 3D control of thermal therapy over an accurately defined region within the prostate. Previous in vivo canine studies showed the feasibility of this method using transurethral planar transducers. The aim of this simulation study was to reduce the procedure time while maintaining treatment accuracy by investigating new combinations of treatment parameters. A numerical model was used to simulate a multi-element heating applicator rotating inside the urethra in 10 human prostates. Acoustic power and rotation rate were varied based on temperature feedback in the prostate. Several parameters were investigated for improving the treatment time. Maximum acoustic power and rotation rate were optimized interdependently as a function of prostate radius and transducer operating frequency, while avoiding temperatures >90 °C in the prostate. Other trials were performed on each parameter separately, with the other parameter fixed. The concept of dual-frequency transducers was studied, using the fundamental frequency or the 3rd harmonic component depending on the prostate radius. The maximum usable acoustic power decreased as a function of prostate radius and frequency. Decreasing the frequency (from 9.7 to 3.0 MHz) or increasing the power (from 10 to 20 W/cm²) led to treatment times shorter by up to 50% under appropriate conditions. Dual-frequency configurations, while helpful, tended to have less impact on treatment times. Treatment accuracy was maintained, and critical adjacent tissues such as the rectal wall remained protected. The interdependence between power and frequency may require integrating multi-parametric functions into the controller for future optimizations. As a first approach, however, even slight modifications of key parameters can be sufficient to reduce treatment time.
An empirical-statistical model for laser cladding of Ti-6Al-4V powder on Ti-6Al-4V substrate
NASA Astrophysics Data System (ADS)
Nabhani, Mohammad; Razavi, Reza Shoja; Barekat, Masoud
2018-03-01
In this article, Ti-6Al-4V powder alloy was directly deposited on a Ti-6Al-4V substrate using the laser cladding process. In this process, key parameters such as laser power (P), laser scanning rate (V) and powder feeding rate (F) play important roles. Using linear regression analysis, this paper develops empirical-statistical relations between these key parameters, expressed as a combined parameter (P^α·V^β·F^γ), and the geometrical characteristics of single clad tracks (i.e. clad height, clad width, penetration depth, wetting angle, and dilution). The results indicated that the clad width depended linearly on P·V^(-1/3), with powder feeding rate having no effect on it. The dilution was controlled by the combined parameter V·F^(-1/2), with laser power a dispensable factor. However, laser power was the dominant factor for the clad height, penetration depth, and wetting angle, which were proportional to P·V^(-1)·F^(1/4), P·V·F^(-1/8), and P^(3/4)·V^(-1)·F^(-1/4), respectively. Based on the correlation coefficients (R > 0.9) and analysis of residuals, it was confirmed that these empirical-statistical relations were in good agreement with the measured values of single clad tracks. Finally, these relations led to the design of a processing map that can predict the geometrical characteristics of single clad tracks from the key parameters.
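The fitting approach, linear regression on logarithms to recover power-law exponents, can be sketched as follows. The coefficient value, the sample points, and the use of an exact three-point solve are illustrative assumptions, not the paper's data or procedure.

```python
import math

# Illustrative only: recovering the exponents of a combined parameter
# P^a * V^b by linear regression on logarithms, ln W = ln k + a ln P + b ln V.
# The coefficient k = 2.0 and the sample points are invented for the demo.
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_exponents(samples):
    """samples: list of (P, V, W); solve ln W = ln k + a ln P + b ln V exactly."""
    A = [[1.0, math.log(P), math.log(V)] for P, V, _ in samples[:3]]
    b = [math.log(W) for _, _, W in samples[:3]]
    return solve3(A, b)  # returns [ln k, a, b]

# Noiseless synthetic data following the paper's clad-width form W ~ P * V^(-1/3):
data = [(P, V, 2.0 * P * V ** (-1.0 / 3.0)) for P, V in [(150, 4), (200, 6), (300, 10)]]
lnk, a_exp, b_exp = fit_exponents(data)
```

With real, noisy measurements one would use least squares over all runs rather than an exact three-point solve; the log-linearization step is the same.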
Physics design of the injector source for ITER neutral beam injector (invited).
Antoni, V; Agostinetti, P; Aprile, D; Cavenago, M; Chitarin, G; Fonnesu, N; Marconato, N; Pilan, N; Sartori, E; Serianni, G; Veltri, P
2014-02-01
Two Neutral Beam Injectors (NBI) are foreseen to provide a substantial fraction of the heating power necessary to ignite thermonuclear fusion reactions in ITER. The development of the NBI system at unprecedented parameters (40 A of negative ion current accelerated up to 1 MV) requires the realization of a full-scale prototype, to be tested and optimized at the Test Facility under construction in Padova (Italy). The beam source is the key component of the system, and the design of the multi-grid accelerator is the goal of a multi-national collaborative effort. In particular, beam steering is a challenging aspect, being a trade-off between the requirements of the optics and real grids of finite thickness subject to thermo-mechanical constraints from the cooling needs and the presence of permanent magnets. In the paper, a review of the accelerator physics and an overview of the whole R&D physics program aimed at the development of the injector source are presented.
NASA Technical Reports Server (NTRS)
Foldes, P.
1986-01-01
The instrumentation problems associated with the measurement of soil moisture with a meaningful spatial and temperature resolution at a global scale are addressed. For this goal only medium-term, available, affordable technology is considered. The study, while limited in scope, utilizes a large-scale antenna structure which is presently being developed as an experimental model. The interface constraints presented by a single Space Transportation System (STS) flight are assumed. The methodology consists of the following steps: review the science requirements; analyze the effects of these requirements; present basic system engineering considerations and trade-offs related to orbit parameters, number of spacecraft and their lifetime, observation angles, beamwidth, crossover and swath, coverage percentage, beam quality and resolution, instrument quantities, and integration time; bracket the key system characteristics; and develop an electromagnetic design of the antenna-passive radiometer system. Several aperture division combinations and feed array concepts are investigated to achieve maximum feasible performance within the stated STS constraints.
Short-term storage allocation in a filmless hospital
NASA Astrophysics Data System (ADS)
Strickland, Nicola H.; Deshaies, Marc J.; Reynolds, R. Anthony; Turner, Jonathan E.; Allison, David J.
1997-05-01
Optimizing limited short term storage (STS) resources requires gradual, systematic changes, monitored and modified within an operational PACS environment. Optimization of the centralized storage requires a balance of exam numbers and types in STS to minimize lengthy retrievals from long term archive. Changes to STS parameters and work procedures were made while monitoring the effects on resource allocation by analyzing disk space temporally. Proportions of disk space allocated to each patient category on STS were measured to approach the desired proportions in a controlled manner. Key factors for STS management were: (1) sophisticated exam prefetching algorithms: HIS/RIS-triggered, body part-related and historically-selected, and (2) a 'storage onion' design allocating various exam categories to layers with differential deletion protection. Hospitals planning for STS space should consider the needs of radiology, wards, outpatient clinics and clinicoradiological conferences for new and historical exams; desired on-line time; and potential increase in image throughput and changing resources, such as an increase in short term storage disk space.
Advanced integrated life support system update
NASA Technical Reports Server (NTRS)
Whitley, Phillip E.
1994-01-01
The Advanced Integrated Life Support System Program (AILSS) is an advanced development effort to integrate the life support and protection requirements using the U.S. Navy's fighter/attack mission as a starting point. The goal of AILSS is to optimally combine protection from altitude, acceleration, chemical/biological agent, and thermal environment (hot, cold, and cold water immersion) stress with mission enhancement through improved restraint, night vision, and head-mounted reticules and displays to ensure mission capability. The primary emphasis to date has been to establish garment design requirements and trade-offs for protection. Here the garment and the human interface are treated as a system. Twelve state-of-the-art concepts from government and industry were evaluated for design versus performance. On the basis of a combination of centrifuge data, thermal manikin data, thermal modeling, and mobility studies, some key design parameters have been determined. Future efforts will concentrate on the integration of protection through garment design and the use of a single-layer, multiple-function concept to streamline the garment system.
Performance Analysis for the New g-2 Experiment at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stratakis, Diktys; Convery, Mary; Crmkovic, J.
2016-06-01
The new g-2 experiment at Fermilab aims to measure the muon anomalous magnetic moment to a precision of ±0.14 ppm - a fourfold improvement over the 0.54 ppm precision obtained in the BNL E821 g-2 experiment. Achieving this goal requires the delivery of highly polarized 3.094 GeV/c muons with a narrow ±0.5% Δp/p acceptance to the g-2 storage ring. In this study, we describe a muon capture and transport scheme that should meet this requirement. First, we present the conceptual design of our proposed scheme and describe its basic features. Then, we detail its performance numerically by simulating the pion production in the g-2 production target, the muon collection by the downstream beamline optics, as well as the beam polarization and spin-momentum correlation up to the storage ring. The sensitivity of the proposed channel's performance to key parameters such as magnet apertures and magnet positioning errors is analyzed.
Actuators for a space manipulator
NASA Technical Reports Server (NTRS)
Chun, W.; Brunson, P.
1987-01-01
The robotic manipulator can be decomposed into distinct subsystems. One area of particular interest among the mechanical subsystems is electromechanical actuators (or drives). A drive is defined as a motor with an appropriate transmission. An overview is given of existing, as well as state-of-the-art, drive systems. The scope is limited to space applications. A design philosophy and adequate requirements are the initial steps in designing a space-qualified actuator. The focus is on the d-c motor in conjunction with several types of transmissions (harmonic, tendon, traction, and gear systems). The various transmissions are evaluated, and key performance parameters are addressed in detail. Included in the assessment are a Shuttle RMS joint and an MSFC drive of the Prototype Manipulator Arm. Compound joints are also investigated. Space imposes a set of requirements for designing a high-performance drive assembly: its inaccessibility and cryogenic conditions warrant special considerations. Some guidelines concerning these conditions are presented. The goal is to gain a better understanding of designing a space actuator.
V-Man Generation for 3-D Real Time Animation. Chapter 5
NASA Technical Reports Server (NTRS)
Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang
2007-01-01
The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real time with a new generation of 3D virtual characters: the V-Men. It combines several innovative algorithms from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon their unique set of skills established during character creation. The key to the system is the automated creation of realistic V-Men without requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate firstly animatable body meshes, secondly 3D garments and finally skinned body meshes.
In vivo correlation mapping microscopy
NASA Astrophysics Data System (ADS)
McGrath, James; Alexandrov, Sergey; Owens, Peter; Subhash, Hrebesh; Leahy, Martin
2016-04-01
To facilitate regular assessment of the microcirculation in vivo, noninvasive imaging techniques such as nailfold capillaroscopy are required in clinics. Recently, a correlation mapping technique has been applied to optical coherence tomography (OCT), which extends the capabilities of OCT to microcirculation morphology imaging. This technique, known as correlation mapping optical coherence tomography, has been shown to extract parameters, such as capillary density and vessel diameter, and key clinical markers associated with early changes in microvascular diseases. However, OCT has limited spatial resolution in both the transverse and depth directions. Here, we extend this correlation mapping technique to other microscopy modalities, including confocal microscopy, and take advantage of the higher spatial resolution offered by these modalities. The technique is achieved as a processing step on microscopy images and does not require any modification to the microscope hardware. Results are presented which show that this correlation mapping microscopy technique can extend the capabilities of conventional microscopy to enable mapping of vascular networks in vivo with high spatial resolution in both the transverse and depth directions.
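The core of a correlation-mapping step can be sketched as a windowed Pearson correlation between successive frames: static tissue stays highly correlated while moving scatterers (e.g. flowing blood) decorrelate, so low-correlation pixels trace the vasculature. This is a generic illustration of the idea, not the authors' exact pipeline, and the window size is an arbitrary choice here.

```python
# Illustrative sketch of correlation mapping (not the authors' pipeline):
# a Pearson correlation coefficient between two successive frames over a
# small sliding window. Static regions correlate near 1; moving scatterers
# decorrelate, so low values flag flow.
def window_corr(f1, f2, r, c, k=3):
    """Pearson correlation of the k-by-k windows of f1 and f2 at (r, c)."""
    a = [f1[i][j] for i in range(r, r + k) for j in range(c, c + k)]
    b = [f2[i][j] for i in range(r, r + k) for j in range(c, c + k)]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    if va == 0 or vb == 0:
        return 0.0  # flat window: correlation undefined, report no correlation
    return cov / (va * vb) ** 0.5

def correlation_map(f1, f2, k=3):
    """Map of windowed correlations over all valid window positions."""
    rows, cols = len(f1), len(f1[0])
    return [[window_corr(f1, f2, r, c, k)
             for c in range(cols - k + 1)] for r in range(rows - k + 1)]
```

Thresholding the resulting map then segments the dynamic (vascular) regions from static tissue, which is the step that yields microcirculation morphology.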
NASA Technical Reports Server (NTRS)
Howard, David; Perry, Jay; Sargusingh, Miriam; Toomarian, Nikzad
2016-01-01
NASA's technology development roadmaps provide guidance to focus technological development on areas that enable crewed exploration missions beyond low-Earth orbit. Specifically, the technology area roadmap on human health, life support and habitation systems describes the need for life support system (LSS) technologies that can improve reliability and in-situ maintainability within a minimally-sized package while enabling a high degree of mission autonomy. To address the needs outlined by the guiding technology area roadmap, NASA's Advanced Exploration Systems (AES) Program has commissioned the Life Support Systems (LSS) Project to lead technology development in the areas of water recovery and management, atmosphere revitalization, and environmental monitoring. A notional exploration LSS architecture derived from the International Space Station (ISS) has been developed and serves as the developmental basis for these efforts. Functional requirements and key performance parameters that guide the exploration LSS technology development efforts are presented and discussed. Areas where LSS flight operations aboard the ISS afford lessons learned that are relevant to exploration missions are highlighted.
Three-Dimensional Imaging of the Mouse Organ of Corti Cytoarchitecture for Mechanical Modeling
NASA Astrophysics Data System (ADS)
Puria, Sunil; Hartman, Byron; Kim, Jichul; Oghalai, John S.; Ricci, Anthony J.; Liberman, M. Charles
2011-11-01
Cochlear models typically use continuous anatomical descriptions and homogenized parameters based on two-dimensional images to describe the organ of Corti. To produce refined models based more closely on the actual cochlear cytoarchitecture, three-dimensional morphometric parameters of key mechanical structures are required. Towards this goal, we developed and compared three different imaging methods: (1) a fixed cochlear whole-mount preparation using the fluorescent dye Cellmask®, a molecule taken up by cell membranes that clearly delineates Deiters' cells, outer hair cells, and the phalangeal process, imaged using confocal microscopy; (2) an in situ fixed preparation with hair cells labeled using anti-prestin and supporting structures labeled using phalloidin, imaged using two-photon microscopy; and (3) a membrane-tomato (mT) mouse with fluorescent proteins expressed in all cell membranes, which enables two-photon imaging of an in situ live preparation with excellent visualization of the organ of Corti. Morphometric parameters, including lengths, diameters, and angles, were extracted from 3D cellular surface reconstructions of the resulting images. Preliminary results indicate that the length of the phalangeal processes decreases from the first (innermost) to the third (outermost) row of outer hair cells, and that their length also likely varies from base to apex and across species.
Horobin, R W; Stockert, J C; Rashid-Doubell, F
2015-05-01
We discuss a variety of biological targets including generic biomembranes and the membranes of the endoplasmic reticulum, endosomes/lysosomes, Golgi body, mitochondria (outer and inner membranes) and the plasma membrane of usual fluidity. For each target, we discuss the access of probes to the target membrane, probe uptake into the membrane and the mechanism of selectivity of the probe uptake. A statement of the QSAR decision rule that describes the required physicochemical features of probes that enable selective staining also is provided, followed by comments on exceptions and limits. Examples of probes typically used to demonstrate each target structure are noted and decision rule tabulations are provided for probes that localize in particular targets; these tabulations show distribution of probes in the conceptual space defined by the relevant structure parameters ("parameter space"). Some general implications and limitations of the QSAR models for probe targeting are discussed including the roles of certain cell and protocol factors that play significant roles in lipid staining. A case example illustrates the predictive ability of QSAR models. Key limiting values of the head group hydrophilicity parameter associated with membrane-probe interactions are discussed in an appendix.
Potential impacts of climate change on water quality in a shallow reservoir in China.
Zhang, Chen; Lai, Shiyu; Gao, Xueping; Xu, Liping
2015-10-01
To study the potential effects of climate change on water quality in a shallow reservoir in China, the field data analysis method is applied to data collected over a given monitoring period. Nine water quality parameters (water temperature, ammonia nitrogen, nitrate nitrogen, nitrite nitrogen, total nitrogen, total phosphorus, chemical oxygen demand, biochemical oxygen demand and dissolved oxygen) and three climate indicators for 20 years (1992-2011) are considered. Significant annual trends are found for certain water quality and climate parameters. Five parameters exhibit significant seasonality differences in the monthly means between the two decades (1992-2001 and 2002-2011) of the monitoring period. Non-parametric regression is performed to explore potential key climate drivers of water quality in the reservoir. The results indicate that seasonal changes in temperature and rainfall may have positive impacts on water quality. However, an extremely cold spring and high wind speed are likely to affect the self-stabilising equilibrium states of the reservoir, which requires attention in the future. The results also suggest that land use changes have an important impact on the nitrogen load. This study provides useful information regarding the potential effects of climate change on water quality in developing countries.
NASA Technical Reports Server (NTRS)
Sojka, Jan J.
2003-01-01
The grant supported research addressing the question of how the NASA Solar Terrestrial Probes (STP) Mission called Geospace Electrodynamics Connections (GEC) will resolve space-time structures as well as collect sufficient information to solve the coupled thermosphere-ionosphere-magnetosphere dynamics and electrodynamics. The approach adopted was to develop a model of the ionosphere-thermosphere (I-T), with high resolution in both space and time, over altitudes relevant to GEC, especially the deep-dipping phase. This I-T model was driven by a high-resolution model of magnetospheric-ionospheric (M-I) coupling electrodynamics. Such a model contains all the key parameters to be measured by GEC instrumentation, which in turn are the required parameters to resolve present-day problems in describing the energy and momentum coupling between the ionosphere-magnetosphere and ionosphere-thermosphere. This model database has been successfully created for one geophysical condition: winter, solar maximum with disturbed geophysical conditions, specifically a substorm. Using this data set, visualizations (movies) were created to contrast the dynamics of the different measurable parameters, specifically the rapidly varying magnetospheric electric field and auroral electron precipitation versus the slowly varying ionospheric F-region electron density and the rapidly responding E-region density.
Measuring viscosity with a resonant magnetic perturbation in the MST RFP
NASA Astrophysics Data System (ADS)
Fridström, Richard; Munaretto, Stefano; Frassinetti, Lorenzo; Chapman, Brett; Brunsell, Per; Sarff, John; MST Team
2016-10-01
Application of an m = 1 resonant magnetic perturbation (RMP) causes braking and locking of naturally rotating m = 1 tearing modes (TMs) in the MST RFP. The experimental TM dynamics are replicated by a theoretical model including the interaction between the RMP and multiple TMs [Fridström PoP 23, 062504 (2016)]. The viscosity is the only free parameter in the model, and it is chosen such that model TM velocity evolution matches that of the experiment. The model does not depend on the means by which the natural rotation is generated. The chosen value of the viscosity, about 40 m2/s, is consistent with separate measurements in MST using a biased probe to temporarily spin up the plasma. This viscosity is about 100 times larger than the classical prediction, likely due to magnetic stochasticity in the core of these plasmas. Viscosity is a key parameter in visco-resistive MHD codes like NIMROD. The validation of these codes requires measurement of the viscosity over a broad parameter range, which will now be possible with the RMP technique that, unlike the biased probe, is not limited to low-energy-density plasmas. Estimation with the RMP technique of the viscosity in several MST discharges suggests that the viscosity decreases as the electron beta increases. Work supported by USDOE.
Mohajer, Ardavan; Tremier, Anne; Barrington, Suzelle; Teglia, Cecile
2010-01-01
Composting is a feasible biological treatment for the recycling of wastewater sludge as a soil amendment. The process can be optimized by selecting an initial compost recipe with physical properties that enhance microbial activity. The present study measured the microbial O(2) uptake rate (OUR) in 16 sludge and wood residue mixtures to estimate the kinetics parameters of maximum growth rate mu(m) and rate of organic matter hydrolysis K(h), as well as the initial biodegradable organic matter fractions present. The starting mixtures consisted of a wide range of moisture content (MC), waste to bulking agent (BA) ratio (W/BA ratio) and BA particle size, which were placed in a laboratory respirometry apparatus to measure their OUR over 4 weeks. A microbial model based on the activated sludge process was used to calculate the kinetic parameters and was found to adequately reproduce the OUR curves over time, except for the lag phase and peak OUR, which were, respectively, not represented and generally over-estimated. The maximum growth rate mu(m) was found to have a quadratic relationship with MC and a negative association with BA particle size. As a result, increasing MC up to 50% and using a smaller BA particle size of 8-12 mm was seen to maximize mu(m). The rate of hydrolysis K(h) was found to have a linear association with both MC and BA particle size. The model also estimated the initial readily biodegradable organic matter fraction, MB(0), and the slower biodegradable matter requiring hydrolysis, MH(0). The sum of MB(0) and MH(0) was associated with MC, W/BA ratio and the interaction between these two parameters, suggesting that O(2) availability was a key factor in determining the value of these two fractions. The study reinforced the idea that optimization of the physical characteristics of a compost mixture requires a holistic approach. 2010 Elsevier Ltd. All rights reserved.
On Strong Positive Frequency Dependencies of Quality Factors in Local-Earthquake Seismic Studies
NASA Astrophysics Data System (ADS)
Morozov, Igor B.; Jhajhria, Atul; Deng, Wubing
2018-03-01
Many observations of seismic waves from local earthquakes are interpreted in terms of the frequency-dependent quality factor Q(f) = Q0 f^η, where η is often close to or exceeds one. However, such steep positive frequency dependencies of Q require careful analysis with regard to their physical consistency. In particular, the case η = 1 corresponds to frequency-independent (elastic) amplitude decay with time and consequently requires no Q-type attenuation mechanism. For η > 1, several problems with the physical meaning of such Q-factors occur. First, contrary to the key premise of seismic attenuation, high-frequency parts of the wavefield are enhanced with increasing propagation times relative to the low-frequency ones. Second, such attenuation cannot be implemented by mechanical models of wave-propagating media. Third, with η > 1, the velocity dispersion associated with such Q(f) occurs over an unrealistically short frequency range and has an unexpected oscillatory shape. Cases η = 1 and η > 1 are usually attributed to scattering; however, this scattering must exhibit fortuitous tuning into the observation frequency band, which appears unlikely. The reason for the above problems is that the inferred Q values are affected by the conventional single-station measurement procedure. Both parameters Q0 and η are apparent, i.e., dependent on the selected parameterization and inversion method, and they should not be directly attributed to the subsurface. For η ≈ 1, parameter Q0 actually describes the frequency-independent amplitude decay in excess of some assumed geometric spreading t^(-α), where α is usually taken equal to one. The case η > 1 is not allowed physically and could serve as an indicator of problematic interpretations. Although the case 0 < η < 1 is possible, its parameters Q0 and η may also be biased by the measurement procedure. 
To avoid such difficulties of Q-based approaches, we recommend measuring and interpreting the amplitude-decay rates (such as parameter α) directly.
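The special status of η = 1 noted in this abstract is easy to check numerically: with Q(f) = Q0 f^η, the spectral amplitude decay exp(-π f t / Q(f)) loses all frequency dependence exactly at η = 1, and for η > 1 high frequencies decay less than low ones. A minimal sketch (the values of Q0, t, and the frequencies are hypothetical):

```python
import numpy as np

def amplitude_decay(f, t, Q0, eta):
    """Spectral amplitude decay exp(-pi * f * t / Q(f)) with Q(f) = Q0 * f**eta."""
    return np.exp(-np.pi * f * t / (Q0 * f**eta))

f = np.array([1.0, 5.0, 20.0])  # Hz
t, Q0 = 10.0, 100.0             # hypothetical propagation time and Q0

# eta = 1: decay equals exp(-pi*t/Q0) at every frequency (elastic-like, no true Q)
d1 = amplitude_decay(f, t, Q0, eta=1.0)
# eta > 1: high frequencies decay *less* than low ones, contradicting attenuation
d2 = amplitude_decay(f, t, Q0, eta=1.5)
```

The first array is constant across frequency, while the second grows with frequency, illustrating the two pathologies the authors describe.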
Harnessing Orbital Debris to Sense the Space Environment
NASA Astrophysics Data System (ADS)
Mutschler, S.; Axelrad, P.; Matsuo, T.
A key requirement for accurate space situational awareness (SSA) is knowledge of the non-conservative forces that act on space objects. These effects vary temporally and spatially, driven by the dynamical behavior of space weather. Existing SSA algorithms adjust space weather models based on observations of calibration satellites. However, lack of sufficient data and mismodeling of non-conservative forces cause inaccuracies in space object motion prediction. The uncontrolled nature of debris makes it particularly sensitive to the variations in space weather. Our research takes advantage of this behavior by inverting observations of debris objects to infer the space environment parameters causing their motion. In addition, this research will produce more accurate predictions of the motion of debris objects. The hypothesis of this research is that it is possible to utilize a "cluster" of debris objects, objects within relatively close proximity of each other, to sense their local environment. We focus on deriving parameters of an atmospheric density model to more precisely predict the drag force on LEO objects. An Ensemble Kalman Filter (EnKF) is used for assimilation; the prior ensemble is transformed into the posterior ensemble during the measurement update in a manner that does not require inversion of large matrices. A prior ensemble is utilized to empirically determine the nonlinear relationship between measurements and density parameters. The filter estimates an extended state that includes position and velocity of the debris object, and atmospheric density parameters. The density is parameterized as a grid of values, distributed by latitude and local sidereal time over a spherical shell encompassing Earth. This research focuses on LEO object motion, but it can also be extended to additional orbital regimes for observation and refinement of magnetic field and solar radiation models. 
An observability analysis of the proposed approach is presented in terms of the measurement cadence necessary to estimate the local space environment.
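The measurement update described above can be sketched with a stochastic (perturbed-observation) EnKF, in which the only matrix ever inverted is of observation-space size. The two-state toy problem, linear observation operator, and noise levels below are illustrative assumptions, not the study's actual density-grid state:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """Stochastic EnKF measurement update.

    X : (n, N) prior ensemble (n state dims, N members)
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation error covariance
    Only an m x m system is inverted, never an n x n one.
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)     # observation-space anomalies
    S = HA @ HA.T / (N - 1) + R                  # m x m innovation covariance
    K = (A @ HA.T / (N - 1)) @ np.linalg.inv(S)  # Kalman gain from the ensemble
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - HX)                      # posterior ensemble

# toy: estimate a 2-state vector (e.g. position + density parameter)
# from a noisy observation of the first component only
X = rng.normal(0.0, 1.0, size=(2, 500))          # prior ensemble
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
Xa = enkf_update(X, np.array([0.8]), H, R)
```

The posterior ensemble mean of the observed component is pulled toward the observation and its spread shrinks, which is the mechanism by which debris observations would constrain density parameters.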
Progress in Validation of Wind-US for Ramjet/Scramjet Combustion
NASA Technical Reports Server (NTRS)
Engblom, William A.; Frate, Franco C.; Nelson, Chris C.
2005-01-01
Validation of the Wind-US flow solver against two sets of experimental data involving high-speed combustion is attempted. First, the well-known Burrows-Kurkov supersonic hydrogen-air combustion test case is simulated, and the sensitivity of ignition location and combustion performance to key parameters is explored. Second, a numerical model is developed for simulation of an X-43B candidate, full-scale, JP-7-fueled, internal flowpath operating in ramjet mode. Numerical results using an ethylene-air chemical kinetics model are directly compared against previously existing pressure-distribution data along the entire flowpath, obtained in direct-connect testing conducted at NASA Langley Research Center. Comparisons to derived quantities such as burn efficiency and thermal throat location are also made. Reasonable to excellent agreement with experimental data is demonstrated for key parameters in both simulation efforts. Additional Wind-US features needed to improve simulation efforts are described herein, including maintaining stagnation conditions at inflow boundaries for multi-species flow. An open issue regarding the sensitivity of isolator unstart to key model parameters is briefly discussed.
NASA Astrophysics Data System (ADS)
Huang, Yi-Chih; Lin, Yuh-Lang
2018-06-01
Essential parameters for making a looping track when a westward-moving tropical cyclone (TC) approaches a mesoscale mountain are investigated with a series of systematic, idealized numerical experiments by examining several key nondimensional control parameters, such as U/Nh, V_max/Nh, U/fL_x, V_max/fR, h/L_x, and R/L_y. Here U is the uniform zonal wind velocity, N the Brunt-Vaisala frequency, h the mountain height, f the Coriolis parameter, V_max the maximum tangential velocity at a radius R from the cyclone center, and L_x (L_y) the halfwidth of the mountain in the east-west (north-south) direction. It is found that looping tracks (a) tend to occur under small U/Nh and U/fL_x, moderate h/L_x, and large V_max/Nh, which correspond to slow movement (leading to subgeostrophic flow associated with strong orographic blocking), moderate steepness, and strong tangential wind associated with the TC vortex; (b) are often accompanied by an area of perturbation high pressure to the northeast of the mountain, which lasts for only a short period; and (c) do not require the existence of a northerly jet. The nondimensional control parameters are consolidated into a TC looping index (LI), U^2 R^2 / (V_max^2 h L_y), which is tested against several historical looping and non-looping typhoons approaching Taiwan's Central Mountain Range (CMR) from the east or southeast. It is found that LI < 0.0125 may serve as a criterion for a looping track to occur.
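Evaluating the consolidated looping index is a one-line computation. The storm and terrain numbers below are hypothetical, chosen only to illustrate the LI < 0.0125 criterion:

```python
def looping_index(U, R, V_max, h, L_y):
    """LI = U^2 R^2 / (V_max^2 h L_y), the consolidated TC looping index."""
    return (U**2 * R**2) / (V_max**2 * h * L_y)

# hypothetical values: weak steering flow, strong vortex, CMR-scale terrain
U = 1.0        # m/s uniform zonal steering wind
R = 100e3      # m, radius of maximum tangential wind
V_max = 50.0   # m/s maximum tangential wind
h = 3000.0     # m mountain height
L_y = 200e3    # m north-south halfwidth of the mountain
LI = looping_index(U, R, V_max, h, L_y)
looping_expected = LI < 0.0125   # criterion proposed in the study
```

With these numbers LI is about 0.0067, below the proposed threshold, so a looping track would be expected; doubling the steering wind U quadruples LI and pushes it past the threshold.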
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
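The q-method step can be sketched as follows: Davenport's 4x4 matrix K is built from the weighted vector observations, and the optimal quaternion is the eigenvector of its largest eigenvalue, so no a priori attitude is needed. A minimal noise-free sketch (the three-observation test case is illustrative, not from the paper):

```python
import numpy as np

def q_method(b, r, w):
    """Davenport q-method: optimal attitude for Wahba's loss function.

    b, r : (N, 3) unit vectors in body and reference frames; w : (N,) weights.
    Returns the attitude matrix A such that b ~ A r.
    """
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))
    S = B + B.T
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    sigma = np.trace(B)
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)       # eigenvalues in ascending order
    q = vecs[:, -1]                      # [qx, qy, qz, qw], largest eigenvalue
    qv, qw = q[:3], q[3]
    cross = np.array([[0, -qv[2], qv[1]],
                      [qv[2], 0, -qv[0]],
                      [-qv[1], qv[0], 0]])
    # attitude matrix from the optimal quaternion
    return (qw**2 - qv @ qv) * np.eye(3) + 2 * np.outer(qv, qv) - 2 * qw * cross

# demo: recover a known attitude from three noise-free vector observations
theta = 0.3
A_true = np.array([[np.cos(theta), np.sin(theta), 0.0],
                   [-np.sin(theta), np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
r = np.eye(3)                 # reference-frame unit vectors
b = (A_true @ r.T).T          # corresponding body-frame observations
A_est = q_method(b, r, np.ones(3))
```

With noise-free data the estimate matches the true attitude to machine precision; estimating additional parameters such as biases would wrap this exact attitude solution inside the iterative loop the paper describes.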
NASA Astrophysics Data System (ADS)
Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun
2018-03-01
Ant Colony Optimization (ACO) is one of the most widely used artificial intelligence algorithms at present. This study introduced the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP) and designed a vehicle routing optimization model based on ACO; a vehicle routing optimization simulation system was then developed in C++, and sensitivity analyses, estimations and improvements of the three key parameters of ACO were carried out. The results indicated that the ACO algorithm designed in this paper can efficiently solve rational planning and optimization of VRP, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and shows good robustness.
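As a rough illustration of the algorithm class (in Python rather than the authors' C++, and for a plain TSP routing core rather than a full VRP with capacities and depots), the classic ACO control parameters alpha, beta, and rho, the kind of key parameters such sensitivity studies examine, appear explicitly below:

```python
import numpy as np

rng = np.random.default_rng(1)

def aco_tsp(D, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5):
    """Minimal ant colony optimization over a distance matrix D.

    alpha: pheromone weight; beta: heuristic (1/distance) weight;
    rho: pheromone evaporation rate. A generic sketch, not the
    authors' VRP model.
    """
    n = len(D)
    eta = 1.0 / (D + np.eye(n))          # heuristic desirability
    tau = np.ones((n, n))                # pheromone trails
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                p = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                nxt = rng.choice(cand, p=p / p.sum())
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
        tau *= (1.0 - rho)               # evaporation
        for tour in tours:
            length = sum(D[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):           # deposit proportional to tour quality
                i, j = tour[k], tour[(k + 1) % n]
                tau[i, j] += 1.0 / length
                tau[j, i] += 1.0 / length
    return best_tour, best_len

# toy instance: five cities on a circle; the optimal tour follows the ring
pts = np.array([[np.cos(2 * np.pi * k / 5), np.sin(2 * np.pi * k / 5)]
                for k in range(5)])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour, length = aco_tsp(D)
```

Raising rho makes old pheromone fade faster (more exploration), while a large alpha relative to beta encourages the premature convergence to early trails that improved variants try to avoid.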
INMARSAT's personal communicator system
NASA Technical Reports Server (NTRS)
Hart, Nick; Haugli, Hans-C.; Poskett, Peter; Smith, K.
1993-01-01
Inmarsat has been providing near global mobile satellite communications since 1982 and Inmarsat terminals are currently being used in more than 130 countries. The terminals have been reduced in size and cost over the years and new technology has enabled the recent introduction of briefcase sized personal telephony terminals (Inmarsat-M). This trend continues and we are likely to see Inmarsat handheld terminals by the end of the decade. These terminals are called Inmarsat-P and this paper focuses on the various elements required to support a high quality service to handheld terminals. The main system elements are: the handheld terminals; the space segment with the associated orbits; and the gateways to terrestrial networks. It is both likely and desirable that personal handheld satellite communications will be offered by more than one system provider and this competition will ensure strong emphasis on service quality and cost of ownership. The handheld terminals also have to be attractive to a large number of potential users, and this means that the terminals must be small enough to fit in a pocket. Battery lifetime is another important consideration, and this coupled with radiation safety requirements limits the maximum radiated EIRP. The terminal G/T is mainly constrained by the gain of the omnidirectional antenna and the noise figure of the RF front end (including input losses). Inmarsat has examined, with the support of industry, a number of Geosynchronous (GSO), Medium Earth Orbit (MEO) and Low Earth Orbit (LEO) satellite options for the provision of a handheld mobile satellite service. This paper describes the key satellite and orbit parameters and tradeoffs which affect the overall quality of service and the space segment costing. The paper also stresses not only the importance of using and sharing the available mobile frequency band allocations efficiently, but also the key considerations affecting the choice of feeder link bands. 
The design of the gateways and the terrestrial network is critical to the overall viability of the service, and this paper also examines the key technical parameters associated with the Land Earth Stations (LES), which act as gateways into the Public Switched Telephone Network (PSTN). These not only include the design tradeoffs associated with the LES, but also the different terrestrial network interface options. The paper concludes with a brief description of the satellite propagation conditions associated with the use of handheld terminals. It describes how the handheld results in a number of propagation impairments which are not common to the previous measurements associated with vehicle mounted antennas. These measurements indicate that there is a complex tradeoff between link margin and the elevation angle to the satellite which has a significant impact on the space segment requirements and costing.
INMARSAT's personal communicator system
NASA Astrophysics Data System (ADS)
Hart, Nick; Haugli, Hans-C.; Poskett, Peter; Smith, K.
Inmarsat has been providing near global mobile satellite communications since 1982 and Inmarsat terminals are currently being used in more than 130 countries. The terminals have been reduced in size and cost over the years and new technology has enabled the recent introduction of briefcase sized personal telephony terminals (Inmarsat-M). This trend continues and we are likely to see Inmarsat handheld terminals by the end of the decade. These terminals are called Inmarsat-P and this paper focuses on the various elements required to support a high quality service to handheld terminals. The main system elements are: the handheld terminals; the space segment with the associated orbits; and the gateways to terrestrial networks. It is both likely and desirable that personal handheld satellite communications will be offered by more than one system provider and this competition will ensure strong emphasis on service quality and cost of ownership. The handheld terminals also have to be attractive to a large number of potential users, and this means that the terminals must be small enough to fit in a pocket. Battery lifetime is another important consideration, and this coupled with radiation safety requirements limits the maximum radiated EIRP. The terminal G/T is mainly constrained by the gain of the omnidirectional antenna and the noise figure of the RF front end (including input losses). Inmarsat has examined, with the support of industry, a number of Geosynchronous (GSO), Medium Earth Orbit (MEO) and Low Earth Orbit (LEO) satellite options for the provision of a handheld mobile satellite service. This paper describes the key satellite and orbit parameters and tradeoffs which affect the overall quality of service and the space segment costing. The paper also stresses not only the importance of using and sharing the available mobile frequency band allocations efficiently, but also the key considerations affecting the choice of feeder link bands. 
The design of the gateways and the terrestrial network is critical to the overall viability of the service, and this paper also examines the key technical parameters associated with the Land Earth Stations (LES), which act as gateways into the Public Switched Telephone Network (PSTN). These not only include the design tradeoffs associated with the LES, but also the different terrestrial network interface options. The paper concludes with a brief description of the satellite propagation conditions associated with the use of handheld terminals. It describes how the handheld results in a number of propagation impairments which are not common to the previous measurements associated with vehicle mounted antennas. These measurements indicate that there is a complex tradeoff between link margin and the elevation angle to the satellite which has a significant impact on the space segment requirements and costing.
NASA Astrophysics Data System (ADS)
Lin, Zhuosheng; Yu, Simin; Lü, Jinhu
2017-06-01
In this paper, a novel approach for constructing one-way hash function based on 8D hyperchaotic map is presented. First, two nominal matrices both with constant and variable parameters are adopted for designing 8D discrete-time hyperchaotic systems, respectively. Then each input plaintext message block is transformed into 8 × 8 matrix following the order of left to right and top to bottom, which is used as a control matrix for the switch of the nominal matrix elements both with the constant parameters and with the variable parameters. Through this switching control, a new nominal matrix mixed with the constant and variable parameters is obtained for the 8D hyperchaotic map. Finally, the hash function is constructed with the multiple low 8-bit hyperchaotic system iterative outputs after being rounded down, and its secure analysis results are also given, validating the feasibility and reliability of the proposed approach. Compared with the existing schemes, the main feature of the proposed method is that it has a large number of key parameters with avalanche effect, resulting in the difficulty for estimating or predicting key parameters via various attacks.
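A drastically simplified 1D analogue conveys the idea (the paper's scheme uses an 8D hyperchaotic map with switched constant/variable parameters; the logistic map, perturbation rule, and 128-bit digest here are toy assumptions): message bytes perturb a chaotic trajectory, digest bytes are taken from the rounded-down low bits of later iterates, and a one-bit change in the message should flip roughly half the digest bits (avalanche):

```python
def chaotic_hash(message: bytes, rounds: int = 64) -> bytes:
    """Toy chaotic-map hash: NOT the paper's 8D scheme, only the principle."""
    x = 0.5
    for byte in message:
        x = 3.99 * x * (1.0 - x)               # chaotic logistic-map step
        x = (x + byte / 255.0) % 1.0 or 0.37   # message-keyed perturbation;
                                               # 0.37 avoids the fixed point 0
    digest = bytearray()
    for _ in range(16):                        # 16 bytes = 128-bit digest
        for _ in range(rounds):
            x = 3.99 * x * (1.0 - x)
        digest.append(int(x * 256) & 0xFF)     # floor of the scaled iterate
    return bytes(digest)

d1 = chaotic_hash(b"key parameter message")
d2 = chaotic_hash(b"key parameter messagf")    # last character changed
diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))
```

Because the map is chaotic, the tiny perturbation difference between the two messages is amplified over the iterations, so `diff_bits` is typically near half of the 128 digest bits; a real construction would of course need the careful security analysis the paper provides.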
Constant False Alarm Rate (CFAR) Autotrend Evaluation Report
2011-12-01
represent a level of uncertainty in the performance analysis. The performance analysis produced the following Key Performance Indicators ( KPIs ) as...Identity KPI Key Performance Indicator MooN M-out-of-N MSPU Modernized Signal Processor Unit NFF No Fault Found PAT Parameter Allocation Table PD
Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-07-15
In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ≈ 10^-7).
Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol
NASA Astrophysics Data System (ADS)
Molotkov, S. N.
2008-07-01
In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ≈ 10^-7).
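A back-of-the-envelope version of such a channel-length bound (a simplified single-photon model, not the paper's analysis) reproduces the ~300 km scale: signal clicks fall exponentially with fiber loss while dark counts do not, and key distribution fails once dark counts push the quantum bit error rate (QBER) past the ~11% BB84 security threshold:

```python
import numpy as np

def qber(L_km, eta=0.20, p_dark=1e-7, loss_db_per_km=0.2):
    """Approximate QBER of single-photon BB84 over lossy fiber.

    Assumed model: signal-click probability eta * 10**(-loss*L/10);
    dark counts produce an error half the time.
    """
    p_signal = eta * 10.0 ** (-loss_db_per_km * L_km / 10.0)
    return 0.5 * p_dark / (p_signal + p_dark)

# longest channel keeping the QBER below the ~11% BB84 security threshold
lengths = np.arange(0, 401)
mask = np.array([qber(L) < 0.11 for L in lengths])
max_secure_km = int(lengths[mask].max())
```

With η = 20% and p_dark = 10^-7 this crude estimate gives a maximum secure length just under 290 km, consistent with the "up to 300 km" figure quoted in the abstract.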
A framework for scalable parameter estimation of gene circuit models using structural information.
Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin
2013-07-01
Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied to modeling of gene circuits, our results suggest that more tailored approaches exploiting domain-specific information may be key to reverse engineering of complex biological systems. Availability: http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
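The structural decomposition idea can be illustrated on a hypothetical two-gene cascade (not a model from the paper): because the upstream gene's rate equation does not depend on the downstream one, it can be integrated on its own, and its stored trajectory then substituted into the second equation, turning one coupled system into two cheap one-dimensional integrations:

```python
import numpy as np

def integrate_decoupled(beta1, gamma1, beta2, gamma2, t_end=10.0, dt=0.01):
    """Integrate a hypothetical two-gene cascade one equation at a time.

    dx1/dt = beta1 - gamma1*x1          (production and degradation)
    dx2/dt = beta2*x1 - gamma2*x2       (linear activation by x1)
    x1 is solved first; its trajectory is then fed into x2's equation.
    """
    t = np.arange(0.0, t_end + dt, dt)
    x1 = np.zeros_like(t)
    for k in range(1, len(t)):                      # solve x1 alone (Euler)
        x1[k] = x1[k - 1] + dt * (beta1 - gamma1 * x1[k - 1])
    x2 = np.zeros_like(t)
    for k in range(1, len(t)):                      # then x2, using stored x1
        x2[k] = x2[k - 1] + dt * (beta2 * x1[k - 1] - gamma2 * x2[k - 1])
    return t, x1, x2

t, x1, x2 = integrate_decoupled(beta1=1.0, gamma1=0.5, beta2=2.0, gamma2=1.0)
```

Both trajectories approach their analytical steady states (x1 → beta1/gamma1 = 2, x2 → beta2*x1/gamma2 = 4); in a fitting loop, each candidate parameter set would be scored against data using such per-equation integrations, refined as the paper describes.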
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
Psychoacoustical evaluation of natural and urban sounds in soundscapes.
Yang, Ming; Kang, Jian
2013-07-01
Among various sounds in the environment, natural sounds, such as water sounds and birdsongs, have proven to be highly preferred by humans, but the reasons for these preferences have not been thoroughly researched. This paper explores differences between various natural and urban environmental sounds from the viewpoint of objective measures, especially psychoacoustical parameters. The sound samples used in this study include the recordings of single sound source categories of water, wind, birdsongs, and urban sounds including street music, mechanical sounds, and traffic noise. The samples are analyzed with a number of existing psychoacoustical parameter algorithmic models. Based on hierarchical cluster and principal components analyses of the calculated results, a series of differences has been shown among different sound types in terms of key psychoacoustical parameters. While different sound categories cannot be identified using any single acoustical and psychoacoustical parameter, identification can be made with a group of parameters, as analyzed with artificial neural networks and discriminant functions in this paper. For artificial neural networks, correlations between network predictions and targets using the average and standard deviation data of psychoacoustical parameters as inputs are above 0.95 for the three natural sound categories and above 0.90 for the urban sound category. For sound identification/classification, key parameters are fluctuation strength, loudness, and sharpness.
Launch Vehicle Propulsion Design with Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.
2005-01-01
The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and results are plotted to show impacts on engine mass and overall vehicle mass.
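The optimization loop can be sketched as a toy random search over the three propulsion parameters the abstract names. The objective function below is an invented bowl-shaped surrogate for engine mass, standing in for the paper's thermo-chemical code and regression models; the parameter bounds are assumptions, not the study's values.

```python
import random

random.seed(42)

# Assumed search ranges for the three design variables.
BOUNDS = {
    "chamber_pressure": (5.0, 25.0),   # MPa
    "area_ratio":       (20.0, 120.0),
    "of_ratio":         (4.5, 7.0),    # LOX/LH2 mixture ratio
}

def surrogate_mass(p):
    """Toy mass surrogate with a known minimum, for demonstration only."""
    return ((p["chamber_pressure"] - 15.0) ** 2
            + 0.001 * (p["area_ratio"] - 80.0) ** 2
            + 10.0 * (p["of_ratio"] - 6.0) ** 2)

def random_search(n_iter=5000):
    """Keep the best of n_iter uniformly sampled candidate designs."""
    best, best_val = None, float("inf")
    for _ in range(n_iter):
        cand = {k: random.uniform(*lohi) for k, lohi in BOUNDS.items()}
        val = surrogate_mass(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

best, best_val = random_search()
print({k: round(v, 2) for k, v in best.items()}, round(best_val, 3))
```

A genetic algorithm improves on this by recombining and mutating the best candidates between generations rather than sampling blindly, which matters once each objective evaluation (a thermo-chemical analysis) is expensive.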
Forward Bay Cover Separation Modeling and Testing for the Orion Multi-Purpose Crew Vehicle
NASA Technical Reports Server (NTRS)
Ali, Yasmin; Chuhta, Jesse D.; Hughes, Michael P.; Radke, Tara S.
2015-01-01
Spacecraft multi-body separation events during atmospheric descent require complex testing and analysis to validate the flight separation dynamics models used to verify no re-contact. The NASA Orion Multi-Purpose Crew Vehicle (MPCV) architecture includes a highly-integrated Forward Bay Cover (FBC) jettison assembly design that combines parachutes and piston thrusters to separate the FBC from the Crew Module (CM) and avoid re-contact. A multi-disciplinary team across numerous organizations examined key model parameters and risk areas to develop a robust but affordable test campaign in order to validate and verify the FBC separation event for Exploration Flight Test-1 (EFT-1). The FBC jettison simulation model is highly complex, consisting of dozens of parameters varied simultaneously, with numerous multi-parameter interactions (coupling and feedback) among the various model elements, and encompassing distinct near-field, mid-field, and far-field regimes. The test campaign was composed of component-level testing (for example gas-piston thrusters and parachute mortars), ground FBC jettison tests, and FBC jettison air-drop tests that were accomplished by a highly multi-disciplinary team. Three ground jettison tests isolated the testing of mechanisms and structures to anchor the simulation models excluding aerodynamic effects. Subsequently, two air-drop tests added aerodynamic and parachute elements, and served as integrated system demonstrations, which had been preliminarily explored during the Orion Pad Abort-1 (PA-1) flight test in May 2010. Both ground and drop tests provided extensive data to validate analytical models and to verify the FBC jettison event for EFT-1. Additional testing will be required to support human certification of this separation event, for which NASA and Lockheed Martin are applying knowledge from Apollo and EFT-1 testing and modeling to develop a robust human-rated FBC separation event.
Understanding the Yellowstone magmatic system using 3D geodynamic inverse models
NASA Astrophysics Data System (ADS)
Kaus, B. J. P.; Reuber, G. S.; Popov, A.; Baumann, T.
2017-12-01
The Yellowstone magmatic system is one of the largest magmatic systems on Earth. Recent seismic tomography suggests that two distinct magma chambers exist: a shallow, presumably felsic chamber and a deeper, much larger, partially molten chamber above the Moho. Why melt stalls at different depth levels above the Yellowstone plume, whereas dikes cross-cut the whole lithosphere in the nearby Snake River Plain, is unclear. This is partly caused by our incomplete understanding of lithospheric-scale melt ascent processes from the upper mantle to the shallow crust, which requires better constraints on the mechanics and material properties of the lithosphere. Here, we employ lithospheric-scale 2D and 3D geodynamic models adapted to Yellowstone to better understand magmatic processes in active arcs. The models have a number of (uncertain) input parameters, such as the temperature and viscosity structure of the lithosphere and the geometry and melt fraction of the magmatic system, while the melt content and rock densities are obtained by consistent thermodynamic modelling of whole-rock data of the Yellowstone stratigraphy. As all of these parameters affect the dynamics of the lithosphere, we use the simulations to derive testable model predictions, such as gravity anomalies, surface deformation rates, and lithospheric stresses, and compare them with observations. We incorporate these predictions within an inversion method and perform 3D geodynamic inverse models of the Yellowstone magmatic system. An adjoint-based method is used to derive the key model parameters and the factors that affect the stress field around the Yellowstone plume, the locations of enhanced diking, and melt accumulations. Results suggest that the plume and the magma chambers are connected with each other and that magma chamber overpressure is required to explain the surface displacement during phases of high activity above the Yellowstone magmatic system.
Xu, Chuang; Xu, Qiushi; Chen, Yuanyuan; Yang, Wei; Xia, Cheng; Yu, Hongjiang; Zhu, Kuilin; Shen, Taiyu; Zhang, Ziyang
2015-10-24
Negative energy balance (NEB) is a common pathological foundation of ketosis and fatty liver. The liver and adipose tissue are the major organs of lipid metabolism and take part in modulating lipid oxidative capacity and energy demands, a key metabolic pathway that regulates NEB development during the perinatal period. Fibroblast growth factor-21 (FGF-21) is a recently discovered protein hormone that plays an important and specific regulatory role in adipose lipid metabolism and hepatic gluconeogenesis in humans and mice. Our aim was to investigate the variation in, and relationship between, serum FGF-21 concentration and characteristic parameters related to negative energy balance in different energy metabolism states. In this research, five non-pregnant, non-lactating Holstein-Friesian dairy cows were randomly allocated into two groups. The interventions were a controlled-energy diet (30% of maintenance energy requirements) and a moderate-energy diet (120% of predicted energy requirements) that lasted for the duration of the experiment. We measured biochemical parameters and serum FGF-21, leptin, and insulin levels with commercial ELISA kits. The results showed that serum FGF-21 levels were significantly higher in both groups when treated with the controlled-energy diet, while FGF-21 levels in both groups were low when treated with the moderate-energy diet. FGF-21 levels exhibited a significant positive correlation with serum leptin levels, while an inverse relationship was found between FGF-21 and blood glucose and β-hydroxybutyric acid (BHBA) levels. An increase in FGF-21 levels after a controlled-energy diet may contribute to a positive metabolic effect, which could provide a new theoretical and practical basis for preventive strategies for dairy cows with NEB.
NASA Astrophysics Data System (ADS)
Krol, Q. E.; Loewe, H.
2016-12-01
Grain shape is known to influence the effective physical properties of snow and is therefore included in the international classification of seasonal snow. Accordingly, snowpack models account for phenomenological shape parameters (sphericity, dendricity) to capture shape variations. These parameters are, however, difficult to validate due to the lack of clear-cut definitions from the 3D microstructure and insufficient links to physical properties. While the definition of traditional shape parameters was tailored to the requirements of observers, a more objective definition should be tailored to the requirements of physical properties, by analyzing geometrical (shape) corrections in existing theoretical formulations directly. To this end we revisited the autocorrelation function (ACF) and the chord length distribution (CLD) of snow. Both functions capture size distributions of the microstructure, can be calculated from X-ray tomography, and are related to various physical properties. Both functions involve the optical equivalent diameter as the dominant quantity; however, the respective higher-order geometrical corrections differ. We have analyzed these corrections, namely interfacial curvatures for the ACF and the second moment for the CLD, using an existing data set of 165 tomography samples. To unify the notion of shape, we derived various statistical relations between the length scales. Our analysis bears three key practical implications. First, we derived a significantly improved relation between the exponential correlation length and the optical diameter by taking curvatures into account. This adds to the understanding of linking the "microwave grain size" and "optical grain size" of snow for remote sensing. Second, we retrieve the optical shape parameter (commonly referred to as B) from tomography images via the second moment of the CLD. Third, shape variations seen by observers do not necessarily correspond to shape variations probed by physical properties.
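The exponential correlation length the abstract relates to the optical diameter is typically extracted by fitting an exponential decay to the measured ACF. The sketch below does this on a synthetic correlation function; in the real analysis the input would be the ACF computed from X-ray tomography images, and the correlation-length value here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-point correlation function C(r) ~ exp(-r/xi) with a small
# amount of measurement noise; xi_true is an assumed value, not snow data.
xi_true = 120e-6                       # assumed correlation length, m
r = np.linspace(0, 600e-6, 60)         # lag distances, m
C = np.exp(-r / xi_true) + rng.normal(0.0, 0.01, r.size)

# Fit log C(r) with a straight line over the reliable range (C > 0.05),
# where the exponential form dominates the noise.
mask = C > 0.05
slope, _ = np.polyfit(r[mask], np.log(C[mask]), 1)
xi_fit = -1.0 / slope                  # exponential correlation length

print(f"fitted correlation length: {xi_fit * 1e6:.0f} um")
```

The paper's contribution is then the correction step: relating this fitted length scale to the optical equivalent diameter with curvature terms included, rather than through the uncorrected proportionality.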
Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model
NASA Astrophysics Data System (ADS)
Prakash, Shashi; Kumar, Nitish; Kumar, Subrata
2016-09-01
CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly-methyl-methacrylate). PMMA directly vaporizes when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and acceptable surface finish on microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. There are a few analytical models available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. There are a number of variants of transparent PMMA available in the market with different values of thermophysical properties. Therefore, for applying such analytical models, the values of these thermophysical properties are required to be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the values of the thermophysical properties of PMMA. The unavailability of exact values of these property parameters restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty with different power and scanning speed has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
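The Monte Carlo propagation step can be sketched as follows. The depth model used here is a generic energy-balance form (absorbed energy per unit scanned area divided by the volumetric enthalpy to vaporize PMMA), not necessarily the paper's analytical model, and all property values and their spreads are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo propagation of thermophysical-property uncertainty through
# d = eta * P / (w * v * rho * (c * dT + Lv)). Nominal values and standard
# deviations below are assumptions, not measured PMMA data.
N = 100_000
P   = 30.0                            # laser power, W (fixed setting)
v   = 0.1                             # scanning speed, m/s (fixed setting)
w   = 200e-6                          # beam width, m
eta = 0.8                             # absorbed fraction (assumed)
rho = rng.normal(1180.0, 30.0, N)     # density, kg/m^3
c   = rng.normal(1470.0, 70.0, N)     # specific heat, J/(kg K)
dT  = rng.normal(340.0, 15.0, N)      # rise to vaporization temperature, K
Lv  = rng.normal(1.0e6, 0.1e6, N)     # latent heat of vaporization, J/kg

depth = eta * P / (w * v * rho * (c * dT + Lv))   # metres, one per sample

mean_um = 1e6 * depth.mean()
std_um  = 1e6 * depth.std()
print(f"depth = {mean_um:.0f} um +/- {std_um:.0f} um (1 sigma)")
```

Repeating the sampling at several (P, v) settings reproduces the paper's propagation study, and holding all but one property fixed while sampling the remaining one gives a crude one-at-a-time sensitivity ranking.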
Metrology of deep trench etched memory structures using 3D scatterometry
NASA Astrophysics Data System (ADS)
Reinig, Peter; Dost, Rene; Moert, Manfred; Hingst, Thomas; Mantz, Ulrich; Moffitt, Jasen; Shakya, Sushil; Raymond, Christopher J.; Littau, Mike
2005-05-01
Scatterometry is receiving considerable attention as an emerging optical metrology in the silicon industry. One area of progress in deploying these powerful measurements in process control is performing measurements on real device structures, as opposed to limiting scatterometry measurements to periodic structures, such as line-space gratings, placed in the wafer scribe. In this work we will discuss applications of 3D scatterometry to the measurement of advanced trench memory devices. This is a challenging and complex scatterometry application that requires exceptionally high-performance computational abilities. In order to represent the physical device, the relatively tall structures require a high number of slices in the rigorous coupled wave analysis (RCWA) theoretical model. This is complicated further by the presence of an amorphous silicon hard mask on the surface, which strongly affects the reflectance and therefore needs to be modeled in detail. The overall structure comprises several layers, with the trenches presenting a complex bow-shaped sidewall that must be measured. Finally, the double periodicity in the structures demands significantly greater computational capabilities. Our results demonstrate that angular scatterometry is sensitive to the key parameters of interest. The influence of further model parameters and parameter cross-correlations has to be carefully taken into account. Profile results obtained by non-library optimization methods compare favorably with cross-section SEM images. Generating a model library suitable for process control, which is preferred for precision, presents numerical throughput challenges. Details will be discussed regarding library generation approaches and strategies for reducing the numerical overhead. Scatterometry and SEM results will be compared, leading to conclusions about the feasibility of this advanced application.
A QUICK KEY TO THE SUBFAMILIES AND GENERA OF ANTS OF THE SAVANNAH RIVER SITE, AIKEN, SC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, D
2006-10-04
This taxonomic key was devised to support development of a Rapid Bioassessment Protocol using ants at the Savannah River Site. The emphasis is on "rapid": the available keys contained a large number of genera not known to occur at the Savannah River Site, which made them unwieldy. Because those keys covered more genera than we would likely encounter, they required more couplets and often required examination of characters that are difficult to assess without higher magnifications (60X or higher), so more time was needed to process samples. In developing this set of keys I recognize that the character sets used may lead to some errors, but I believe that the error rate will be small and, for the purpose of rapid bioassessment, acceptable provided that overall sample sizes are adequate. Oliver and Beattie (1996a, 1996b) found that for rapid assessment of biodiversity the same results were obtained when identifications were done to morphospecies by people with minimal expertise as when the same data sets were identified by subject matter experts. Basset et al. (2004) concluded that it was not as important to correctly identify all species as it was to be sure that the study included as many functional groups as possible. If your study requires high levels of accuracy, it is highly recommended that when you key out a specimen and have any doubts concerning the identification, you refer to the keys in Bolton (1994) or to the other keys used to develop this area-specific taxonomic key.