Reason, emotion and decision-making: risk and reward computation with feeling.
Quartz, Steven R
2009-05-01
Many models of judgment and decision-making posit distinct cognitive and emotional contributions to decision-making under uncertainty. Cognitive processes typically involve exact computations according to a cost-benefit calculus, whereas emotional processes typically involve approximate, heuristic processes that deliver rapid evaluations without mental effort. However, it remains largely unknown which specific parameters of uncertain decisions the brain encodes, the extent to which these parameters correspond to various decision-making frameworks, and their correspondence to emotional and rational processes. Here, I review research suggesting that emotional processes encode in a precise quantitative manner the basic parameters of financial decision theory, indicating a reorientation of the emotional and cognitive contributions to risky choice.
Singh, Tarini; Laub, Ruth; Burgard, Jan Pablo; Frings, Christian
2018-05-01
Selective attention refers to the ability to selectively act upon relevant information at the expense of irrelevant information. Yet, in many experimental tasks, what happens to the representation of the irrelevant information is still debated. Typically, 2 approaches to distractor processing have been suggested, namely distractor inhibition and distractor-based retrieval. However, the two processes are typically hard to disentangle. For instance, in the negative priming literature (for a review, see Frings, Schneider, & Fox, 2015), this has been a continuous debate since the early 1980s. In the present study, we attempted to show that both processes exist, but that they reflect distractor processing at different levels of representation: distractor inhibition impacts stimulus representation, whereas distractor-based retrieval impacts mainly motor processes. We investigated both processes in a distractor-priming task, which enables an independent measurement of each. For our argument that the two processes impact different levels of distractor representation, we estimated the exponential parameter (τ) and Gaussian components (μ, σ) of the exponential-Gaussian reaction-time (RT) distribution, which have previously been used to independently test the effects of cognitive and motor processes (e.g., Moutsopoulou & Waszak, 2012). The distractor-based retrieval effect was evident for the Gaussian component, which is typically discussed as reflecting motor processes, but not for the exponential parameter, whereas the inhibition effect was evident for the exponential parameter, which is typically discussed as reflecting cognitive processes, but not for the Gaussian parameter.
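Where the abstract above relies on the exponential-Gaussian (ex-Gaussian) RT decomposition, a short sketch may help. The snippet below fits μ, σ, and τ to a simulated RT sample with scipy, whose exponnorm distribution parameterizes the ex-Gaussian with shape K = τ/σ; the data are invented, and in practice one would fit each priming condition separately and compare μ/σ (motor) against τ (cognitive) effects.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated RTs (ms): a Gaussian stage plus an exponential tail
rts = rng.normal(450, 40, size=500) + rng.exponential(120, size=500)

# scipy's exponnorm uses shape K = tau / sigma, loc = mu, scale = sigma
K, mu, sigma = stats.exponnorm.fit(rts)
tau = K * sigma
print(f"mu = {mu:.1f} ms, sigma = {sigma:.1f} ms, tau = {tau:.1f} ms")
```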
Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process
NASA Astrophysics Data System (ADS)
Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.
2016-12-01
Spatially distributed continuous-simulation hydrologic models have a large number of parameters available for adjustment during the calibration process. Traditional manual calibration of such modeling systems is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc. - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter-set solutions. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions, which evaluate the parameter decision space. As a result, Pareto solutions are identified that not only achieve high degrees of fitness but also maintain and utilize available expert knowledge, resulting in more realistic and consistent parameter sets. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) method within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. High degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
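For readers unfamiliar with the fitness metrics named above, here is a minimal sketch of Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE), plus a hypothetical expert-knowledge objective of the kind the abstract proposes adding to the multi-objective search; the penalty form is an illustrative stand-in, not the authors' formulation.

```python
import numpy as np

def nse(sim: np.ndarray, obs: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency (1 is a perfect fit)."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim: np.ndarray, obs: np.ndarray) -> float:
    """Kling-Gupta efficiency from correlation, variability, and bias ratios."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def expert_penalty(params: dict, priors: dict) -> float:
    """Hypothetical extra objective: distance of a parameter set from
    expert-preferred values, minimized alongside the fit metrics."""
    return sum(((params[k] - v) / v) ** 2 for k, v in priors.items())
```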
Flight Operations Analysis Tool
NASA Technical Reports Server (NTRS)
Easter, Robert; Herrell, Linda; Pomphrey, Richard; Chase, James; Wertz Chen, Julie; Smith, Jeffrey; Carter, Rebecca
2006-01-01
Flight Operations Analysis Tool (FLOAT) is a computer program that partly automates the process of assessing the benefits of planning spacecraft missions to incorporate various combinations of launch vehicles and payloads. Designed primarily for use by an experienced systems engineer, FLOAT makes it possible to perform a preliminary analysis of trade-offs and costs of a proposed mission in days, whereas previously, such an analysis typically lasted months. FLOAT surveys a variety of prior missions by querying data from authoritative NASA sources pertaining to 20 to 30 mission and interface parameters that define space missions. FLOAT provides automated, flexible means for comparing the parameters to determine compatibility or the lack thereof among payloads, spacecraft, and launch vehicles, and for displaying the results of such comparisons. Sparseness, typical of the data available for analysis, does not confound this software. FLOAT effects an iterative process that identifies modifications of parameters that could render compatible an otherwise incompatible mission set.
Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming
2015-01-01
High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench-scale bioreactors have been the system of choice. Due to the need to run different process conditions for multiple process parameters, process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system, viz. the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and develop an input control strategy. As a prerequisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between ambr and manufacturing scale, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study, and product quality results were generated. Upon comparison with DoE data from the bench-scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data to set action limits for the critical controlled parameters (CCPs), which were comparable to those from bench-scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench-scale bioreactor system for routine process development and process characterization.
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-09-02
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both calibration and validation. Additionally, the sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic, whereas TP was most sensitive to Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover. Calibration was performed on these sensitive parameters. TN loading produced satisfactory results for both calibration and validation, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.
Real-time parameter optimization based on neural network for smart injection molding
NASA Astrophysics Data System (ADS)
Lee, H.; Liau, Y.; Ryu, K.
2018-03-01
The manufacturing industry has been facing several challenges, including sustainability, performance and quality of production. Manufacturers attempt to enhance the competitiveness of companies by implementing CPS (Cyber-Physical Systems) through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology) at the manufacturing process level. The injection molding process has a short cycle time and high productivity, features that make it suitable for mass production. In addition, this process is used to produce precise parts in various industry fields such as automobiles, optics and medical devices. The injection molding process involves a mixture of discrete and continuous variables, and in order to optimize quality, the variables generated during the process must be considered. Furthermore, optimal parameter setting is time-consuming work, since process parameters cannot easily be corrected during process execution. In this research, we propose a neural-network-based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, in contrast to the pre-production parameter optimization of typical studies.
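A hedged sketch of the kind of neural-network surrogate this abstract describes: a small regressor maps process parameters to a quality response and is then searched for better settings. The four parameter names, their ranges, and the toy quality score are hypothetical, not taken from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Columns: melt temp (C), injection pressure (MPa), holding pressure (MPa),
# cooling time (s) -- hypothetical parameters and ranges
lo, hi = [200, 50, 30, 5], [260, 120, 90, 25]
X = rng.uniform(lo, hi, size=(300, 4))
y = -((X[:, 2] - 60) ** 2) / 100 + rng.normal(0, 0.5, 300)  # toy quality score

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, y)

# Crude "real-time" optimization: score candidate settings on the surrogate
candidates = rng.uniform(lo, hi, size=(5000, 4))
best = candidates[np.argmax(model.predict(candidates))]
print("suggested parameters:", best.round(1))
```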
COST ESTIMATION MODELS FOR DRINKING WATER TREATMENT UNIT PROCESSES
Cost models for unit processes typically utilized in a conventional water treatment plant and in package treatment plant technology are compiled in this paper. The cost curves are represented as a function of specified design parameters and are categorized into four major catego...
NASA Astrophysics Data System (ADS)
Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten
2017-07-01
Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10,000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection
NASA Astrophysics Data System (ADS)
Brasche, L. J. H.; Lopez, R.; Eisenmann, D.
2006-03-01
Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multiple-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d), usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness, recording of the UVA image, and in some cases formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.
NASA Technical Reports Server (NTRS)
Tuttle, M. E.; Brinson, H. F.
1986-01-01
The impact of slight errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power-law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical of T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.
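The dominance of the power-law exponent at long times can be illustrated numerically. Assuming a power-law creep-compliance form D(t) = D0 + D1*t**n of the kind underlying the Schapery parameters, the sketch below shows how a fixed 5% error in n produces a compliance error that grows with time; the coefficients are illustrative, not the T300/5208 values.

```python
# Illustrative coefficients only; time is in arbitrary units
D0, D1, n = 1.0, 0.05, 0.2

for t in [1e2, 1e4, 1e6]:
    D_true = D0 + D1 * t ** n
    D_pert = D0 + D1 * t ** (n * 1.05)   # 5% error in the exponent n
    print(f"t = {t:.0e}: relative error = {(D_pert - D_true) / D_true:.2%}")
```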
Ripley, Edward B.; Hallman, Russell L.
2015-11-10
Disclosed are methods and systems for controlling the microstructures of a soldered, brazed, welded, plated, cast, or vapor-deposited manufactured component. The systems typically use relatively weak magnetic fields of either constant or varying flux to affect material properties within a manufactured component, typically without modifying the alloy, changing the chemical composition of materials, or altering the time, temperature, or transformation parameters of a manufacturing process. Such systems and processes may be used with components consisting only of materials that are conventionally characterized as uninfluenced by magnetic forces.
An Introduction to Data Analysis in Asteroseismology
NASA Astrophysics Data System (ADS)
Campante, Tiago L.
A practical guide is presented to some of the main data analysis concepts and techniques employed contemporarily in the asteroseismic study of stars exhibiting solar-like oscillations. The subjects of digital signal processing and spectral analysis are introduced first. These concern the acquisition of continuous physical signals to be subsequently digitally analyzed. A number of specific concepts and techniques relevant to asteroseismology are then presented as we follow the typical workflow of the data analysis process, namely, the extraction of global asteroseismic parameters and individual mode parameters (also known as peak-bagging) from the oscillation spectrum.
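As a minimal illustration of the first step in that workflow, the sketch below computes the power spectrum of an evenly sampled synthetic light curve with numpy, the starting point from which global parameters such as the frequency of maximum power would later be extracted.

```python
import numpy as np

dt = 60.0                                    # cadence, s
t = np.arange(0, 30 * 86400, dt)             # 30-day time series
rng = np.random.default_rng(2)
flux = np.sin(2 * np.pi * 3e-3 * t) + rng.normal(0, 1, t.size)  # 3000 uHz tone

freq = np.fft.rfftfreq(t.size, dt)           # Hz
power = np.abs(np.fft.rfft(flux)) ** 2 / t.size

peak = freq[np.argmax(power[1:]) + 1]        # skip the zero-frequency bin
print(f"peak frequency ~ {peak * 1e6:.0f} microHz")
```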
Camera calibration based on the back projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui
2015-12-01
Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
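A hedged sketch of the back-projection step described above: map a detected image point back through the camera model onto the checkerboard plane (Z = 0 in the board frame) and measure the residual in 3D rather than in pixels. The intrinsics K and pose (R, t) are assumed known from the initial forward calibration.

```python
import numpy as np

def back_project_to_plane(uv, K, R, t):
    """Intersect the viewing ray of pixel uv with the board plane Z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # ray, camera frame
    ray_w = R.T @ ray_cam           # rotate the ray into the board frame
    origin_w = -R.T @ t             # camera center in the board frame
    s = -origin_w[2] / ray_w[2]     # scale so the point lands on Z = 0
    return origin_w + s * ray_w     # 3D intersection point on the plane

# 3D residuals between these points and the ideal grid coordinates would
# then drive the non-linear refinement step described in the abstract.
```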
A Rotating Plug Model of Friction Stir Welding Heat Transfer
NASA Technical Reports Server (NTRS)
Raghulapadu J. K.; Peddieson, J.; Buchanan, G. R.; Nunes, A. C.
2006-01-01
A simplified rotating plug model is employed to study the heat transfer phenomena associated with the friction stir welding process. An approximate analytical solution is obtained based on this idealized model and used both to demonstrate the qualitative influence of process parameters on predictions and to estimate temperatures produced in typical friction stir welding situations.
Optimizing coagulation-adsorption for haloform and TOC (Total Organic Carbon) reduction
NASA Astrophysics Data System (ADS)
Semmens, M. J.; Hohenstein, G.; Staples, A.; Norgaard, G.; Ayers, K.; Tyson, M. P.
1983-05-01
The removal of organic matter from Mississippi River water by coagulation and softening processes and the influence of operating parameters upon the removal process are examined. Furthermore, since activated carbon is typically employed to reduce organic concentrations, the effectiveness of various pretreatments is evaluated in terms of their impact upon carbon bed life and product water quality.
Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram
2018-06-08
Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early-phase development, when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. In order to overcome these development difficulties, an eight-factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined from which near-optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach to early-phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically, and potentially commercially, viable lyophilization cycles for a platform formulation that still has a variable range of compositions.
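A minimal sketch of how such a screening design might be analyzed: fit a main-effects-plus-quadratics model to a response such as primary drying time and read off the influential factors. The factor names, coded levels, and response values are hypothetical and reduced to four factors for brevity (a real DSD uses a specific fold-over run structure rather than random levels).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

factors = ["shelf_temp", "pressure", "fill_volume", "protein_conc"]
rng = np.random.default_rng(3)
X = rng.choice([-1.0, 0.0, 1.0], size=(13, 4))    # coded runs (DSD stand-in)
y = 10 - 3 * X[:, 0] + 1.5 * X[:, 1] + 2 * X[:, 0] ** 2 + rng.normal(0, 0.3, 13)

X_model = np.hstack([X, X ** 2])                  # main effects + quadratics
fit = LinearRegression().fit(X_model, y)
for name, coef in zip(factors + [f + "^2" for f in factors], fit.coef_):
    print(f"{name:15s} {coef:+.2f}")
```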
Jing, Nan; Li, Chuang; Chong, Yaqin
2017-01-20
An estimation method for indirectly observable parameters of a typical low dynamic vehicle (LDV) is presented. The estimation method utilizes apparent magnitude, azimuth angle, and elevation angle to estimate the position and velocity of a typical LDV, such as a high-altitude balloon (HAB). In order to validate the accuracy of the estimated parameters gained from an unscented Kalman filter, two sets of experiments were carried out to obtain nonresolved photometric and astrometric data. In the experiments, a HAB launch was planned; models of the HAB dynamics and kinematics and observation models were built to use as time-update and measurement-update functions, respectively. When the HAB was launched, a ground-based optoelectronic detector was used to capture the object images, which were processed using aperture photometry technology to obtain the time-varying apparent magnitude of the HAB. Two sets of actual and estimated parameters are given to clearly indicate the parameter differences. Two sets of errors between the actual and estimated parameters are also given to show how the estimated position and velocity differ with respect to the observation time. The similar distribution curves resulting from the two scenarios, which agree within 3σ, verify that nonresolved photometric and astrometric data can be used to estimate the indirectly observable state parameters (position and velocity) of a typical LDV. This technique can be applied to small and dim space objects in the future.
Wang, Xiaohua; Li, Xi; Rong, Mingzhe; Xie, Dingli; Ding, Dan; Wang, Zhixiang
2017-01-01
The ultra-high frequency (UHF) method is widely used in insulation condition assessment. However, UHF signal processing algorithms are complicated and the size of the result is large, which hinders extracting features and recognizing partial discharge (PD) patterns. This article investigated a chromatic methodology that is novel in PD detection. The principles of chromatic methodologies in color science are introduced. The chromatic processing represents UHF signals sparsely. The UHF signals obtained from PD experiments were processed using the chromatic methodology and characterized by three parameters in chromatic space (H, L, and S, representing dominant wavelength, signal strength, and saturation, respectively). The features of the UHF signals were studied hierarchically. The results showed that the chromatic parameters were consistent with conventional frequency-domain parameters. The global chromatic parameters can be used to distinguish UHF signals acquired by different sensors, and they reveal the propagation properties of the UHF signal in the L-shaped gas-insulated switchgear (GIS). Finally, typical PD defect patterns were recognized using the novel chromatic parameters in an actual GIS tank, and good recognition performance was achieved.
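A hedged sketch of chromatic processing as described above: weight the signal's frequency spectrum with three overlapping Gaussian detectors (R, G, B) and convert the three outputs to H, L, and S by analogy with the RGB-to-HLS color transform. The detector centers and widths are assumed values, not those of the paper, and the hue formula is simplified.

```python
import numpy as np

def chromatic_hls(freq, power, centers=(0.5e9, 1.0e9, 1.5e9), width=0.4e9):
    """Reduce a spectrum to (H, L, S) via three Gaussian detectors."""
    R, G, B = (np.trapz(power * np.exp(-0.5 * ((freq - c) / width) ** 2), freq)
               for c in centers)
    mx, mn = max(R, G, B), min(R, G, B)
    L = (mx + mn) / 2                                  # signal strength
    S = 0.0 if mx == mn else (mx - mn) / (mx + mn)     # saturation
    if mx == mn:                                       # hue: dominant frequency
        H = 0.0
    elif mx == R:
        H = (60 * (G - B) / (mx - mn)) % 360
    elif mx == G:
        H = 60 * (B - R) / (mx - mn) + 120
    else:
        H = 60 * (R - G) / (mx - mn) + 240
    return H, L, S
```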
Landerl, Karin
2013-01-01
Numerical processing has been demonstrated to be closely associated with arithmetic skills; however, our knowledge of the development of the relevant cognitive mechanisms is limited. The present longitudinal study investigated the developmental trajectories of numerical processing in 42 children with age-adequate arithmetic development and 41 children with dyscalculia over a 2-year period, from the beginning of Grade 2, when children were 7;6 years old, to the beginning of Grade 4. A battery of numerical processing tasks (dot enumeration, non-symbolic and symbolic comparison of one- and two-digit numbers, physical comparison, number line estimation) was given five times during the study (beginning and middle of each school year). Efficiency of numerical processing was a very good indicator of development in numerical processing, while within-task effects remained largely constant and showed low long-term stability before the middle of Grade 3. Children with dyscalculia showed less efficient numerical processing, reflected in specifically prolonged response times. Importantly, they showed consistently larger slopes for dot enumeration in the subitizing range, an untypically large compatibility effect when processing two-digit numbers, and they were consistently less accurate in placing numbers on a number line. Thus, we were able to identify parameters that can be used in future research to characterize numerical processing in typical and dyscalculic development. These parameters can also be helpful for identification of children who struggle in their numerical development.
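One of the parameters mentioned above, the dot-enumeration slope in the subitizing range, is simply the regression slope of response time on set size for small numerosities. A minimal sketch with invented data:

```python
import numpy as np

set_size = np.array([1, 1, 2, 2, 3, 3])            # subitizing range (1-3 dots)
rt_ms = np.array([520, 540, 560, 555, 600, 640])   # hypothetical child RTs

slope, intercept = np.polyfit(set_size, rt_ms, 1)
print(f"subitizing slope: {slope:.0f} ms per item")  # larger in dyscalculia
```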
NASA Technical Reports Server (NTRS)
Koda, M.; Seinfeld, J. H.
1982-01-01
The reconstruction of a concentration distribution from spatially averaged and noise-corrupted data is a central problem in processing atmospheric remote sensing data. Distributed parameter observer theory is used to develop reconstructibility conditions for distributed parameter systems having measurements typical of those in remote sensing. The relation of the reconstructibility condition to the stability of the distributed parameter observer is demonstrated. The theory is applied to a variety of remote sensing situations, and it is found that those in which concentrations are measured as a function of altitude satisfy the conditions of distributed state reconstructibility.
On the use of tower-flux measurements to assess the performance of global ecosystem models
NASA Astrophysics Data System (ADS)
El Maayar, M.; Kucharik, C.
2003-04-01
Global ecosystem models are important tools for the study of biospheric processes and their responses to environmental changes. Such models typically translate knowledge gained from local observations into estimates of regional or even global outcomes of ecosystem processes. A typical test of ecosystem models consists of comparing their output against tower-flux measurements of land surface-atmosphere exchange of heat and mass. To perform such tests, models are typically run using detailed information on soil properties (texture, carbon content,...) and vegetation structure observed at the experimental site (e.g., vegetation height, vegetation phenology, leaf photosynthetic characteristics,...). In global simulations, however, the earth's vegetation is typically represented by a limited number of plant functional types (PFTs; groups of plant species that have similar physiological and ecological characteristics). For each PFT (e.g., temperate broadleaf trees, boreal conifer evergreen trees,...), which can cover a very large area, a set of typical physiological and physical parameters is assigned. Thus, a legitimate question arises: how does the performance of a global ecosystem model run using detailed site-specific parameters compare with that of a less detailed global version where generic parameters are attributed to the group of vegetation species forming a PFT? To answer this question, we used a multiyear dataset, measured at two forest sites with contrasting environments, to compare the seasonal and interannual variability of surface-atmosphere exchange of water and carbon predicted by the Integrated BIosphere Simulator-Dynamic Global Vegetation Model. Two types of simulations were thus performed: a) detailed runs, in which observed vegetation characteristics (leaf area index, vegetation height,...) and soil carbon content, in addition to climate and soil type, are specified for the model run; and b) generic runs, in which only the observed climates and soil types at the measurement sites are used to run the model. The generic runs were performed for a number of years equal to the current age of the forests, initialized with no vegetation and a soil carbon density of zero.
Avian seasonal productivity is often modeled as a time-limited stochastic process. Many mathematical formulations have been proposed, including individual based models, continuous-time differential equations, and discrete Markov models. All such models typically include paramete...
ERIC Educational Resources Information Center
Engel-Yeger, Batya
2010-01-01
The objective of this study was to examine the applicability of the short sensory profile (SSP) for screening sensory processing disorders (SPDs) among typical children in Israel, and to evaluate the relationship between SPDs and socio-demographic parameters. Participants were 395 Israeli children, aged 3 years to 10 years 11 months, with typical…
NASA Astrophysics Data System (ADS)
Oberberg, Moritz; Styrnoll, Tim; Ries, Stefan; Bienholz, Stefan; Awakowicz, Peter
2015-09-01
Reactive sputter processes are used for the deposition of hard, wear-resistant and non-corrosive ceramic layers such as aluminum oxide (Al2O3). A well-known problem is target poisoning at high reactive gas flows, which results from the reaction of the reactive gas with the metal target. Consequently, the sputter rate decreases and secondary electron emission increases. Both parameters show a non-linear hysteresis behavior as a function of the reactive gas flow, and this leads to process instabilities. This work presents a new control method for Al2O3 deposition in a multiple-frequency CCP (MFCCP) based on plasma parameters. Until now, process controls have used parameters such as spectral line intensities of sputtered metal as an indicator of the sputter rate; a coupling between plasma and substrate is not considered. The control system in this work uses a new plasma diagnostic method: the multipole resonance probe (MRP) measures plasma parameters such as electron density by analyzing a typical resonance frequency of the system response. This concept combines target processes and plasma effects and directly controls the sputter source instead of the resulting target parameters.
Application of lab derived kinetic biodegradation parameters at the field scale
NASA Astrophysics Data System (ADS)
Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.
2003-04-01
Estimating the intrinsic remediation potential of an aquifer typically requires the accurate assessment of the biodegradation kinetics, the level of available electron acceptors and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are typically not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for forecasting plume development because they may be an oversimplification of the processes at the field scale and ignore several key processes, phenomena and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without having to calibrate the model. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to define the key controlling factors at the field scale. It is shown that the most uncertain input parameters have only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available and the Monod kinetic parameters have a significant influence on the simulated results. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation processes, provided that all controlling factors not necessarily observed at the lab scale are incorporated in the field-scale modelling. In this way, no separate scale relationships linking the laboratory and field scales need to be found: accurately incorporating at the larger scale the additional processes, phenomena and characteristics, namely a) advective and dispersive transport of one or more contaminants, b) advective and dispersive transport and availability of electron acceptors, c) mass transfer limitations and d) spatial heterogeneities, and applying well-defined lab-scale parameters should accurately describe field-scale processes.
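A hedged sketch of the kind of kinetics involved: dual-Monod degradation limited by both substrate and electron acceptor (oxygen), with microbial growth, integrated with scipy. The rate constants and stoichiometric factor are illustrative, not the BIO3D/Borden values.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_max, K_s, K_o, Y, b = 2.0, 1.0, 0.2, 0.5, 0.05   # illustrative, mg/L and 1/day

def rhs(t, y):
    S, O, X = y                                     # substrate, oxygen, biomass
    rate = k_max * X * S / (K_s + S) * O / (K_o + O)
    return [-rate,                # substrate consumption
            -3.0 * rate,          # oxygen demand (assumed stoichiometric factor)
            Y * rate - b * X]     # microbial growth minus decay

sol = solve_ivp(rhs, (0, 30), [10.0, 8.0, 0.1])
print("substrate after 30 d:", sol.y[0, -1].round(3), "mg/L")
```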
Recent developments in membrane-based separations in biotechnology processes: review.
Rathore, A S; Shirke, A
2011-01-01
Membrane-based separations are the most ubiquitous unit operations in biotech processes. There are several key reasons for this. First, they can be used with a large variety of applications including clarification, concentration, buffer exchange, purification, and sterilization. Second, they are available in a variety of formats, such as depth filtration, ultrafiltration, diafiltration, nanofiltration, reverse osmosis, and microfiltration. Third, they are simple to operate and are generally robust toward normal variations in feed material and operating parameters. Fourth, membrane-based separations typically require lower capital cost when compared to other processing options. As a result of these advantages, a typical biotech process has anywhere from 10 to 20 membrane-based separation steps. In this article we review the major developments that have occurred on this topic with a focus on developments in the last 5 years.
Influence of Process Parameters on the Process Efficiency in Laser Metal Deposition Welding
NASA Astrophysics Data System (ADS)
Güpner, Michael; Patschger, Andreas; Bliedtner, Jens
Conventionally manufactured tools are often constructed entirely of a high-alloyed, expensive tool steel. An alternative way to manufacture tools is the combination of a cost-efficient, mild steel and a functional coating in the interaction zone of the tool. Thermal processing methods, like laser metal deposition, are always characterized by thermal distortion, and the resistance against thermal distortion decreases with the reduction of the material thickness. As a consequence, special process management is necessary for the laser-based coating of thin parts or tools. The experimental approach in the present paper is to keep the energy and the mass per unit length constant by varying the laser power, the feed rate and the powder mass flow. The typical seam parameters are measured in order to characterize the cladding process, define process limits and evaluate the process efficiency.
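A short numerical illustration of the constant-line-energy approach described above: energy per unit length E = P/v and deposited mass per unit length m = mdot/v stay fixed while laser power, feed rate, and powder mass flow are scaled together. The baseline values are illustrative.

```python
P0, v0, mdot0 = 600.0, 0.5, 2.0   # W, m/min, g/min (illustrative baseline)

for scale in (1.0, 1.5, 2.0):
    P, v, mdot = P0 * scale, v0 * scale, mdot0 * scale
    print(f"P={P:.0f} W, v={v:.2f} m/min, mdot={mdot:.1f} g/min -> "
          f"E={P / v:.0f} W*min/m, m={mdot / v:.1f} g/m")   # E and m constant
```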
Radiation dosimetry for quality control of food preservation and disinfestation
NASA Astrophysics Data System (ADS)
McLaughlin, W. L.; Miller, A.; Uribe, R. M.
In the use of x and gamma rays and scanned electron beams to extend the shelf life of food by delay of sprouting and ripening, killing of microbes, and control of insect populations, quality assurance is provided by standardized radiation dosimetry. By strategic placement of calibrated dosimeters that are sufficiently stable and reproducible, it is possible to monitor minimum and maximum radiation absorbed dose levels and dose uniformity for a given processed foodstuff. The dosimetry procedure is especially important in the commissioning of a process and in making adjustments of process parameters (e.g. conveyor speed) to meet changes that occur in product and source parameters (e.g. bulk density and radiation spectrum). Routine dosimetry methods and certain corrections of dosimetry data may be selected for the radiations used in typical food processes.
Grinding, Machining Morphological Studies on C/SiC Composites
NASA Astrophysics Data System (ADS)
Xiao, Chun-fang; Han, Bing
2018-05-01
C/SiC composite is a typical difficult-to-machine material: it is hard and brittle, and in machining the cutting force is large, the material removal rate is low, the edge is prone to chipping, and tool wear is serious. In this paper, the grinding of C/SiC composite material along the fiber direction is studied. The surface microstructure and mechanical properties of C/SiC composites processed by ultrasonic machining were evaluated, and the change of surface quality with processing parameters was also studied. By comparing the performance of conventional grinding and ultrasonic grinding, it is shown that the surface roughness and functional characteristics of the material can be improved by optimizing the processing parameters.
Determination of optimal tool parameters for hot mandrel bending of pipe elbows
NASA Astrophysics Data System (ADS)
Tabakajew, Dmitri; Homberg, Werner
2018-05-01
Seamless pipe elbows are important components in mechanical, plant and apparatus engineering. Typically, they are produced by the so-called `Hamburg process'. In this hot forming process, the initial pipes are subsequently pushed over an ox-horn-shaped bending mandrel. The geometric shape of the mandrel influences the diameter, bending radius and wall thickness distribution of the pipe elbow. This paper presents the numerical simulation model of the hot mandrel bending process created to ensure that the optimum mandrel geometry can be determined at an early stage. A fundamental analysis was conducted to determine the influence of significant parameters on the pipe elbow quality. The chosen methods and approach as well as the corresponding results are described in this paper.
High Performance Input/Output for Parallel Computer Systems
NASA Technical Reports Server (NTRS)
Ligon, W. B.
1996-01-01
The goal of our project is to study the I/O characteristics of parallel applications used in Earth science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2 and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.
Effects of process parameters on the molding quality of the micro-needle array
NASA Astrophysics Data System (ADS)
Qiu, Z. J.; Ma, Z.; Gao, S.
2016-07-01
The micro-needle array, used in medical applications, is a typical injection-molded product with microstructures. Due to its tiny micro-feature sizes and high aspect ratios, it is prone to short-shot defects, leading to poor molding quality. The injection molding process of the micro-needle array was studied in this paper to find the effects of the process parameters on the molding quality and to provide theoretical guidance for practical production of high-quality products. With the shrinkage ratio and warpage of micro-needles as the evaluation indices of molding quality, an orthogonal experiment was conducted and an analysis of variance was carried out. From the results, the contribution rates were calculated to determine the influence of the various process parameters on molding quality, and the single-parameter method was used to analyse the main process parameter. It was found that the contribution rate of the holding pressure to shrinkage ratio and warpage reached 83.55% and 94.71%, respectively, far higher than that of the other parameters. The study revealed that the holding pressure is the main factor affecting the molding quality of the micro-needle array, and it should therefore be the focus in obtaining plastic parts of high quality in practical production.
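A minimal sketch of the contribution-rate calculation used in such orthogonal experiments: each factor's share of the total sum of squares indicates its influence on the response. The L9-style array and warpage values below are invented for illustration (constructed so that the first factor dominates, qualitatively mirroring the paper's finding).

```python
import numpy as np

# L9(3^3)-style coded levels for three hypothetical factors
levels = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                   [1, 0, 1], [1, 1, 2], [1, 2, 0],
                   [2, 0, 2], [2, 1, 0], [2, 2, 1]])
warpage = np.array([4.1, 3.9, 4.0, 2.8, 2.7, 2.9, 1.6, 1.8, 1.7])  # toy data

total_ss = np.sum((warpage - warpage.mean()) ** 2)
for j, name in enumerate(["holding_pressure", "melt_temp", "injection_speed"]):
    means = np.array([warpage[levels[:, j] == lv].mean() for lv in range(3)])
    factor_ss = 3 * np.sum((means - warpage.mean()) ** 2)  # 3 runs per level
    print(f"{name:17s} contribution: {factor_ss / total_ss:.1%}")
```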
Silicon Solar Cell Process Development, Fabrication and Analysis, Phase 1
NASA Technical Reports Server (NTRS)
Yoo, H. I.; Iles, P. A.; Tanner, D. P.
1979-01-01
Solar cells from RTR ribbons, EFG (RF and RH) ribbons, dendritic webs, Silso wafers, cast silicon by HEM, silicon on ceramic, and continuous Czochralski ingots were fabricated using a standard process typical of those currently used in the silicon solar cell industry. Back surface field (BSF) processing and other process modifications were included to give preliminary indications of possible improved performance. The parameters measured included open circuit voltage, short circuit current, curve fill factor, and conversion efficiency (all taken under AM0 illumination). Also measured for typical cells were spectral response, dark I-V characteristics, minority carrier diffusion length, and photoresponse by fine light spot scanning. The results were compared to the properties of cells made from conventional single-crystalline Czochralski silicon, with an emphasis on statistical evaluation. Limited efforts were made to identify growth defects which will influence solar cell performance.
Continuous welding of unidirectional fiber reinforced thermoplastic tape material
NASA Astrophysics Data System (ADS)
Schledjewski, Ralf
2017-10-01
Continuous welding techniques like thermoplastic tape placement with in-situ consolidation offer several advantages over traditional manufacturing processes like autoclave consolidation, thermoforming, etc. However, several important processing issues still need to be solved before it becomes an economically viable process. Intensive process analysis and optimization have been carried out in the past through experimental investigation, model definition and simulation development. Today, process simulation is capable of predicting the resulting consolidation quality, and the effects of material imperfections or process parameter variations are well known. But using this knowledge to control the process, based on online process monitoring and corresponding adaptation of the process parameters, is still challenging: solving inverse problems and using methods for automated code generation that allow fast implementation of algorithms on targets are required. The paper explains the placement technique in general. Process-material-property relationships and typical material imperfections are described. Furthermore, online monitoring techniques and how to use them for a model-based process control system are presented.
NASA Astrophysics Data System (ADS)
Ahuja, Bhrigu; Karg, Michael; Nagulin, Konstantin Yu.; Schmidt, Michael
The proposed paper illustrates the fabrication and characterization of high-strength aluminium-copper alloys processed using the Laser Beam Melting process. The Al-Cu alloys EN AW-2219 and EN AW-2618 are classified as wrought alloys, and 2618 is typically considered difficult to weld. The Laser Beam Melting (LBM) process, from the family of additive manufacturing processes, has the unique ability to form fully dense, complex 3D geometries from micro-sized metallic powder in a layer-by-layer fabrication methodology. The LBM process can most closely be associated with the conventional laser welding process, but differs significantly in the typical laser intensities and scan speeds used. Due to the use of high intensities and fast scan speeds, the process induces extremely high heating and cooling rates. This gives it a unique physical attribute, and its ability to process high-strength Al-Cu alloys therefore needs to be investigated. Experiments conducted during the investigations relate the induced energy density, controlled by varying process parameters, to the achieved relative densities of the fabricated 3D structures.
Universal Responses of Cyclic-Oxidation Models Studied
NASA Technical Reports Server (NTRS)
Smialek, James L.
2003-01-01
Oxidation is an important degradation process for materials operating in the high-temperature air or oxygen environments typical of jet turbine or rocket engines. Reaction of the combustion gases with the component material forms surface layer scales during these oxidative exposures. Typically, the instantaneous rate of reaction is inversely proportional to the existing scale thickness, giving rise to parabolic kinetics. However, more realistic applications entail periodic startup and shutdown. Some scale spallation may occur upon cooling, resulting in loss of the protective diffusion barrier provided by a fully intact scale. Upon reheating, the component will experience accelerated oxidation due to this spallation. Cyclic-oxidation testing has, therefore, been a mainstay of characterization and performance ranking for high-temperature materials. Models simulate this process by calculating how a scale spalls upon cooling and regrows upon heating (refs. 1 to 3). Recently released NASA software (COSP for Windows) allows researchers to specify a uniform layer or discrete segments of spallation (ref. 4). Families of model curves exhibit consistent regularity and trends with input parameters, and characteristic features have been empirically described in terms of these parameters. Although much insight has been gained from experimental and model curves, no equation has been derived that can describe this behavior explicitly as functions of the key oxidation parameters.
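A hedged sketch of this model family: parabolic scale growth during each hot cycle followed by spallation of a fixed area fraction of the retained scale on cooldown, with the net specimen weight change tracking oxygen gained minus metal carried off in spall. The parameters are illustrative, not calibrated to any alloy or to COSP itself.

```python
import numpy as np

kp, dt, f_spall = 0.01, 1.0, 0.02   # rate constant, hot hours/cycle, spall fraction
metal_frac = 0.53                   # metal mass fraction in Al2O3 (approximate)

w_ret, w_spalled = 0.0, 0.0         # retained and cumulative spalled scale, mg/cm^2
for cycle in range(1000):
    w_ret = np.sqrt(w_ret ** 2 + kp * dt)   # parabolic regrowth of retained scale
    spall = f_spall * w_ret                 # layer lost on this cooldown
    w_ret -= spall
    w_spalled += spall

# Net specimen weight change: oxygen retained minus metal lost in spalled oxide
dW = (1 - metal_frac) * w_ret - metal_frac * w_spalled
print(f"net weight change after 1000 cycles: {dW:.2f} mg/cm^2")
```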
ERIC Educational Resources Information Center
Filer, Herb; Windram, Kendall
Three types of vacuum filters and their operation are described in this lesson. Typical filter cycle, filter components and their functions, process control parameters, expected performance, and safety/historical aspects are considered. Conditioning methods are also described, although it is suggested that lessons on sludge characteristics, sludge…
Tavano, Alessandro; Pesarin, Anna; Murino, Vittorio; Cristani, Marco
2014-01-01
Individuals with Asperger syndrome/High Functioning Autism fail to spontaneously attribute mental states to the self and others, a life-long phenotypic characteristic known as mindblindness. We hypothesized that mindblindness would affect the dynamics of conversational interaction. Using generative models, in particular Gaussian mixture models and observed influence models, conversations were coded as interacting Markov processes, operating on novel speech/silence patterns, termed Steady Conversational Periods (SCPs). SCPs assume that whenever an agent's process changes state (e.g., from silence to speech), it causes a general transition of the entire conversational process, forcing inter-actant synchronization. SCPs fed into observed influence models, which captured the conversational dynamics of children and adolescents with Asperger syndrome/High Functioning Autism, and age-matched typically developing participants. Analyzing the parameters of the models by means of discriminative classifiers, the dialogs of patients were successfully distinguished from those of control participants. We conclude that meaning-free speech/silence sequences, reflecting inter-actant synchronization, at least partially encode typical and atypical conversational dynamics. This suggests a direct influence of theory of mind abilities onto basic speech initiative behavior.
Yen, Haw; Bailey, Ryan T; Arabi, Mazdak; Ahmadi, Mehdi; White, Michael J; Arnold, Jeffrey G
2014-09-01
Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the large number of parameters at the disposal of these models, circumstances may arise in which excellent global results are achieved using inaccurate magnitudes of these "intra-watershed" responses. When used for scenario analysis, a given model hence may inaccurately predict the global, in-stream effect of implementing land-use practices at the interior of the watershed. In this study, data regarding internal watershed behavior are used to constrain parameter estimation to maintain realistic intra-watershed responses while also matching available in-stream monitoring data. The methodology is demonstrated for the Eagle Creek Watershed in central Indiana. Streamflow and nitrate (NO3) loading are used as global in-stream comparisons, with two process responses, the annual mass of denitrification and the ratio of NO3 losses from subsurface and surface flow, used to constrain parameter estimation. Results show that imposing these constraints not only yields realistic internal watershed behavior but also provides good in-stream comparisons. Results further demonstrate that in the absence of intra-watershed constraints, evaluation of nutrient abatement strategies could be misleading, even though typical performance criteria are satisfied. Incorporating intra-watershed responses yields a watershed model that more accurately represents the observed behavior of the system and hence a tool that can be used with confidence in scenario evaluation.
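A minimal sketch of the constrained-calibration idea: reject (or heavily penalize) parameter sets whose intra-watershed responses fall outside plausible bounds, even when the in-stream fit is good. The response names, bounds, and the assumption that the model returns a dict of responses are hypothetical stand-ins, not the authors' implementation.

```python
def constrained_objective(params, model):
    """Score a parameter set; `model` is assumed to return a dict of both
    global (in-stream) and interior (intra-watershed) responses."""
    out = model(params)
    if not 50.0 <= out["denit_kg_ha"] <= 200.0:        # hypothetical plausible band
        return -1e9                                    # reject non-behavioral set
    if not 0.3 <= out["no3_subsurface_frac"] <= 0.9:   # hypothetical ratio bound
        return -1e9
    return out["flow_nse"] + out["no3_nse"]            # in-stream fit to maximize
```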
Cost analysis of composite fan blade manufacturing processes
NASA Technical Reports Server (NTRS)
Stelson, T. S.; Barth, C. F.
1980-01-01
The relative manufacturing costs were estimated for large high-technology fan blades prepared by advanced composite fabrication methods using seven candidate materials/process systems. These systems were identified as laminated resin matrix composite, filament-wound resin matrix composite, superhybrid solid laminate, superhybrid spar/shell, metal matrix composite, metal matrix composite with a spar and shell, and hollow titanium. The costs were calculated utilizing analytical process models, and all cost data are presented as normalized relative values, where 100 is the cost of a conventionally forged solid titanium fan blade whose geometry corresponded to a size typical of 42 blades per disc. Four costs were calculated for each of the seven candidate systems to assess the variation of cost with blade size; geometries typical of blade designs at 24, 30, 36 and 42 blades per disc were used. The impact of individual process yield factors on costs was also assessed, as well as the effects of process parameters, raw materials, labor rates and consumable items.
Application of an enhanced discrete element method to oil and gas drilling processes
NASA Astrophysics Data System (ADS)
Ubach, Pere Andreu; Arrufat, Ferran; Ring, Lev; Gandikota, Raju; Zárate, Francisco; Oñate, Eugenio
2016-03-01
The authors present results on the use of the discrete element method (DEM) for the simulation of drilling processes typical in the oil and gas exploration industry. The numerical method uses advanced DEM techniques using a local definition of the DEM parameters and combined FEM-DEM procedures. This paper presents a step-by-step procedure to build a DEM model for analysis of the soil region coupled to a FEM model for discretizing the drilling tool that reproduces the drilling mechanics of a particular drill bit. A parametric study has been performed to determine the model parameters in order to maintain accurate solutions with reduced computational cost.
A modified dynamical model of drying process of polymer blend solution coated on a flat substrate
NASA Astrophysics Data System (ADS)
Kagami, Hiroyuki
2008-05-01
We have proposed and modified a model of the drying process of polymer solution coated on a flat substrate for flat polymer film fabrication. Numerical simulation of the model reproduces, for example, a typical thickness profile of the polymer film formed after drying. Based on analyses of these numerical simulations, we have clarified how the distribution of polymer molecules on a flat substrate depends on various parameters. We then derived nonlinear equations of the drying process from the dynamical model and reported the results. The subject of the above studies was limited to solutions having one kind of solute, though the model could essentially deal with solutions having several kinds of solutes. Nowadays, however, discussion of the drying process of solutions having several kinds of solutes is needed, because such processes appear in many industrial settings: polymer blend solution is one instance, and a typical resist consists of a few kinds of polymers. We therefore introduced a dynamical model of the drying process of polymer blend solution coated on a flat substrate, together with results of numerical simulations of that model. But that model was the simplest one. In this study, we modify the dynamical model of the drying process of polymer blend solution by adding effects in which some parameters change with time as functions of other variables. We then consider the essence of the drying process of polymer blend solution through comparison between results of numerical simulations of the modified model and those of the former model.
NASA Astrophysics Data System (ADS)
Bouaziz, Nadia; Ben Manaa, Marwa; Ben Lamine, Abdelmottaleb
2017-11-01
The hydrogen absorption-desorption isotherms on LaNi3.8Al1.2-xMnx alloy at temperature T = 433 K are studied through various theoretical models. The analytical expressions of these models were deduced by exploiting the grand canonical ensemble of statistical physics under some simplifying hypotheses. Among these models, the one showing the best correlation with the experimental curves was selected. The physicochemical parameters intervening in the absorption-desorption processes and appearing in the model expressions could be deduced directly from the experimental isotherms by numerical fitting. Six parameters of the model are adjusted, namely the numbers of hydrogen atoms per site n1 and n2, the receptor site densities N1m and N2m, and the energetic parameters P1 and P2. The behaviors of these parameters are discussed in relation to the absorption and desorption processes to better understand and compare these phenomena. From the energetic parameters, we calculated the sorption energies, which typically range between 266 and 269.4 kJ/mol for the absorption process and between 267 and 269.5 kJ/mol for the desorption process, comparable to usual chemical bond energies. Using the adopted model expression, the thermodynamic potential functions that govern the absorption/desorption process, such as the internal energy Eint, the Gibbs free enthalpy G, and the entropy Sa, are derived.
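To make the fitting step concrete, the following is a minimal sketch of adjusting a two-site monolayer isotherm to uptake data with scipy. The double-Hill functional form, the synthetic data, and all numeric values are assumptions for illustration; the abstract does not give the authors' exact model expression.

```python
# Hedged sketch: fit a two-site monolayer isotherm of a Hill-type form often
# derived from grand-canonical statistical physics. The functional form is an
# assumption for illustration, not the authors' published expression.
import numpy as np
from scipy.optimize import curve_fit

def two_site_isotherm(P, n1, n2, N1m, N2m, P1, P2):
    """Hydrogen uptake Q(P) from two receptor-site families.

    n1, n2   : hydrogen atoms captured per site
    N1m, N2m : receptor site densities
    P1, P2   : energetic (half-saturation) pressure parameters
    """
    return (n1 * N1m / (1.0 + (P1 / P) ** n1)
            + n2 * N2m / (1.0 + (P2 / P) ** n2))

# Synthetic stand-in for measured absorption data (pressure, uptake).
P_data = np.linspace(0.1, 10.0, 40)
Q_data = two_site_isotherm(P_data, 1.2, 0.8, 2.0, 1.5, 0.5, 3.0)
Q_data += np.random.default_rng(0).normal(0, 0.01, P_data.size)

popt, _ = curve_fit(two_site_isotherm, P_data, Q_data,
                    p0=[1, 1, 1, 1, 1, 1], bounds=(0, np.inf))
print("adjusted parameters (n1, n2, N1m, N2m, P1, P2):", popt)
```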
NASA Astrophysics Data System (ADS)
Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.
2017-12-01
Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of hydrologic model applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters, as well as for nonlinear scaling effects. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution, and then scales them with scaling functions to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both functional forms and geophysical predictors. TFs used to estimate the parameters of hydrologic models typically rely on previous studies or are derived in an ad hoc, heuristic manner, potentially not exploiting the full information content of the geophysical attributes for optimal parameter identification. It is therefore necessary to first uncover relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to gain insight into which geophysical attributes are related to model parameters, and to what extent. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis with various hydrologic signatures as objectives, to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
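A minimal sketch of the MPR mechanics described above: a transfer function maps fine-scale attributes to a parameter field at data resolution, and a scaling function aggregates it to the model grid. The TF form, its coefficients, and the attribute fields are hypothetical placeholders rather than the study's actual TFs.

```python
# Minimal MPR sketch: transfer function (TF) at data resolution, then upscaling.
# TF form and coefficients (a, b, c) are hypothetical, for illustration only.
import numpy as np

def transfer_function(sand_frac, clay_frac, a=1.0, b=-0.8, c=0.5):
    # Example TF: porosity-like parameter from soil texture fractions.
    return a + b * clay_frac + c * sand_frac

rng = np.random.default_rng(1)
sand = rng.uniform(0.1, 0.7, (8, 8))   # fine-scale attribute fields,
clay = rng.uniform(0.1, 0.5, (8, 8))   # e.g. 1 km soil texture maps

fine_param = transfer_function(sand, clay)

def upscale(field, block=4, how="arithmetic"):
    # Scaling function: aggregate fine cells to the coarser model resolution.
    h, w = field.shape
    blocks = field.reshape(h // block, block, w // block, block)
    if how == "harmonic":
        return block * block / (1.0 / blocks).sum(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

model_param = upscale(fine_param, block=4, how="arithmetic")
print(model_param)  # 2x2 parameter field at model resolution
```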
WAMA: a method of optimizing reticle/die placement to increase litho cell productivity
NASA Astrophysics Data System (ADS)
Dor, Amos; Schwarz, Yoram
2005-05-01
This paper focuses on reticle/field placement methodology issues, the disadvantages of typical methods used in the industry, and the innovative way in which the WAMA software solution achieves optimized placement. Typical wafer placement methodologies used in the semiconductor industry consider a very limited number of parameters, such as placing the maximum number of dies on the wafer circle and manually modifying die placement to minimize edge yield degradation. This paper describes how the WAMA software takes process characteristics, manufacturing constraints, and business objectives into account to optimize placement for maximum stepper productivity and maximum good die (yield) on the wafer.
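For contrast with the WAMA approach, here is a sketch of the naive criterion the paper criticizes: maximizing the gross die count on the wafer circle by sweeping grid offsets. Wafer and die dimensions are illustrative assumptions.

```python
# Hedged sketch of the naive gross-die count: place a rectangular die grid on
# a circular wafer and count dies fully inside the usable radius. Real tools
# (like the WAMA software described) add process, yield and business factors.
import numpy as np

def gross_die_count(wafer_d=300.0, die_w=10.0, die_h=8.0,
                    edge_excl=3.0, x_off=0.0, y_off=0.0):
    r = wafer_d / 2.0 - edge_excl
    count = 0
    nx, ny = int(np.ceil(wafer_d / die_w)), int(np.ceil(wafer_d / die_h))
    for i in range(-nx, nx + 1):
        for j in range(-ny, ny + 1):
            # Die lower-left corner on a grid shifted by (x_off, y_off).
            x0, y0 = i * die_w + x_off, j * die_h + y_off
            corners = [(x0, y0), (x0 + die_w, y0),
                       (x0, y0 + die_h), (x0 + die_w, y0 + die_h)]
            if all(np.hypot(x, y) <= r for x, y in corners):
                count += 1
    return count

# Sweep grid offsets and keep the placement maximizing the gross die count.
best = max(((gross_die_count(x_off=dx, y_off=dy), dx, dy)
            for dx in np.linspace(0, 10, 11)
            for dy in np.linspace(0, 8, 9)), key=lambda t: t[0])
print("best gross die count, x/y offset:", best)
```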
Simple Heat Treatment for Production of Hot-Dip Galvanized Dual Phase Steel Using Si-Al Steels
NASA Astrophysics Data System (ADS)
Equihua-Guillén, F.; García-Lara, A. M.; Muñíz-Valdes, C. R.; Ortíz-Cuellar, J. C.; Camporredondo-Saucedo, J. E.
2014-01-01
This work presents relevant metallurgical considerations for producing galvanized dual phase steels from low-cost aluminum-silicon steels made by continuous strip processing. Two steels with different Si and Al contents were rapidly austenitized in the two-phase ferrite + austenite (α + γ) field to obtain dual phase steels suitable for the hot-dip galvanizing process under typical parameters of a continuous annealing processing line. Dual phase tensile properties were obtained from specimens cooled from a temperature below Ar3, held for 3 min, cooled to an intermediate temperature above Ar1, and quenched in a Zn bath at 465 °C. The results show the typical microstructure and tensile properties of galvanized dual phase steels. Finally, the synergistic effect of aluminum, silicon, and residual chromium on the martensite start temperature (Ms), critical cooling rate (CR), volume fraction of martensite, and tensile properties has been studied.
Testing the Joint UK Land Environment Simulator (JULES) for flood forecasting
NASA Astrophysics Data System (ADS)
Batelis, Stamatios-Christos; Rosolem, Rafael; Han, Dawei; Rahman, Mostaquimur
2017-04-01
Land Surface Models (LSMs) are based on physical principles and simulate the exchanges of energy, water, and biogeochemical cycles between the land surface and the lower atmosphere. Such models are typically applied to climate studies or the effects of land use changes, but as the resolution of LSMs and of supporting observations continuously increases, their representation of hydrological processes needs to be addressed adequately. Changes in climate and land use can alter the hydrology of a region, for instance by altering its flooding regime. LSMs can be a powerful tool because of their ability to spatially represent a region at much finer resolution. However, despite such advantages, their performance has not been extensively assessed for flood forecasting, simply because typical hydrological processes such as overland flow and river routing are still either ignored or only roughly represented. In this study, we initially test the Joint UK Land Environment Simulator (JULES) as a flood forecast tool, focusing on its river routing scheme. In particular, the JULES river routing parameterization is based on the Rapid Flow Model (RFM), which relies on six prescribed parameters (two surface and two subsurface wave celerities, and two return flow fractions). Although this routing scheme is simple, the prescription of its six default parameters is still too generalized. Our aim is to understand the importance of each RFM parameter through a series of JULES simulations at a number of catchments in the UK for the 2006-2015 period. This is carried out, for instance, by making a number of assumptions about parameter behaviour (e.g., spatially uniform versus varying and/or temporally constant versus time-varying parameters within each catchment). Hourly radar rainfall data in combination with daily meteorological data from CHESS (Climate, Hydrological and Ecological research Support System), both at 1 km² resolution, are used. The evaluation of the model is based on hourly runoff data provided by the National River Flow Archive, using a number of model performance metrics. We use a calibrated, conceptually based lumped model, more typically applied in flood studies, as a benchmark for our analysis.
Ergodicity-breaking bifurcations and tunneling in hyperbolic transport models
NASA Astrophysics Data System (ADS)
Giona, M.; Brasiello, A.; Crescitelli, S.
2015-11-01
One of the main differences between parabolic transport, associated with Langevin equations driven by Wiener processes, and hyperbolic models related to generalized Kac equations driven by Poisson processes, is the occurrence in the latter of multiple stable invariant densities (Frobenius multiplicity) in certain regions of the parameter space. This phenomenon is associated with the occurrence in linear hyperbolic balance equations of a typical bifurcation, referred to as the ergodicity-breaking bifurcation, the properties of which are thoroughly analyzed.
Non-dimensional groups in the description of finite-amplitude sound propagation through aerosols
NASA Technical Reports Server (NTRS)
Scott, D. S.
1976-01-01
Several parameters, which have fairly transparent physical interpretations, appear in the analytic description of finite-amplitude sound propagation through aerosols. Typically, each of these parameters characterizes, in some sense, either the sound or the aerosol. Fairly obvious combinations of these parameters yield non-dimensional groups which, in turn, characterize the nature of the acoustic-aerosol interaction. This theme is developed to illustrate how a quick examination of such parameters and groups can yield information about the nature of the processes involved, without the need for extensive mathematical analysis. The concept is developed primarily from the viewpoint of sound propagation through aerosols, although complementary acoustic-aerosol interaction phenomena are briefly noted.
Giessler, Mathias; Tränckner, Jens
2018-02-01
The paper presents a simplified model that quantifies the economic and technical consequences of changing conditions in wastewater systems at the utility level. It was developed from data provided by stakeholders and ministries, collected through a survey that determined the resulting effects and the measures adopted. The model comprises all substantial cost-relevant assets and activities of a typical German wastewater utility. It consists of three modules: i) Sewer, describing the state development of sewer systems; ii) WWTP, considering process parameters of wastewater treatment plants (WWTPs); and iii) Cost Accounting, calculating expenses in the cost categories and the resulting charges. The validity and accuracy of the model were verified using historical data from an exemplary wastewater utility. The calculated process and economic parameters show high accuracy compared with measured parameters and actual expenses. The model is thus proposed to support strategic, process-oriented decision making at the utility level. Copyright © 2017 Elsevier Ltd. All rights reserved.
Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis
NASA Astrophysics Data System (ADS)
Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.
2005-12-01
The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills, their potential impact on the environment, and the associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfills and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus an anaerobic digester. The results indicate that the relative surface area of cellulose and the proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.
Transport of bacteria through geologic media may be viewed as being governed by sorption-desorption reactions. In this investigation, four facets of the process were examined: (I) the impact of sorption on bacterial transport under typical ground water flow velocities and a diffe...
Seasonal change of WEPP erodibility parameters on a fallow plot
D. K. McCool; S. Dun; J. Q. Wu; W. J. Elliot
2011-01-01
In cold regions, frozen soil has a significant influence on runoff and water erosion. Frozen soil can reduce infiltration capacity, and the freeze-thaw processes degrade soil cohesive strength and increase soil erodibility. In the Inland Pacific Northwest of the USA, major erosion events typically occur during winter from low-intensity rain, snowmelt, or both as frozen...
Minimization of operational impacts on spectrophotometer color measurements for cotton
USDA-ARS?s Scientific Manuscript database
A key cotton quality and processing property that is gaining increasing importance is the color of the cotton. Cotton fiber in the U.S. is classified for color using the Uster® High Volume Instrument (HVI), using the parameters Rd and +b. Rd and +b are specific to cotton fiber and are not typical ...
Orejas, Jaime; Pfeuffer, Kevin P; Ray, Steven J; Pisonero, Jorge; Sanz-Medel, Alfredo; Hieftje, Gary M
2014-11-01
Ambient desorption/ionization (ADI) sources coupled to mass spectrometry (MS) offer outstanding analytical features: direct analysis of real samples without sample pretreatment, combined with the selectivity and sensitivity of MS. Since ADI sources typically work in the open atmosphere, ambient conditions can affect the desorption and ionization processes. Here, the effects of internal source parameters and ambient humidity on the ionization processes of the flowing atmospheric pressure afterglow (FAPA) source are investigated. The interaction of reagent ions with a range of analytes is studied in terms of sensitivity and of the processes that occur in the ionization reactions. The results show that internal parameters leading to higher gas temperatures afford higher sensitivities, although fragmentation is also affected. In the case of humidity, only extremely dry conditions led to higher sensitivities, while fragmentation remained unaffected.
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan
2016-09-01
Land surface models incorporate a large number of process descriptions containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters, however, are fixed numbers in the computer code, which hinders model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, covering 42 of the 71 standard parameters and 75 of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which has proved oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs; these parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance, so a calibration of Noah-MP against either of these fluxes should give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters; calibrating, for example, only soil parameters hence limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
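A sketch of the Sobol' workflow used in the study, written with the SALib library (classic SALib 1.x API) and a cheap stand-in function instead of Noah-MP; parameter names and bounds are invented for illustration.

```python
# Hedged sketch of a Sobol' global sensitivity analysis with SALib, using a
# toy "model" in place of Noah-MP (running the land surface model itself is
# far more involved). Names and bounds are illustrative assumptions.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["soil_resistance_const", "max_stomatal_cond", "snow_albedo"],
    "bounds": [[1.0, 500.0], [0.001, 0.05], [0.4, 0.9]],
}

def toy_model(x):
    # Stand-in for a latent-heat flux; one input mimics a hard-coded constant.
    rs, gs, alb = x
    return 200.0 / (1.0 + rs / 100.0) + 3000.0 * gs - 50.0 * alb

X = saltelli.sample(problem, 1024)          # Saltelli sampling scheme
Y = np.apply_along_axis(toy_model, 1, X)    # evaluate all parameter sets
Si = sobol.analyze(problem, Y)

# First-order (S1) and total-order (ST) Sobol' indexes per parameter; the
# study treats indexes above ~1% as marking a sensitive parameter.
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1={s1:.3f}, ST={st:.3f}")
```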
Continued Data Acquisition Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwellenbach, David
This task focused on improving techniques for integrating data acquisition of secondary particles correlated in time with detected cosmic-ray muons. Scintillation detectors with Pulse Shape Discrimination (PSD) capability show the most promise as a detector technology based on work in FY13. Typically, PSD parameters are determined prior to an experiment and the results are based on these parameters. By saving data in list mode, including the fully digitized waveform, any experiment can effectively be replayed to adjust PSD and other parameters for the best data capture. List mode requires time synchronization of two independent data acquisition (DAQ) systems: the muon tracker and the particle detector system. Techniques to synchronize these systems were studied, and two basic techniques were identified: real-time mode and sequential mode. Real-time mode is the preferred approach but has proven to be a significant challenge, since two FPGA systems with different clocking parameters must be synchronized. Sequential processing is expected to work with virtually any DAQ but requires more post-processing to extract the data.
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical, and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may, however, have process descriptions that contain fixed, hard-coded numbers in the computer code which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values in restricting the model's agility during parameter estimation. We found 139 hard-coded values across all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which are mostly distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options; 42 standard parameters and 75 hard-coded parameters were active with the chosen options. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which has proved oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs; these parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. Calibrating Noah-MP against either latent heat observations or river runoff data should therefore give comparable results. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating only a subset of parameters, for example only soil parameters, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
Interpreting the Weibull fitting parameters for diffusion-controlled release data
NASA Astrophysics Data System (ADS)
Ignacio, Maxime; Chubynsky, Mykyta V.; Slater, Gary W.
2017-11-01
We examine the diffusion-controlled release of molecules from passive delivery systems using both analytical solutions of the diffusion equation and numerically exact Lattice Monte Carlo data. For very short times, the release process follows a √t power law, typical of diffusion processes, while the long-time asymptotic behavior is exponential. The crossover time between these two regimes is determined by the boundary conditions and the initial loading of the system. We show that while the widely used Weibull function provides a reasonable fit (in terms of statistical error), it has two major drawbacks: (i) it does not capture the correct limits and (ii) there is no direct connection between the fitting parameters and the properties of the system. Using a physically motivated interpolating fitting function that correctly includes both time regimes, we are able to predict the values of the Weibull parameters, which allows us to propose a physical interpretation.
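A brief sketch of fitting the Weibull function to a release curve with scipy, using synthetic data that reproduce the two limits noted above; all numbers are illustrative.

```python
# Hedged sketch: fit the widely used Weibull release function to a synthetic
# release curve (the paper's analytical/Monte Carlo data are not reproduced).
import numpy as np
from scipy.optimize import curve_fit

def weibull_release(t, tau, beta):
    # Fraction released at time t: 1 - exp(-(t/tau)^beta)
    return 1.0 - np.exp(-(t / tau) ** beta)

# Synthetic "diffusion-like" data: sqrt(t) at short times, exponential
# saturation at long times (the two regimes noted in the abstract).
t = np.linspace(0.01, 10.0, 200)
data = np.minimum(0.6 * np.sqrt(t), 1.0 - 0.8 * np.exp(-t / 1.5))

(tau, beta), _ = curve_fit(weibull_release, t, data,
                           p0=[1.0, 1.0], bounds=(0, np.inf))
print(f"fitted Weibull parameters: tau={tau:.3f}, beta={beta:.3f}")

# Note: a single Weibull exponent cannot reproduce both the sqrt(t) short-time
# regime and a purely exponential long-time decay at once, which is drawback
# (i) discussed by the authors despite the good statistical fit.
```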
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peryshkin, A. Yu., E-mail: alexb700@yandex.ru; Makarov, P. V., E-mail: bacardi@ispms.ru; Eremin, M. O., E-mail: bacardi@ispms.ru
An evolutionary approach proposed in [1, 2], combining the achievements of traditional macroscopic solid mechanics and basic ideas of nonlinear dynamics, is applied in a numerical simulation of present-day tectonic plate motion and the seismic process in Central Asia. Relative values of the strength parameters of rigid blocks with respect to the soft zones were characterized by the δ parameter, which was varied in the numerical experiments within δ = 1.1–1.8 for different groups of the zonal-block divisibility. In general, the numerical simulations of tectonic block motion and the accompanying seismic process in the model geomedium indicate that the numerical solutions of the solid mechanics equations characterize its deformation as the typical behavior of a nonlinear dynamic system under conditions of self-organized criticality.
Restoration of motion blurred images
NASA Astrophysics Data System (ADS)
Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.
2017-08-01
Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance imperfections of imaging systems, quantization errors, atmospheric turbulence, or relative motion between the camera and objects. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of the captured image, first to estimate the degradation parameters and then to restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of the accuracy of image restoration, given by an objective criterion.
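The following 1-D sketch illustrates the frequency-domain principle behind such methods: a linear motion blur acts as a box filter whose spectrum has periodic near-zeros, and the position of the first deep notch reveals the blur length. The full 2-D method also estimates the blur angle, which is omitted here; all values are illustrative.

```python
# Hedged 1-D sketch: estimate a linear motion-blur length from the periodic
# near-zeros that the box-filter blur imprints on the frequency spectrum.
import numpy as np

rng = np.random.default_rng(2)
N, L = 512, 16                        # signal length, true blur length (px)
row = rng.uniform(size=N)             # one image row as a stand-in
kernel = np.zeros(N); kernel[:L] = 1.0 / L
blurred = np.real(np.fft.ifft(np.fft.fft(row) * np.fft.fft(kernel)))

spectrum = np.abs(np.fft.rfft(blurred))
log_spec = np.log(spectrum + 1e-12)

# Zeros of the sinc-like blur spectrum appear every N/L frequency bins;
# locate the first deep notch relative to the typical spectral level.
thresh = np.median(log_spec) - 6.0
k = int(np.argmax(log_spec[2:] < thresh)) + 2   # skip the DC region
L_est = round(N / k)
print(f"true blur length {L}, estimated {L_est}")
```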
NASA Astrophysics Data System (ADS)
Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula
2018-03-01
Estimating methane (CH4) emissions from natural wetlands is complex, and the estimates contain large uncertainties. The models used for the task are typically heavily parameterized, and the parameter values are not well known. In this study, we perform a Bayesian model calibration for a new wetland CH4 emission model to improve the quality of the predictions and to understand the limitations of such models. The detailed process model that we analyze contains descriptions of CH4 production from anaerobic respiration, CH4 oxidation, and gas transport by diffusion, ebullition, and the aerenchyma cells of vascular plants. The processes are controlled by several tunable parameters. We use a hierarchical statistical model to describe the parameters and obtain the posterior distributions of the parameters and the uncertainties in the processes with adaptive Markov chain Monte Carlo (MCMC), importance resampling, and time series analysis techniques. The estimation utilizes measurement data from the Siikaneva flux measurement site in southern Finland. The uncertainties related to the parameters and the modeled processes are described quantitatively. At the process level, the flux measurement data are able to constrain the CH4 production processes, methane oxidation, and the different gas transport processes. The posterior covariance structures explain how the parameters and the processes are related. Additionally, the flux and flux component uncertainties are analyzed at both the annual and daily levels. The parameter posterior densities obtained provide information regarding the importance of the different processes, which is also useful for the development of wetland methane emission models other than the square root HelsinkI Model of MEthane buiLd-up and emIssion for peatlands (sqHIMMELI). The hierarchical modeling allows us to assess the effects of some of the parameters on an annual basis. The results of the calibration and the cross validation suggest that the early spring net primary production could be used to predict parameters affecting the annual methane production. Even though the calibration is specific to the Siikaneva site, the hierarchical modeling approach is well suited for larger-scale studies, and the results of the estimation pave the way for a regional- or global-scale Bayesian calibration of wetland emission models.
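A minimal sketch of the calibration mechanics: random-walk Metropolis sampling of a toy emission model's parameters against noisy observations. The real study uses the sqHIMMELI process model, hierarchical priors, and adaptive MCMC; the model, priors, and numbers below are placeholders.

```python
# Hedged sketch of Bayesian calibration by random-walk Metropolis MCMC on a
# toy CH4-flux model; the actual study calibrates the sqHIMMELI process model.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(365)
true_theta = np.array([2.0, 0.05])               # base rate, temp. slope
temp = 10.0 + 15.0 * np.sin(2 * np.pi * t / 365)

def flux_model(theta, temp):
    # Toy emission model: exponential temperature response.
    return theta[0] * np.exp(theta[1] * temp)

obs = flux_model(true_theta, temp) + rng.normal(0, 0.3, t.size)

def log_post(theta):
    if theta[0] <= 0:                            # flat prior, positivity only
        return -np.inf
    resid = obs - flux_model(theta, temp)
    return -0.5 * np.sum(resid**2) / 0.3**2      # Gaussian likelihood

theta, lp, chain = np.array([1.0, 0.01]), None, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.02, 0.002])  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

posterior = np.array(chain[5000:])               # discard burn-in
print("posterior mean:", posterior.mean(axis=0), "truth:", true_theta)
```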
Using Noise and Fluctuations for In Situ Measurements of Nitrogen Diffusion Depth.
Samoila, Cornel; Ursutiu, Doru; Schleer, Walter-Harald; Jinga, Vlad; Nascov, Victor
2016-10-05
In manufacturing processes involving diffusion (of C, N, S, etc.), the evolution of the layer depth is of the utmost importance: the success of the entire process depends on this parameter. Currently, nitriding is typically either calibrated using a "post process" method or controlled via indirect measurements (H2, O2, H2O + CO2). In the absence of "in situ" monitoring, any variation in the process parameters (gas concentration, temperature, steel composition, distance between sensors and furnace chamber) can cause expensive process inefficiency or failure. Indirect measurements can prevent process failure, but uncertainties and complications may arise in the relationship between the measured parameters and the actual diffusion process. In this paper, a method based on noise and fluctuation measurements is proposed that offers direct control of the layer depth evolution, because the parameters of interest are measured in direct contact with the nitrided steel (represented by the active electrode). The paper addresses two related sets of experiments. The first set consisted of laboratory tests on nitrided samples using Barkhausen noise and yielded a linear relationship between the frequency exponent in the Hooge equation and the nitriding time. For the second set, a specific sensor based on conductivity noise (at the nitriding temperature) was built for shop-floor experiments. Although two different types of noise were measured in these two sets of experiments, the use of the frequency exponent to monitor the process evolution remained valid.
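To illustrate the monitored quantity, the sketch below estimates the frequency exponent α of a 1/f^α power spectral density by log-log regression over a Welch periodogram, using synthetic shaped noise in place of the paper's Barkhausen or conductivity-noise signals.

```python
# Hedged sketch: extract the frequency exponent alpha of a 1/f^alpha spectrum
# by linear regression in log-log space; synthetic noise stands in for the
# measured Barkhausen / conductivity-noise signals.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
fs, n = 10_000.0, 2**18

# Synthesize 1/f^alpha noise by shaping white noise in the frequency domain.
alpha_true = 1.3
white = np.fft.rfft(rng.normal(size=n))
f = np.fft.rfftfreq(n, d=1.0 / fs)
shaped = white * np.where(f > 0, f, 1.0) ** (-alpha_true / 2.0)
shaped[0] = 0.0
signal = np.fft.irfft(shaped, n)

freqs, psd = welch(signal, fs=fs, nperseg=4096)
mask = (freqs > 1.0) & (freqs < 1000.0)        # fit band away from DC/Nyquist
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)
print(f"estimated frequency exponent: {-slope:.2f} (true {alpha_true})")
```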
NASA Astrophysics Data System (ADS)
Flores, J. C.
2015-12-01
For ancient civilizations, the shift from disorder to organized urban settlements is viewed as a phase-transition simile. The number of monumental constructions, assumed to be a signature of civilization processes, corresponds to the order parameter, and effective connectivity becomes related to the control parameter. Based on parameter estimations from archaeological and paleo-climatological data, this study analyzes the rise and fall of the ancient Caral civilization on the South Pacific coast during a period of small ENSO fluctuations (approximately 4500 BP). Other examples considered include civilizations on Easter Island and the Maya Lowlands. This work considers a typical nonlinear third order evolution equation and numerical simulations.
Tavano, Alessandro; Pesarin, Anna; Murino, Vittorio; Cristani, Marco
2014-01-01
Individuals with Asperger syndrome/High Functioning Autism fail to spontaneously attribute mental states to the self and others, a life-long phenotypic characteristic known as mindblindness. We hypothesized that mindblindness would affect the dynamics of conversational interaction. Using generative models, in particular Gaussian mixture models and observed influence models, conversations were coded as interacting Markov processes, operating on novel speech/silence patterns, termed Steady Conversational Periods (SCPs). SCPs assume that whenever an agent's process changes state (e.g., from silence to speech), it causes a general transition of the entire conversational process, forcing inter-actant synchronization. SCPs fed into observed influence models, which captured the conversational dynamics of children and adolescents with Asperger syndrome/High Functioning Autism, and age-matched typically developing participants. Analyzing the parameters of the models by means of discriminative classifiers, the dialogs of patients were successfully distinguished from those of control participants. We conclude that meaning-free speech/silence sequences, reflecting inter-actant synchronization, at least partially encode typical and atypical conversational dynamics. This suggests a direct influence of theory of mind abilities onto basic speech initiative behavior. PMID:24489674
Puyuelo, B; Gea, T; Sánchez, A
2014-08-01
In this study, we evaluated different strategies for optimizing aeration during the active thermophilic stage of the composting process of source-selected Organic Fraction of Municipal Solid Waste (biowaste), using bench-scale reactors (50 L). The strategies include: typical cyclic aeration, an oxygen feedback controller, and a new self-developed controller based on on-line maximization of the oxygen uptake rate (OUR) during the process. Results highlight differences in the emission of the most representative greenhouse gases (GHG) emitted from composting (methane and nitrous oxide), as well as in gases typically related to composting odor problems (ammonia as a typical example). Specifically, the cyclic controller produces emissions that can be double those of the OUR controller, whereas the oxygen feedback controller performs better than the cyclic controller. A new parameter, the respiration index efficiency, is presented to quantitatively evaluate the GHG emissions and, in consequence, the main negative environmental impact of the composting process. Other aspects such as the stability of the compost produced and the consumption of resources are also evaluated for each controller. Copyright © 2014 Elsevier Ltd. All rights reserved.
Effects of temperature and mass conservation on the typical chemical sequences of hydrogen oxidation
NASA Astrophysics Data System (ADS)
Nicholson, Schuyler B.; Alaghemandi, Mohammad; Green, Jason R.
2018-01-01
Macroscopic properties of reacting mixtures are necessary to design synthetic strategies, determine yield, and improve the energy and atom efficiency of many chemical processes. The set of time-ordered sequences of chemical species are one representation of the evolution from reactants to products. However, only a fraction of the possible sequences is typical, having the majority of the joint probability and characterizing the succession of chemical nonequilibrium states. Here, we extend a variational measure of typicality and apply it to atomistic simulations of a model for hydrogen oxidation over a range of temperatures. We demonstrate an information-theoretic methodology to identify typical sequences under the constraints of mass conservation. Including these constraints leads to an improved ability to learn the chemical sequence mechanism from experimentally accessible data. From these typical sequences, we show that two quantities defining the variational typical set of sequences—the joint entropy rate and the topological entropy rate—increase linearly with temperature. These results suggest that, away from explosion limits, data over a narrow range of thermodynamic parameters could be sufficient to extrapolate these typical features of combustion chemistry to other conditions.
Cao, Jianping; Xiong, Jianyin; Wang, Lixin; Xu, Ying; Zhang, Yinping
2016-09-06
Solid-phase microextraction (SPME) is regarded as a nonexhaustive sampling technique with a smaller extraction volume and a shorter extraction time than traditional sampling techniques and is hence widely used. The SPME sampling process is affected by the convection or diffusion effect along the coating surface, but this factor has seldom been studied. This paper derives an analytical model to characterize SPME sampling for semivolatile organic compounds (SVOCs) as well as for volatile organic compounds (VOCs) by considering the surface mass transfer process. Using this model, the chemical concentrations in a sample matrix can be conveniently calculated. In addition, the model can be used to determine the characteristic parameters (partition coefficient and diffusion coefficient) for typical SPME chemical samplings (SPME calibration). Experiments using SPME samplings of two typical SVOCs, dibutyl phthalate (DBP) in sealed chamber and di(2-ethylhexyl) phthalate (DEHP) in ventilated chamber, were performed to measure the two characteristic parameters. The experimental results demonstrated the effectiveness of the model and calibration method. Experimental data from the literature (VOCs sampled by SPME) were used to further validate the model. This study should prove useful for relatively rapid quantification of concentrations of different chemicals in various circumstances with SPME.
NASA Astrophysics Data System (ADS)
Zhuang, Jyun-Rong; Lee, Yee-Ting; Hsieh, Wen-Hsin; Yang, An-Shik
2018-07-01
Selective laser melting (SLM) shows a positive prospect as an additive manufacturing (AM) technique for the fabrication of 3D parts with complicated structures. A transient thermal model was developed with the finite element method (FEM) to simulate the thermal behavior and predict the time evolution of the temperature field and melt pool dimensions of Ti6Al4V powder during SLM. The FEM predictions were compared with published experimental measurements and calculation results for model validation. This study applied a design of experiments (DOE) scheme together with the response surface method (RSM) to conduct a regression analysis based on four processing parameters (the laser power, scanning speed, preheating temperature, and hatch space) for predicting the dimensions of the melt pool in SLM. The preliminary RSM results were used to quantify the effects of these parameters on the melt pool size. A process window based on two criteria, the width and the depth of the molten pool, was further applied to screen out impractical combinations of the four parameters and delimit their practical ranges. The FEM simulations confirmed the good accuracy of the critical RSM models in predicting melt pool dimensions for three typical SLM working scenarios.
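A compact sketch of the DOE-plus-RSM step: fit a quadratic response surface for melt-pool width over a full-factorial design and screen a candidate condition against a width criterion. The data-generating function, coefficients, and window limits are invented stand-ins for the FEM model and the paper's actual criteria.

```python
# Hedged sketch of the DOE + response-surface step, with a toy data generator
# in place of the FEM thermal model; all coefficients are illustrative.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)

# Full-factorial "experiments": laser power (W), scan speed (mm/s),
# preheat temperature (K), hatch space (um).
P, v, T0, h = np.meshgrid([100, 150, 200], [500, 1000, 1500],
                          [300, 400, 500], [60, 90, 120], indexing="ij")
X = np.column_stack([P.ravel(), v.ravel(), T0.ravel(), h.ravel()])

# Stand-in for the FEM output: melt-pool width (um) with noise.
width = (0.5 * X[:, 0] - 0.03 * X[:, 1] + 0.1 * X[:, 2] - 0.2 * X[:, 3]
         + 1e-5 * X[:, 0] * X[:, 1] + rng.normal(0, 2, X.shape[0]))

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, width)                       # quadratic response surface

# Screen a candidate condition against a process-window criterion on width.
candidate = np.array([[180, 800, 400, 90]])
w_pred = rsm.predict(candidate)[0]
print(f"predicted melt-pool width: {w_pred:.1f} um",
      "(inside window)" if 80 <= w_pred <= 160 else "(outside window)")
```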
Surface modification by electrolytic plasma processing for high Nb-TiAl alloys
NASA Astrophysics Data System (ADS)
Gui, Wanyuan; Hao, Guojian; Liang, Yongfeng; Li, Feng; Liu, Xiao; Lin, Junpin
2016-12-01
Metal surface modification by electrolytic plasma processing (EPP) is an innovative treatment widely applied in material processing and as a pretreatment for coating and galvanization. EPP involves complex processes and a large number of parameters, such as preset voltage, current, solution temperature, and processing time. Several characterization methods are presented in this paper for evaluating the surface microstructure of Ti45Al8Nb alloys: SEM, EDS, XRD, and 3D topography. The results show that the oxide scale and other contaminants on the surface of Ti45Al8Nb alloys can be effectively removed via EPP. The typical micro-crater structure of the Ti45Al8Nb alloy surface was observed by 3D topography after EPP, showing that the mean diameter of the surface structures and the roughness value can be effectively controlled by altering the processing parameters. Nanomechanical probe testing showed a slight decrease in surface microhardness and elastic modulus after EPP but a dramatic increase in surface roughness, which is beneficial for further processing or coating.
NASA Astrophysics Data System (ADS)
Bouaziz, Nadia; Ben Manaa, Marwa; Ben Lamine, Abdelmottaleb
2018-06-01
In the present work, experimental absorption and desorption isotherms of hydrogen in LaNi3.8Al1.0Mn0.2 metal at two temperatures (T = 433 K, 453 K) have been fitted using a monolayer model with two energies, treated by the statistical physics formalism of the grand canonical ensemble. Six parameters of the model are adjusted, namely the numbers of hydrogen atoms per site nα and nβ, the receptor site densities Nmα and Nmβ, and the energetic parameters Pα and Pβ. The behaviors of these parameters are discussed in relation to the temperature of the absorption/desorption process. A dynamic investigation of the simultaneous evolution with pressure of the two α and β phases during absorption and desorption was then carried out using the adjusted parameters. From the energetic parameters, we calculated the sorption energies, which typically range between 276.107 and 310.711 kJ/mol for the absorption process and between 277.01 and 310.9 kJ/mol for the desorption process, comparable to usual chemical bond energies. The thermodynamic parameters calculated from the experimental data, such as the entropy, Gibbs free energy, and internal energy, showed that the absorption/desorption of hydrogen in LaNi3.8Al1.0Mn0.2 alloy is feasible, spontaneous, and exothermic in nature.
NASA Technical Reports Server (NTRS)
Kubota, H.
1976-01-01
A simplified analytical method for calculation of thermal response within a transpiration-cooled porous heat shield material in an intense radiative-convective heating environment is presented. The essential assumptions of the radiative and convective transfer processes in the heat shield matrix are the two-temperature approximation and the specified radiative-convective heatings of the front surface. Sample calculations for porous silica with CO2 injection are presented for some typical parameters of mass injection rate, porosity, and material thickness. The effect of these parameters on the cooling system is discussed.
Applications of DC-Self Bias in CCP Deposition Systems
NASA Astrophysics Data System (ADS)
Keil, D. L.; Augustyniak, E.; Sakiyama, Y.
2013-09-01
In many commercial CCP plasma process systems, the DC self-bias is available as a reported process parameter. Since commercial systems typically limit the number of onboard diagnostics, there is great incentive to understand how the DC self-bias can be expected to respond to various system perturbations. This work reviews and examines DC self-bias changes in response to tool aging, chamber film accumulation, and wafer processing. The diagnostic value of the DC self-bias response to transient and various steady-state current draw schemes is examined. Theoretical models and measured experimental results are compared and contrasted.
NASA Astrophysics Data System (ADS)
Nicolosi, L.; Abt, F.; Blug, A.; Heider, A.; Tetzlaff, R.; Höfler, H.
2012-01-01
Real-time monitoring of laser beam welding (LBW) has increasingly gained importance in several manufacturing processes, ranging from automobile production to precision mechanics. In the latter field, a novel algorithm for the real-time detection of spatters was implemented in a camera based on cellular neural networks. The camera can be connected to the optics of commercially available laser machines, enabling real-time monitoring of LBW processes at rates up to 15 kHz. Such high monitoring rates allow the integration of other image evaluation tasks, such as the detection of the full penetration hole, for real-time control of process parameters.
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize the Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs to be performed only during the system design process, it places no additional computational burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied in other industries using gas turbine engine technology.
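A simplified sketch of the selection idea: score candidate tuning-parameter subsets by the steady-state Kalman error covariance obtained from the discrete algebraic Riccati equation, and keep the best. The actual NASA technique constructs an optimal linear combination of health parameters via an iterative search rather than scoring plain subsets; the system matrices here are random placeholders.

```python
# Hedged sketch: with fewer sensors than health parameters, pick the subset of
# parameters whose reduced Kalman filter has the smallest steady-state error.
import itertools
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(6)
n_health, n_sensors = 6, 3

A = 0.95 * np.eye(n_health)                   # slowly drifting health params
C_full = rng.normal(size=(n_sensors, n_health))
Q = 0.01 * np.eye(n_health)                   # process noise covariance
R = 0.1 * np.eye(n_sensors)                   # measurement noise covariance

def steady_state_mse(subset):
    """Trace of the steady-state error covariance when estimating only the
    health parameters in `subset` (the others held at nominal values)."""
    idx = list(subset)
    A_s, C_s, Q_s = A[np.ix_(idx, idx)], C_full[:, idx], Q[np.ix_(idx, idx)]
    P = solve_discrete_are(A_s.T, C_s.T, Q_s, R)   # Kalman-filter DARE
    return np.trace(P)

best = min(itertools.combinations(range(n_health), n_sensors),
           key=steady_state_mse)
print("selected tuning-parameter subset:", best,
      "MSE:", round(steady_state_mse(best), 4))
```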
Standardization of pitch-range settings in voice acoustic analysis.
Vogel, Adam P; Maruff, Paul; Snyder, Peter J; Mundt, James C
2009-05-01
Voice acoustic analysis is typically a labor-intensive, time-consuming process that requires the application of idiosyncratic parameters tailored to individual aspects of the speech signal. Such processes limit the efficiency and utility of voice analysis in clinical practice as well as in applied research and development. In the present study, we analyzed 1,120 voice files, using standard techniques (case-by-case hand analysis), taking roughly 10 work weeks of personnel time to complete. The results were compared with the analytic output of several automated analysis scripts that made use of preset pitch-range parameters. After pitch windows were selected to appropriately account for sex differences, the automated analysis scripts reduced processing time of the 1,120 speech samples to less than 2.5 h and produced results comparable to those obtained with hand analysis. However, caution should be exercised when applying the suggested preset values to pathological voice populations.
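A toy illustration of the core idea, preset pitch windows instead of case-by-case tuning: an autocorrelation pitch estimator whose search range is constrained by sex-specific presets. The window values and the estimator itself are assumptions; the study used scripted analyses with preset pitch-range parameters rather than this toy tracker.

```python
# Hedged sketch: a batch-friendly autocorrelation pitch estimator whose search
# range is fixed by preset, sex-specific pitch windows (values assumed here).
import numpy as np

PITCH_WINDOWS = {"male": (75.0, 300.0), "female": (100.0, 500.0)}  # Hz

def estimate_f0(frame, fs, sex):
    fmin, fmax = PITCH_WINDOWS[sex]
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])  # best lag in the window
    return fs / lag

# Synthetic voiced frame: 180 Hz fundamental plus a harmonic and noise.
fs = 16_000
t = np.arange(int(0.04 * fs)) / fs
frame = (np.sin(2 * np.pi * 180 * t) + 0.4 * np.sin(2 * np.pi * 360 * t)
         + 0.05 * np.random.default_rng(7).normal(size=t.size))

for sex in ("male", "female"):
    print(sex, f"{estimate_f0(frame, fs, sex):.1f} Hz")
```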
An interval programming model for continuous improvement in micro-manufacturing
NASA Astrophysics Data System (ADS)
Ouyang, Linhan; Ma, Yizhong; Wang, Jianjun; Tu, Yiliu; Byun, Jai-Hyun
2018-03-01
Continuous quality improvement in micro-manufacturing processes relies on optimization strategies that relate an output performance to a set of machining parameters. However, when determining the optimal machining parameters in a micro-manufacturing process, the economics of continuous quality improvement and decision makers' preference information are typically neglected. This article proposes an economic continuous improvement strategy based on an interval programming model. The proposed strategy differs from previous studies in two ways. First, an interval programming model is proposed to measure the quality level, where decision makers' preference information is considered in order to determine the weight of location and dispersion effects. Second, the proposed strategy is a more flexible approach since it considers the trade-off between the quality level and the associated costs, and leaves engineers a larger decision space through adjusting the quality level. The proposed strategy is compared with its conventional counterparts using an Nd:YLF laser beam micro-drilling process.
A new lumped-parameter model for flow in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, Robert W.; Hadgu, Teklu; Bodvarsson, Gudmundur S.
A new lumped-parameter approach to simulating unsaturated flow processes in dual-porosity media such as fractured rocks or aggregated soils is presented. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. Unlike a Warren-Root-type equation, this equation is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into an existing unsaturated flow simulator, to serve as a source/sink term for fracture gridblocks. Flow processes are then simulated using only fracture gridblocks in the computational grid. This new lumped-parameter approach has been tested on two problems involving transient flow in fractured/porous media and compared with simulations performed using explicit discretization of the matrix blocks. The new procedure seems to simulate flow processes in unsaturated fractured rocks accurately, and typically requires an order of magnitude less computational time than simulations using fully discretized matrix blocks.
Real-Time Signal Processing Systems
1992-10-29
Programmer’s Model 50 15. Synchronization 67 16. Parameter Passage to Routines VIA Stacks 68 17. Typical VPH Activity Flow Chart 70 18. CPH...computing facilities to take advantage of cost effective solutions. A proliferation of different microprocessors and development systems spread among the... activities are completed, the roles of the VPH memory banks are reversed. This function-swapping is the primary reason, for the efficiency and high
Impact Dynamics: Theory and Experiment
1980-10-01
in the HEMP QHydrodynamic, Elastic, Magneto & Plastic ) code, employ a quadrilateral grid and may be solved in plane coordinates or with cylindrical...material constitution, strain rate. localized plastic flow, and failure are manifest at various stages of the impact process. Typically, loading and...STRENGTH; DENSITY /A DOMINANT’ PARAMETER 104 - 500-1000ms-1 VISCOUS-MATERIAL POWDER GUNS STRENGTH STILL SIGNIFICANT 10 2 50- 500 ms- PRIMARILY PLASTIC
Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0)
NASA Astrophysics Data System (ADS)
Sauerland, Volkmar; Löptien, Ulrike; Leonhard, Claudine; Oschlies, Andreas; Srivastav, Anand
2018-03-01
Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models, which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate), which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex; systematic optimization algorithms are thus likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that helps to assess the quality of an optimum generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative). We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
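A minimal sketch of the pool-based active learning loop described, with a scikit-learn neural network ranking unevaluated parameter combinations and a toy acceptance test standing in for the UWBCS simulation; batch sizes and thresholds are illustrative.

```python
# Hedged sketch: pool-based active learning to focus simulation effort on
# parameter combinations likely to match observed outcomes. The acceptance
# test is a toy stand-in for running the UWBCS simulation model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)
pool = rng.uniform(0, 1, size=(20_000, 4))        # candidate parameter combos

def simulate(x):
    # Stand-in for a costly simulation run: "acceptable" if the parameters
    # land in a narrow target region.
    return float(np.all(np.abs(x - 0.42) < 0.15))

# Seed the learner with a random batch of real evaluations.
evaluated = rng.choice(len(pool), 1000, replace=False).tolist()
labels = [simulate(pool[i]) for i in evaluated]

for _ in range(5):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(pool[evaluated], labels)
    remaining = np.setdiff1d(np.arange(len(pool)), evaluated)
    scores = clf.predict_proba(pool[remaining])[:, 1]
    picks = remaining[np.argsort(scores)[-200:]]   # most promising combos
    evaluated.extend(picks.tolist())
    labels.extend(simulate(pool[i]) for i in picks)

print(f"evaluated {len(evaluated)} of {len(pool)} combinations; "
      f"found {int(sum(labels))} acceptable parameter sets")
```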
Carr, Brian I.; Giannini, Edoardo G.; Farinati, Fabio; Ciccarese, Francesca; Rapaccini, Gian Ludovico; Marco, Maria Di; Benvegnù, Luisa; Zoli, Marco; Borzio, Franco; Caturelli, Eugenio; Chiaramonte, Maria; Trevisani, Franco
2014-01-01
Background Previous work has shown that 2 general processes contribute to hepatocellular cancer (HCC) prognosis. They are: (a) liver damage, monitored by indices such as blood bilirubin, prothrombin time, and AST; and (b) tumor biology, monitored by indices such as tumor size, tumor number, presence of PVT, and blood AFP levels. These 2 processes may affect one another, with prognostically significant interactions between multiple tumor and host parameters. These interactions form a context that provides personalization of the prognostic meaning of these factors for every patient. Thus, a given level of bilirubin or tumor diameter might have a different significance in different personal contexts. We previously applied the Network Phenotyping Strategy (NPS) to characterize interactions between liver function indices of Asian HCC patients and recognized two clinical phenotypes, S and L, differing in tumor size and tumor nodule numbers. Aims To validate the applicability of the NPS-based HCC S/L classification on an independent European HCC cohort, for which survival information was additionally available. Methods Four sets of peripheral blood parameters, including AFP-platelets, derived from routine blood parameter levels and tumor indices from the ITA.LI.CA database, were analyzed using NPS, a graph-theory based approach, which compares personal patterns of complete relationships between clinical data values to reference patterns with significant association to disease outcomes. Results Without reference to the actual tumor sizes, patients were classified by NPS into 2 subgroups with S and L phenotypes. These two phenotypes were recognized using solely the HCC screening test results, consisting of eight common blood parameters, paired by their significant correlations, including an AFP-platelets relationship. These trends were combined with patient age, gender, and self-reported alcoholism into NPS personal patient profiles. We subsequently validated (using actual scan data) that patients in the L phenotype group had 1.5× larger mean tumor masses relative to S (p = 6 × 10^-16). Importantly, with the new data, liver test pattern-identified S-phenotype patients had typically 1.7× longer survival compared to the L-phenotype. NPS integrated the liver, tumor, and basic demographic factors. Cirrhosis-associated thrombocytopenia was typical for smaller S-tumors. In the L-tumor phenotype, typical platelet levels increased with the tumor mass. Hepatic inflammation and tumor factors contributed to more aggressive L tumors, with parenchymal destruction and shorter survival. Summary NPS provides an integrative interpretation of HCC behavior, identifying two tumor and survival phenotypes by clinical parameter patterns. The NPS classifier is provided as an Excel tool. The NPS system shows the importance of considering each tumor marker and parameter in the total context of all the other parameters of an individual patient. PMID:25023357
Water absorption characteristics and structural properties of rice for sake brewing.
Mizuma, Tomochika; Kiyokawa, Yoshifumi; Wakai, Yoshinori
2008-09-01
This study investigated the water absorption curve characteristics and structural properties of rice used for sake brewing. The parameter values in the water absorption rate equation were calculated using experimental data. Differences between sample parameters for rice used for sake brewing and typical rice were confirmed. The water absorption curve for rice suitable for sake brewing showed a quantitatively sharper turn in the S-shaped water absorption curve than that of typical rice. Structural characteristics, including specific volume, grain density, and powdered density of polished rice, were measured by a liquid substitution method using a Gay-Lussac pycnometer. In addition, we calculated internal porosity from whole grain and powdered grain densities. These results showed that a decrease in internal porosity resulted from invasion of water into the rice grain, and that a decrease in the grain density affected expansion during the water absorption process. A characteristic S-shape water absorption curve for rice suitable for sake brewing was related to the existence of an invisible Shinpaku-like structure.
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
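A minimal sketch of the clustering idea behind such a hyperprior, using the Chinese-restaurant-process construction of the Dirichlet process; the concentration parameter, the gamma base distribution, and the fossil ages are illustrative, and the sketch only draws node ages from the prior rather than running the paper's MCMC analyses.

    import numpy as np

    rng = np.random.default_rng(1)

    def crp_rates(n_nodes, alpha=1.0):
        """Draw exponential-rate parameters for n_nodes calibrated nodes from
        a Dirichlet process (Chinese restaurant construction) with
        concentration alpha and a gamma base distribution; nodes seated at
        the same table share one rate, i.e. one parameter category."""
        counts, values, rates = [], [], []
        for _ in range(n_nodes):
            probs = np.array(counts + [alpha], dtype=float)
            probs /= probs.sum()
            k = rng.choice(len(probs), p=probs)
            if k == len(values):                 # open a new cluster
                values.append(rng.gamma(2.0, 0.05))
                counts.append(1)
            else:
                counts[k] += 1
            rates.append(values[k])
        return np.array(rates)

    fossil_min_ages = np.array([30.0, 55.0, 12.0, 80.0, 41.0])  # Ma, hypothetical
    rates = crp_rates(len(fossil_min_ages))
    node_ages = fossil_min_ages + rng.exponential(1.0 / rates)  # offset exponentials
    print("clustered rates:", rates)
    print("sampled node ages:", node_ages)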
Understanding identifiability as a crucial step in uncertainty assessment
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.
2016-12-01
The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
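The "wildly different values for different initialisations" symptom is easy to reproduce. In the toy model below (ours, not from the presentation) only the product a*b is identifiable, so optimizations started from different points return very different (a, b) pairs at essentially identical cost.

    import numpy as np
    from scipy.optimize import least_squares

    x = np.linspace(0, 1, 50)
    y = 2.0 * x + np.random.default_rng(2).normal(0, 0.01, x.size)

    def residuals(theta):
        a, b = theta
        return a * b * x - y        # only the product a*b is constrained by the data

    for start in ([1.0, 1.0], [10.0, 0.1], [0.05, 50.0]):
        fit = least_squares(residuals, start)
        a, b = fit.x
        print(f"start={start} -> a={a:.3f}, b={b:.3f}, "
              f"a*b={a * b:.3f}, cost={fit.cost:.6f}")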
Influence of operational parameters on electro-Fenton degradation of organic pollutants from soil.
Rosales, E; Pazos, M; Longo, M A; Sanroman, M A
2009-09-01
The combination of Fenton's reagent with electrochemistry (the electro-Fenton process) represents an efficient method for wastewater treatment. This study describes the use of this process to clean soil or clay contaminated by organic compounds. Model soil of kaolinite clay polluted with the dye Lissamine Green B (LGB) was used to evaluate the capability of the electro-Fenton process. The effects of operating parameters such as electrode material and dye concentration were investigated. Operating in an electrochemical cell under optimized conditions while using electrodes of graphite, a constant potential difference of 5 V, pH 3, 0.2 mM FeSO4·7H2O, and electrolyte 0.1 M Na2SO4, around 80% of the LGB dye on kaolinite clay was decolorized after 3 hours with an electric power consumption around 0.15 W h g(-1). Furthermore, the efficiency of this process for the remediation of a real soil polluted with phenanthrene, a typical polycyclic aromatic hydrocarbon, has been demonstrated.
Fast machine-learning online optimization of ultra-cold-atom experiments.
Wigley, P B; Everitt, P J; van den Hengel, A; Bastian, J W; Sooriyabandara, M A; McDonald, G D; Hardman, K S; Quinlivan, C D; Manju, P; Kuhn, C C N; Petersen, I R; Luiten, A N; Hope, J J; Robins, N P; Hush, M R
2016-05-16
We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our 'learner' discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
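A generic sketch of a Gaussian-process optimization loop of this kind, with an expected-improvement rule for choosing the next setting to try; the stand-in quality function, the kernel choice, and the random candidate sampling are assumptions for illustration, not the authors' experimental learner.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(3)

    def bec_quality(ramp):
        # hypothetical stand-in for running the experiment with a given
        # 3-parameter evaporation ramp and measuring the BEC quality
        return -np.sum((ramp - 0.6) ** 2) + rng.normal(0, 0.01)

    X = rng.uniform(0, 1, size=(5, 3))           # initial ramp settings
    y = np.array([bec_quality(x) for x in X])

    for _ in range(25):                          # online optimization loop
        gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-4).fit(X, y)
        cand = rng.uniform(0, 1, size=(2000, 3))
        mu, sd = gp.predict(cand, return_std=True)
        imp = mu - y.max()
        z = imp / np.maximum(sd, 1e-9)
        ei = imp * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, bec_quality(x_next))

    print("best quality:", y.max(), "at ramp parameters", X[np.argmax(y)])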
NASA Astrophysics Data System (ADS)
Sago, James Alan
Metal Injection Molding (MIM) is one of the most rapidly growing areas of powder metallurgy (P/M), but the growth of MIM into new markets and more demanding applications is limited by two fundamental barriers: the availability of low-cost metal powders, and a lack of knowledge and understanding of how mechanical properties, especially toughness, are affected by the many parameters in the MIM process. The goals of this study were to investigate solutions to these challenges for MIM. Mechanical alloying (MA) is a technique which can produce a wide variety of powder compositions in a size range suited to MIM and in smaller batches. However, MA typically suffers from low production volumes and long milling times. This study will show that a saucer mill can produce sizable volumes of MA powders in times typically less than an hour. The MA process was also used to produce powders of 17-4PH stainless steel and the NiTi shape memory alloy for a MIM feedstock. This study shows that the MA powder characteristics led to successful MIM processing of parts. Previous studies have shown that the toughness of individual MIM parts can vary widely within a single production run and from one producer to another. In the last part of the study, a Design of Experiments (DOE) approach was used to evaluate the effects of MIM processing parameters on the mechanical properties. Analysis of Variance produced mathematical models for Charpy impact toughness, hardness, density, and carbon content. Tensile properties did not yield a good model due to processing problems. The models and recommendations for improving both toughness and reproducibility of toughness are presented.
NASA Astrophysics Data System (ADS)
Shahbudin, S. N. A.; Othman, M. H.; Amin, Sri Yulis M.; Ibrahim, M. H. I.
2017-08-01
This article reviews the optimization of the metal injection molding and microwave sintering processes for tungsten cemented carbide produced by metal injection molding. In this study, the process parameters for metal injection molding were optimized using the Taguchi method. Taguchi methods have been used widely in engineering analysis to optimize performance characteristics through the setting of design parameters. Microwave sintering is a process generally used in powder metallurgy in place of the conventional method. It has typical characteristics such as an accelerated heating rate, a shortened processing cycle, high energy efficiency, a fine and homogeneous microstructure, and enhanced mechanical performance, which are beneficial for preparing nanostructured cemented carbides in metal injection molding. Besides that, as an advanced and promising technology, metal injection molding has proven that it can produce cemented carbides. Cemented tungsten carbide hard metal has been used widely in various applications due to its desirable combination of mechanical, physical, and chemical properties. Moreover, common defects in metal injection molding and applications of microwave sintering itself are discussed in this paper.
NASA Astrophysics Data System (ADS)
Gora, Wojciech S.; Tian, Yingtao; Cabo, Aldara Pan; Ardron, Marcus; Maier, Robert R. J.; Prangnell, Philip; Weston, Nicholas J.; Hand, Duncan P.
Additive manufacturing (AM) offers the possibility of creating a complex free-form object as a single element, which is not possible using traditional mechanical machining. Unfortunately, the typically rough surface finish of additively manufactured parts is unsuitable for many applications. As a result, AM parts must be post-processed: typically mechanically machined and/or polished using either chemical or mechanical techniques (both of which have their limitations). Laser-based polishing is based on remelting of a very thin surface layer, and it offers potential as a highly repeatable, higher-speed process capable of selective-area polishing, without any waste problems (no abrasives or liquids). In this paper, an in-depth investigation of CW laser polishing of titanium and cobalt chrome AM elements is presented. The impact of different scanning strategies, laser parameters, and initial surface condition on the achieved surface finish is evaluated.
Bayesian parameter estimation for stochastic models of biological cell migration
NASA Astrophysics Data System (ADS)
Dieterich, Peter; Preuss, Roland
2013-08-01
Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors, or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically, stochastic models are applied, with parameters extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure relying directly on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical to the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data, showing reliable parameter estimation from single cell paths.
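A simplified one-dimensional illustration of estimating motion parameters directly from the positions of a trajectory rather than from a fit to the mean square displacement; the paper's Bayesian treatment of the full position covariance matrix is more general than this moment-based sketch.

    import numpy as np

    rng = np.random.default_rng(4)
    dt, n = 1.0, 500
    v_true, D_true = 0.05, 0.2                   # drift and diffusion coefficient

    # simulate Brownian motion with drift (one spatial dimension for brevity)
    steps = v_true * dt + np.sqrt(2 * D_true * dt) * rng.normal(size=n)
    x = np.concatenate([[0.0], np.cumsum(steps)])

    # estimate the parameters from the position increments themselves
    inc = np.diff(x)
    v_hat = inc.mean() / dt
    D_hat = inc.var(ddof=1) / (2 * dt)
    print(f"v = {v_hat:.4f} (true {v_true}), D = {D_hat:.4f} (true {D_true})")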
NASA Astrophysics Data System (ADS)
Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.
2017-01-01
When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
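As a hedged example of fitting one such model in terms of biologically meaningful parameters, the sketch below fits a Gaussian temperature-response curve, parameterised directly by Pmax and Topt, to made-up data and reports an AIC-style fit score; the data and this particular functional form are illustrative and are not the twelve models compared in the study.

    import numpy as np
    from scipy.optimize import curve_fit

    # hypothetical photosynthesis rates versus temperature (deg C)
    T = np.array([15, 20, 25, 28, 31, 33, 35, 38, 40], dtype=float)
    P = np.array([2.1, 3.4, 4.6, 5.2, 5.5, 5.3, 4.8, 3.0, 1.6])

    def gaussian_model(T, Pmax, Topt, width):
        """One candidate model, parameterised directly by the biologically
        meaningful quantities Pmax and Topt."""
        return Pmax * np.exp(-((T - Topt) ** 2) / (2 * width ** 2))

    popt, _ = curve_fit(gaussian_model, T, P, p0=[5.0, 30.0, 5.0])
    resid = P - gaussian_model(T, *popt)
    aic = len(T) * np.log(np.mean(resid ** 2)) + 2 * len(popt)
    print(f"Pmax = {popt[0]:.2f}, Topt = {popt[1]:.1f} degC, AIC = {aic:.1f}")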
Boehm, Udo; Steingroever, Helen; Wagenmakers, Eric-Jan
2018-06-01
Quantitative models that represent different cognitive variables in terms of model parameters are an important tool in the advancement of cognitive science. To evaluate such models, their parameters are typically tested for relationships with behavioral and physiological variables that are thought to reflect specific cognitive processes. However, many models do not come equipped with the statistical framework needed to relate model parameters to covariates. Instead, researchers often revert to classifying participants into groups depending on their values on the covariates, and subsequently comparing the estimated model parameters between these groups. Here we develop a comprehensive solution to the covariate problem in the form of a Bayesian regression framework. Our framework can be easily added to existing cognitive models and allows researchers to quantify the evidential support for relationships between covariates and model parameters using Bayes factors. Moreover, we present a simulation study that demonstrates the superiority of the Bayesian regression framework to the conventional classification-based approach.
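A minimal sketch of quantifying the evidence for a parameter-covariate relationship; it uses a BIC approximation to the Bayes factor rather than the paper's full Bayesian regression framework, and the simulated covariate and parameter values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 80
    covariate = rng.normal(size=n)                    # e.g., a physiological measure
    param = 0.4 * covariate + rng.normal(0, 1, n)     # estimated model parameters

    def bic_linear(y, predictors):
        X = np.column_stack([np.ones(len(y))] + predictors)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        k = X.shape[1] + 1                            # coefficients + noise variance
        return len(y) * np.log(rss / len(y)) + k * np.log(len(y))

    bic0 = bic_linear(param, [])                      # intercept-only model
    bic1 = bic_linear(param, [covariate])             # model with the covariate
    bf10 = np.exp((bic0 - bic1) / 2)                  # approximate Bayes factor
    print(f"BF10 ~ {bf10:.1f} in favour of a parameter-covariate relationship")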
Group Contribution Methods for Phase Equilibrium Calculations.
Gmehling, Jürgen; Constantinescu, Dana; Schmid, Bastian
2015-01-01
The development and design of chemical processes are carried out by solving the balance equations of a mathematical model for sections of or the whole chemical plant with the help of process simulators. For process simulation, besides kinetic data for the chemical reaction, various pure component and mixture properties are required. Because of the great importance of separation processes for a chemical plant in particular, a reliable knowledge of the phase equilibrium behavior is required. The phase equilibrium behavior can be calculated with the help of modern equations of state or gE models using only binary parameters. But unfortunately, only a very small part of the experimental data for fitting the required binary model parameters is available, so very often these models cannot be applied directly. To solve this problem, powerful predictive thermodynamic models have been developed. Group contribution methods allow the prediction of the required phase equilibrium data using only a limited number of group interaction parameters. A prerequisite for fitting the required group interaction parameters is a comprehensive database. That is why, for the development of powerful group contribution methods, almost all published pure component properties, phase equilibrium data, excess properties, etc., were stored in computerized form in the Dortmund Data Bank. In this review, the present status, weaknesses, advantages and disadvantages, possible applications, and typical results of the different group contribution methods for the calculation of phase equilibria are presented.
Prediction and typicality in multiverse cosmology
NASA Astrophysics Data System (ADS)
Azhar, Feraz
2014-02-01
In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue, contrary to recent claims, that it is not clear one can either dispense with notions of typicality altogether or presume typicality when comparing resulting probability distributions with observations. We show, in a concrete top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.
Hybrid modeling and empirical analysis of automobile supply chain network
NASA Astrophysics Data System (ADS)
Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying
2017-05-01
Based on the connection mechanism of nodes, which automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling in a GIS-based map. First, the model's validity is demonstrated by analyzing the consistency of sales and changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions, verifying that the model is a typical scale-free, small-world network. Finally, the motion law of this model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that this system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks of the automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, the model construction and simulation of the system by means of combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as theoretical bases and experience for supply chain analysis of auto companies.
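The network statistics mentioned above can be computed with standard graph tooling. The sketch below does so on a randomly generated scale-free graph, whereas the paper derives its network from the agent-based supply-chain simulation.

    import networkx as nx

    # hypothetical stand-in for the simulated supply-chain network
    G = nx.barabasi_albert_graph(n=500, m=2, seed=6)

    print("mean distance:", nx.average_shortest_path_length(G))
    print("mean clustering coefficient:", nx.average_clustering(G))
    degrees = sorted((d for _, d in G.degree()), reverse=True)
    print("five largest degrees:", degrees[:5])   # heavy tail suggests scale-free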
Mathematical Modeling of RNA-Based Architectures for Closed Loop Control of Gene Expression.
Agrawal, Deepak K; Tang, Xun; Westbrook, Alexandra; Marshall, Ryan; Maxwell, Colin S; Lucks, Julius; Noireaux, Vincent; Beisel, Chase L; Dunlop, Mary J; Franco, Elisa
2018-05-08
Feedback allows biological systems to control gene expression precisely and reliably, even in the presence of uncertainty, by sensing and processing environmental changes. Taking inspiration from natural architectures, synthetic biologists have engineered feedback loops to tune the dynamics and improve the robustness and predictability of gene expression. However, experimental implementations of biomolecular control systems are still far from satisfying performance specifications typically achieved by electrical or mechanical control systems. To address this gap, we present mathematical models of biomolecular controllers that enable reference tracking, disturbance rejection, and tuning of the temporal response of gene expression. These controllers employ RNA transcriptional regulators to achieve closed loop control where feedback is introduced via molecular sequestration. Sensitivity analysis of the models allows us to identify which parameters influence the transient and steady state response of a target gene expression process, as well as which biologically plausible parameter values enable perfect reference tracking. We quantify performance using typical control theory metrics to characterize response properties and provide clear selection guidelines for practical applications. Our results indicate that RNA regulators are well-suited for building robust and precise feedback controllers for gene expression. Additionally, our approach illustrates several quantitative methods useful for assessing the performance of biomolecular feedback control systems.
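A toy closed-loop model in the same spirit, with feedback introduced through molecular sequestration, and the typical control-theory metrics computed from the simulated response; the equations and rate constants are a deliberately crude caricature for illustration, not the RNA controller models analyzed in the paper.

    import numpy as np
    from scipy.integrate import odeint

    def closed_loop(state, t, ref, k_prod, k_seq, gamma):
        """Toy sequestration feedback: activator A drives output Y, and Y in
        turn sequesters A, closing the loop. A crude caricature only."""
        A, Y = state
        dA = k_prod * ref - k_seq * A * Y - gamma * A
        dY = k_prod * A - gamma * Y
        return [dA, dY]

    t = np.linspace(0, 50, 2000)
    sol = odeint(closed_loop, [0.0, 0.0], t, args=(1.0, 1.0, 0.5, 0.2))
    y = sol[:, 1]

    y_ss = y[-1]                                   # steady-state output
    rise_time = t[np.argmax(y >= 0.9 * y_ss)]      # time to 90% of steady state
    overshoot = 100 * (y.max() - y_ss) / y_ss
    print(f"steady state {y_ss:.2f}, rise time {rise_time:.1f}, "
          f"overshoot {overshoot:.1f}%")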
An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.
Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero
2017-04-01
The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
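A sketch of one automated time-series check such a workflow might perform, assuming one phantom image per day and an SNR metric; the ROI positions, image dimensions, and drift threshold are all hypothetical.

    import numpy as np

    def daily_snr(image):
        # signal from a central phantom ROI, noise from a background corner
        signal = image[96:160, 96:160].mean()
        noise = image[:32, :32].std()
        return signal / noise

    rng = np.random.default_rng(7)
    history = []
    for day in range(30):                      # one phantom image per day
        img = rng.normal(10, 2, (256, 256))    # synthetic stand-in image
        img[64:192, 64:192] += 200             # phantom signal
        history.append(daily_snr(img))

    # flag scanner drift against a baseline from the first ten days
    baseline = np.median(history[:10])
    for day, snr in enumerate(history):
        if abs(snr - baseline) > 0.15 * baseline:
            print(f"day {day}: SNR {snr:.1f} deviates from baseline {baseline:.1f}")
    print(f"30-day SNR mean: {np.mean(history):.1f}")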
GRCop-84 Rolling Parameter Study
NASA Technical Reports Server (NTRS)
Loewenthal, William S.; Ellis, David L.
2008-01-01
This report is a section of the final report on the GRCop-84 task of the Constellation Program and incorporates the results obtained between October 2000 and September 2005, when the program ended. NASA Glenn Research Center (GRC) has developed a new copper alloy, GRCop-84 (Cu-8 at.% Cr-4 at.% Nb), for rocket engine main combustion chamber components that will improve rocket engine life and performance. This work examines the sensitivity of GRCop-84 mechanical properties to rolling parameters as a means to better define rolling parameters for commercial warm rolling. Experiment variables studied were total reduction, rolling temperature, rolling speed, and post rolling annealing heat treatment. The responses were tensile properties measured at 23 and 500 C, hardness, and creep at three stress-temperature combinations. Understanding these relationships will better define boundaries for a robust commercial warm rolling process. The four processing parameters were varied within limits consistent with typical commercial production processes. Testing revealed that the rolling-related variables selected have a minimal influence on tensile, hardness, and creep properties over the range of values tested. Annealing had the expected result of lowering room temperature hardness and strength while increasing room temperature elongations with 600 C (1112 F) having the most effect. These results indicate that the process conditions to warm roll plate and sheet for these variables can range over wide levels without negatively impacting mechanical properties. Incorporating broader process ranges in future rolling campaigns should lower commercial rolling costs through increased productivity.
NASA Technical Reports Server (NTRS)
Vigue, Y.; Lichten, S. M.; Muellerschoen, R. J.; Blewitt, G.; Heflin, M. B.
1993-01-01
Data collected from a worldwide 1992 experiment were processed at JPL to determine precise orbits for the satellites of the Global Positioning System (GPS). A filtering technique was tested to improve modeling of solar-radiation pressure force parameters for GPS satellites. The new approach improves orbit quality for eclipsing satellites by a factor of two, with typical results in the 25- to 50-cm range. The resultant GPS-based estimates for geocentric coordinates of the tracking sites, which include the three DSN sites, are accurate to 2 to 8 cm, roughly equivalent to 3 to 10 nrad of angular measure.
Investigation of needleless electrospun PAN nanofiber mats
NASA Astrophysics Data System (ADS)
Sabantina, Lilia; Mirasol, José Rodríguez; Cordero, Tomás; Finsterbusch, Karin; Ehrmann, Andrea
2018-04-01
Polyacrylonitrile (PAN) can be spun from a nontoxic solvent (DMSO, dimethyl sulfoxide) and is nevertheless waterproof, unlike the biopolymers that are spinnable from aqueous solutions. This makes PAN an interesting material for electrospinning nanofiber mats, which can be used for diverse biotechnological or medical applications, such as filters, cell growth, wound healing, or tissue engineering. On the other hand, PAN is a typical base material for producing carbon nanofibers. Nevertheless, electrospinning PAN requires suitable spinning parameters to create nanofibers without too many membranes or agglomerations. We have therefore studied the influence of spinning parameters on the needleless electrospinning process of PAN dissolved in DMSO and on the resulting nanofiber mats.
NASA Astrophysics Data System (ADS)
Colla, V.; Desanctis, M.; Dimatteo, A.; Lovicu, G.; Valentini, R.
2011-09-01
The purpose of the present work is the implementation and validation of a model able to predict the microstructure changes and the mechanical properties of modern high-strength dual-phase steels after the continuous annealing process line (CAPL) and galvanizing (Galv) processes. Experimental continuous cooling transformation (CCT) diagrams for 13 differently alloyed dual-phase steels were measured by dilatometry from the intercritical range and were used to tune the parameters of the microstructural prediction module of the model. Mechanical properties and microstructural features were measured for more than 400 dual-phase steels simulating the CAPL and Galv industrial processes, and the results were used to construct the mechanical model that predicts mechanical properties from microstructural features, chemistry, and process parameters. The model was validated and proved its efficiency in reproducing the transformation kinetics and mechanical properties of dual-phase steels produced by typical industrial processes. Although it is limited to the dual-phase grades and chemical compositions explored, this model will constitute a useful tool for the steel industry.
Automation of extrusion of porous cable products based on a digital controller
NASA Astrophysics Data System (ADS)
Chostkovskii, B. K.; Mitroshin, V. N.
2017-07-01
This paper presents a new approach to designing an automated system for monitoring and controlling the process of applying porous insulation material on a conductive cable core, based on structurally and parametrically optimized digital controllers of arbitrary order instead of typical PID controllers calculated using known methods. The digital controller is clocked by signals from the length sensor of a measuring wheel, instead of a timer signal, which makes the system robust with respect to the changing insulation speed. Digital controller parameters are tuned to meet the operating parameters of the manufactured cable using a simulation model of stochastic extrusion, and are optimized by moving a regular simplex in the parameter space of the tuned controller.
Advanced Method to Estimate Fuel Slosh Simulation Parameters
NASA Technical Reports Server (NTRS)
Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl
2005-01-01
The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the estimation approach to a simple, accurately modeled system, its effectiveness and accuracy can be evaluated. The same experimental setup can then be used with fluid-filled tanks to further evaluate the effectiveness of the process. Ultimately, the proven process can be applied to the full-sized spinning experimental setup to quickly and accurately determine the slosh model parameters for a particular spacecraft mission. Automating the parameter identification process will save time, allow more changes to be made to proposed designs, and lower the cost in the initial design stages.
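A sketch of the automated identification idea applied to the one-degree-of-freedom pendulum stage of the work, using simulation-based least squares in Python in place of the MATLAB/SimMechanics setup; the true parameters, the noise level, and the bounds are made up.

    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import least_squares

    t = np.linspace(0, 10, 400)

    def pendulum(state, t, length, damping):
        theta, omega = state
        return [omega, -(9.81 / length) * np.sin(theta) - damping * omega]

    def simulate(params):
        length, damping = params
        return odeint(pendulum, [0.3, 0.0], t, args=(length, damping))[:, 0]

    # synthetic "experimental" angle data (true length 0.8 m, damping 0.15)
    measured = simulate([0.8, 0.15]) + np.random.default_rng(8).normal(0, 0.005,
                                                                       t.size)

    # fit the simulation to the measurement, as the automated process would
    fit = least_squares(lambda p: simulate(p) - measured, x0=[1.5, 0.5],
                        bounds=([0.1, 0.0], [5.0, 2.0]))
    print("identified length and damping:", fit.x)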
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuñez-Cumplido, E., E-mail: ejnc-mccg@hotmail.com; Hernandez-Armas, J.; Perez-Calatayud, J.
2015-08-15
Purpose: In clinical practice, a specific air kerma strength (SK) value is used in treatment planning system (TPS) calculations for permanent brachytherapy implants with 125I and 103Pd sources; in fact, commercial TPSs provide only one SK input value for all implanted sources, and the certified shipment average is typically used. However, the value of SK is dispersed: this dispersion is due not only to the manufacturing process and variation between different source batches but also to the classification of sources into different classes according to their SK values. The purpose of this work is to examine the impact of SK dispersion on typical implant parameters that are used to evaluate the dose volume histogram (DVH) for both the planning target volume (PTV) and organs at risk (OARs). Methods: The authors developed a new algorithm to compute dose distributions with different SK values for each source. Three different prostate volumes (20, 30, and 40 cm3) were considered and two typical commercial sources of different radionuclides were used. Using a conventional TPS, clinically accepted calculations were made for 125I sources; for the palladium, typical implants were simulated. To assess the many different possible SK values for each source belonging to a class, the authors assigned an SK value to each source in a randomized process 1000 times for each source and volume. All the dose distributions generated for each set of simulations were assessed through the DVH distributions, comparing with dose distributions obtained using a uniform SK value for all the implanted sources. The authors analyzed several dose coverage (V100 and D90) and overdosage parameters for the prostate and PTV, and also the limiting and overdosage parameters for the OARs, urethra and rectum. Results: The parameters analyzed followed a Gaussian distribution for the entire set of computed dosimetries. PTV and prostate V100 and D90 variations ranged between 0.2% and 1.78% for both sources. Variations in the overdosage parameters V150 and V200 were observed relative to the dose coverage parameters and, in general, variations were larger for parameters related to 125I sources than 103Pd sources. For OAR dosimetry, variations with respect to the reference D0.1cm3 were observed for rectum values, ranging from 2% to 3%, compared with urethra values, which ranged from 1% to 2%. Conclusions: Dose coverage for the prostate and PTV was practically unaffected by SK dispersion, as was the maximum dose deposited in the urethra, due to the implant technique geometry. However, the authors observed larger variations for the PTV V150, rectum V100, and rectum D0.1cm3 values. The variations in rectum parameters were caused by the specific location of sources with SK values that differed from the average in the vicinity. Finally, on comparing the two sources, variations were larger for 125I than for 103Pd. This is because for 103Pd a greater number of sources were used to obtain a valid dose distribution than for 125I, resulting in a lower variation for each SK value for each source (because the variations become averaged out, statistically speaking).
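The averaging-out effect noted in the conclusions can be illustrated with a heavily simplified Monte Carlo in which the dose at one evaluation point scales linearly with each seed's SK through fixed geometry weights; this is a caricature of the TG-43 dose sum, not a TPS calculation, and every number below is hypothetical.

    import numpy as np

    rng = np.random.default_rng(9)
    n_seeds, n_trials = 80, 1000
    sk_mean, sk_sd = 0.5, 0.015        # U; assumed ~3% dispersion within a class

    # fixed, simplified geometry: per-seed contribution weights at one point
    geometry_weights = rng.uniform(0.5, 1.5, n_seeds)

    ref_dose = sk_mean * geometry_weights.sum()       # uniform (average) SK
    doses = np.empty(n_trials)
    for i in range(n_trials):
        sk = rng.normal(sk_mean, sk_sd, n_seeds)      # randomized per-seed SK
        doses[i] = np.dot(sk, geometry_weights)

    dev = 100 * (doses - ref_dose) / ref_dose
    print(f"dose deviation: mean {dev.mean():.2f}%, sd {dev.std():.2f}%")

With many seeds the per-seed dispersion largely cancels, which is the statistical averaging the authors invoke to explain the smaller variations seen for 103Pd implants.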
Dynamic Statistical Characterization of Variation in Source Processes of Microseismic Events
NASA Astrophysics Data System (ADS)
Smith-Boughner, L.; Viegas, G. F.; Urbancic, T.; Baig, A. M.
2015-12-01
During a hydraulic fracture, water is pumped at high pressure into a formation. A proppant, typically sand, is later injected in the hope that it will make its way into a fracture, keep it open, and provide a path for the hydrocarbon to enter the well. This injection can create micro-earthquakes, generated by deformation within the reservoir during treatment. When these injections are monitored, thousands of microseismic events are recorded within several hundred cubic meters. For each well-located event, many source parameters are estimated, e.g., stress drop, Savage-Wood efficiency, and apparent stress. However, because we are evaluating outputs from a power-law process, the extent to which the failure is impacted by fluid injection or stress triggering is not immediately clear. To better detect differences in source processes, we use a set of dynamic statistical parameters which characterize various force-balance assumptions using the average distance to the nearest event, the event rate, the volume enclosed by the events, and the cumulative moment and energy from a group of events. One parameter, the Fracability index, approximates the ratio of viscous to elastic forcing and highlights differences in the response time of a rock to changes in stress. These dynamic parameters are applied to a database of more than 90 000 events in a shale-gas play in the Horn River Basin to characterize spatial-temporal variations in the source processes. In order to resolve these differences, a moving-window, nearest-neighbour approach was used. First, the center of mass of the local distribution was estimated for several source parameters. Then, a set of dynamic parameters characterizing the response of the rock was estimated. These techniques reveal changes in seismic efficiency and apparent stress that often coincide with marked changes in the Fracability index and other dynamic statistical parameters. Utilizing these approaches allowed for the characterization of fluid-injection-related processes.
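A sketch of the moving-window, nearest-neighbour local estimate described above, applied to a synthetic catalogue; the derived dynamic parameters themselves (e.g., the Fracability index) are not reproduced here.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(10)
    xyz = rng.uniform(0, 500, size=(5000, 3))     # event locations (m), synthetic
    stress_drop = rng.lognormal(0, 1, 5000)       # per-event source parameter

    # local "center of mass" of a source parameter around each event
    tree = cKDTree(xyz)
    _, idx = tree.query(xyz, k=50)                # 50 nearest neighbours per event
    local_mean = stress_drop[idx].mean(axis=1)
    print(f"local stress-drop means span {local_mean.min():.2f} "
          f"to {local_mean.max():.2f}")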
Caredda, Marco; Addis, Margherita; Pes, Massimo; Fois, Nicola; Sanna, Gabriele; Piredda, Giovanni; Sanna, Gavino
2018-06-01
The aim of this work was to measure the physico-chemical and colorimetric parameters of ovaries from Mugil cephalus caught in the Tortolì lagoon (south-east coast of Sardinia) along the steps of the manufacturing process of Bottarga, together with the rheological parameters of the final product. A lowering of all CIELab coordinates (lightness, redness, and yellowness) was observed during the manufacturing process. All CIELab parameters were used to build a Linear Discriminant Analysis (LDA) predictive model able to determine in real time whether the roes had been subjected to a freezing process, with a success in prediction of 100%. This model could be used to identify the origin of the roes, since only the imported ones are frozen. The major changes in all the studied parameters (p < 0.05) were noted in the drying step rather than in the salting step. After processing, Bottarga was characterized by a pH value of 5.46 (CV = 2.8) and a moisture content of 25% (CV = 8), whereas the typical per cent amounts of proteins, fat, and NaCl, calculated on the dried weight, were 56 (CV = 2), 34 (CV = 3), and 3.6 (CV = 17), respectively. The physico-chemical changes of the roes during the manufacturing process were largest for moisture, which decreased by 28%, whereas the protein and fat contents on the dried weight decreased by 3% and 2%, respectively. NaCl content increased by 3.1%. Principal Component Analyses (PCA) were also performed on all data to establish trends and relationships among all parameters. Hardness and consistency of Bottarga were negatively correlated with the moisture content (r = -0.87 and r = -0.88, respectively), while its adhesiveness was negatively correlated with the fat content (r = -0.68). Copyright © 2018. Published by Elsevier Ltd.
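A sketch of the LDA classification step under assumed CIELab readings; the class means and spreads below are invented for illustration and do not come from the Bottarga data.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(11)
    # hypothetical (L*, a*, b*) readings; frozen roes slightly darker and duller
    fresh = rng.normal([45, 25, 30], 3, size=(40, 3))
    frozen = rng.normal([40, 21, 26], 3, size=(40, 3))

    X = np.vstack([fresh, frozen])
    y = np.array([0] * 40 + [1] * 40)             # 0 = fresh, 1 = frozen

    lda = LinearDiscriminantAnalysis()
    print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())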
Ciesielski, Peter N.; Crowley, Michael F.; Nimlos, Mark R.; ...
2014-12-09
Biomass exhibits a complex microstructure of directional pores that impact how heat and mass are transferred within biomass particles during conversion processes. However, models of biomass particles used in simulations of conversion processes typically employ oversimplified geometries such as spheres and cylinders and neglect intraparticle microstructure. In this study, we develop 3D models of biomass particles with size, morphology, and microstructure based on parameters obtained from quantitative image analysis. We obtain measurements of particle size and morphology by analyzing large ensembles of particles that result from typical size reduction methods, and we delineate several representative size classes. Microstructural parameters, including cell wall thickness and cell lumen dimensions, are measured directly from micrographs of sectioned biomass. A general constructive solid geometry algorithm is presented that produces models of biomass particles based on these measurements. Next, we employ the parameters obtained from image analysis to construct models of three different particle size classes from two different feedstocks representing a hardwood poplar species (Populus tremuloides, quaking aspen) and a softwood pine (Pinus taeda, loblolly pine). Finally, we demonstrate the utility of the models and the effects of explicit microstructure by performing finite-element simulations of intraparticle heat and mass transfer, and the results are compared to similar simulations using traditional simplified geometries. In conclusion, we show how the behavior of particle models with more realistic morphology and explicit microstructure departs from that of spherical models in simulations of transport phenomena, and that species-dependent differences in microstructure impact simulation results in some cases.
Interdiffusion of Polycarbonate in Fused Deposition Modeling Welds
NASA Astrophysics Data System (ADS)
Seppala, Jonathan; Forster, Aaron; Satija, Sushil; Jones, Ronald; Migler, Kalman
2015-03-01
Fused deposition modeling (FDM), a now common and inexpensive additive manufacturing method, produces 3D objects by extruding molten polymer layer-by-layer. Compared to traditional polymer processing methods (injection, vacuum, and blow molding), FDM parts have inferior mechanical properties, surface finish, and dimensional stability. From a polymer processing point of view, the polymer-polymer weld between each layer limits the mechanical strength of the final part. Unlike traditional processing methods, where the polymer is uniformly melted and entangled, FDM welds are typically weaker due to the short time available for polymer interdiffusion and entanglement. To emulate the FDM process, thin film bilayers of polycarbonate/d-polycarbonate were annealed using scaled times and temperatures accessible in FDM. Shift factors from Time-Temperature Superposition, measured by small amplitude oscillatory shear, were used to calculate reasonable annealing times (min) at temperatures below the actual extrusion temperature. The extent of interdiffusion was then measured using neutron reflectivity. Analogous specimens were prepared to characterize the mechanical properties. FDM build parameters were then related to interdiffusion between welded layers and mechanical properties. Understanding the relationship between build parameters, interdiffusion, and mechanical strength will allow FDM users to print stronger parts in an intelligent manner rather than relying on trial-and-error and build-parameter lock-in.
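A sketch of how time-temperature superposition shift factors convert a short weld time at the extrusion temperature into equivalent annealing times at lower temperatures. The WLF form with the "universal" constants and a nominal polycarbonate Tg is assumed here for illustration; the study instead measured shift factors for its material by small amplitude oscillatory shear.

    def log_aT(T, T_ref=147.0, c1=17.44, c2=51.6):
        """WLF shift factor log10(aT); T_ref is a nominal Tg for polycarbonate
        and c1, c2 are the 'universal' WLF constants (assumptions)."""
        return -c1 * (T - T_ref) / (c2 + (T - T_ref))

    T_extrude, weld_time = 280.0, 1.0   # hypothetical: 1 s of diffusion at the nozzle
    for T_anneal in (250.0, 230.0, 210.0):
        ratio = 10 ** (log_aT(T_anneal) - log_aT(T_extrude))
        print(f"{T_anneal:.0f} C: anneal ~{weld_time * ratio:.0f} s to match "
              f"1 s of welding at {T_extrude:.0f} C")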
Results of scatterometer systems analysis for NASA/MSC Earth observation sensor evaluation program
NASA Technical Reports Server (NTRS)
Krishen, K.; Vlahos, N.; Brandt, O.; Graybeal, G.
1970-01-01
A systems evaluation of the 13.3 GHz scatterometer system is presented. The effects of phase error between the scatterometer channels, antenna pattern deviations, aircraft attitude deviations, environmental changes, and other related factors such as processing errors, system repeatability, and propeller modulation, are established. Furthermore, the reduction in system errors and calibration improvement is investigated by taking into account these parameter deviations. Typical scatterometer data samples are presented.
Multi-Temporal Analysis of Landsat Imagery for Bathymetry.
1983-05-01
This report describes the data set, typical results obtained when these data were used to implement the proposed procedures, and an interpretation of these analyses. Preprocessing steps (warping, etc.) were carried out as described in section 3.4 and the DIPS operator manuals, using the best available parameters for each date.
Simulation of plasma loading of high-pressure RF cavities
NASA Astrophysics Data System (ADS)
Yu, K.; Samulyak, R.; Yonehara, K.; Freemire, B.
2018-01-01
Muon beam-induced plasma loading of radio-frequency (RF) cavities filled with high pressure hydrogen gas with 1% dry air dopant has been studied via numerical simulations. The electromagnetic code SPACE, that resolves relevant atomic physics processes, including ionization by the muon beam, electron attachment to dopant molecules, and electron-ion and ion-ion recombination, has been used. Simulations studies have been performed in the range of parameters typical for practical muon cooling channels.
Significant locations in auxiliary data as seeds for typical use cases of point clustering
NASA Astrophysics Data System (ADS)
Kröger, Johannes
2018-05-01
Random greedy clustering and grid-based clustering are highly sensitive to their initial parameters. When used for point-data clustering in maps, they often change the apparent distribution of the underlying data. We propose a process that uses precomputed weighted seed points for the initialization of clusters, for example from local maxima in population density data. Exemplary results from the clustering of a dataset of petrol stations are presented.
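A minimal sketch of seeding a standard clustering algorithm with precomputed weighted seed points; the seed coordinates, which stand in for local maxima of population density, are hypothetical.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(12)
    points = rng.uniform(0, 100, size=(3000, 2))     # e.g., petrol station locations

    # precomputed seed points, e.g. local maxima in population density data
    seeds = np.array([[20.0, 30.0], [70.0, 75.0], [85.0, 15.0], [40.0, 90.0]])

    km = KMeans(n_clusters=len(seeds), init=seeds, n_init=1).fit(points)
    print("cluster sizes:", np.bincount(km.labels_))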
Diffusion and Interface Effects during Preparation of All-Solid Microstructured Fibers
Jens, Kobelke; Jörg, Bierlich; Katrin, Wondraczek; Claudia, Aichele; Zhiwen, Pan; Sonja, Unger; Kay, Schuster; Hartmut, Bartelt
2014-01-01
All-solid microstructured optical fibers (MOF) allow the realization of very flexible optical waveguide designs. They are prepared by stacking doped silica rods or canes in complex arrangements. Typical dopants in silica matrices are germanium and phosphorus to increase the refractive index (RI), or boron and fluorine to decrease the RI. However, the direct interface contact of stacking elements often causes interrelated chemical reactions or evaporation during thermal processing. The obtained fiber structures after the final drawing step thus tend to deviate from the targeted structure, risking degradation of their favored optical functionality. Dopant profiles and design parameters (e.g., the RI homogeneity of the cladding) are controlled by the combination of diffusion and the equilibrium conditions of evaporation reactions. We show simulation results of diffusion and thermal dissociation in germanium- and fluorine-doped silica rod arrangements according to the monitored geometrical disturbances in stretched canes or drawn fibers. The paper indicates geometrical limits of dopant structures at the sub-µm level depending on the dopant concentration and the thermal conditions during the drawing process. The presented results thus enable optimized planning of the preform parameters, avoiding unwanted alterations in dopant concentration profiles or in design parameters encountered during the drawing process. PMID:28788219
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most results reported previously perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates to variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.
Matero, Sanni; van Den Berg, Frans; Poutiainen, Sami; Rantanen, Jukka; Pajander, Jari
2013-05-01
The manufacturing of tablets involves many unit operations that possess multivariate and complex characteristics. The interactions between the material characteristics and process-related variation are presently not comprehensively analyzed due to univariate detection methods. As a consequence, current best practice to control a typical process is to not allow process-related factors to vary, i.e., to lock the production parameters. The problem related to the lack of sufficient process understanding remains: the variation within process and material properties is an intrinsic feature and cannot be compensated for with constant process parameters. Instead, a more comprehensive approach based on the use of multivariate tools for investigating processes should be applied. In the pharmaceutical field these methods are referred to as Process Analytical Technology (PAT) tools, which aim to achieve a thorough understanding of and control over the production process. PAT provides the framework for measurement as well as data analysis and control for in-depth understanding, leading to more consistent and safer drug products with fewer batch rejections. In the optimal situation, by applying these techniques, destructive end-product testing could be avoided. In this paper the most prominent multivariate data analysis and measuring tools within tablet manufacturing and basic research on unit operations are reviewed. Copyright © 2013 Wiley Periodicals, Inc.
Direct injection analysis of fatty and resin acids in papermaking process waters by HPLC/MS.
Valto, Piia; Knuutinen, Juha; Alén, Raimo
2011-04-01
A novel HPLC-atmospheric pressure chemical ionization/MS (HPLC-APCI/MS) method was developed for the rapid analysis of selected fatty and resin acids typically present in papermaking process waters. A mixture of palmitic, stearic, oleic, linolenic, and dehydroabietic acids was separated on a commercial HPLC column (a modified stationary C(18) phase) using gradient elution with methanol/0.15% formic acid (pH 2.5) as the mobile phase. The internal standard (myristic acid) method was used to calculate the correlation coefficients and to quantify the results. In a thorough assessment of quality parameters, a mixture of these model acids in aqueous media as well as in six different paper machine process waters was quantitatively determined. The measured quality parameters, such as selectivity, linearity, precision, and accuracy, clearly indicated that, compared with traditional gas chromatographic techniques, the simple method developed provided a faster chromatographic analysis with almost real-time monitoring of these acids. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
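For orientation, internal-standard quantitation generally follows the standard relative-response-factor calculation (a generic sketch, not necessarily the paper's exact procedure):

```latex
\[
\mathrm{RRF} = \frac{A_{\mathrm{analyte}}/c_{\mathrm{analyte}}}{A_{\mathrm{IS}}/c_{\mathrm{IS}}},
\qquad
c_{\mathrm{analyte}} = \frac{A_{\mathrm{analyte}}}{A_{\mathrm{IS}}} \cdot \frac{c_{\mathrm{IS}}}{\mathrm{RRF}}
\]
```

with A the peak areas, c the concentrations, and myristic acid as the internal standard (IS); the RRF is obtained from calibration standards.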
SOA formation by biogenic and carbonyl compounds: data evaluation and application.
Ervens, Barbara; Kreidenweis, Sonia M
2007-06-01
The organic fraction of atmospheric aerosols affects the physical and chemical properties of the particles and their role in the climate system. Current models greatly underpredict secondary organic aerosol (SOA) mass. Based on a compilation of literature studies that address SOA formation, we discuss different parameters that affect the SOA formation efficiency of biogenic compounds (alpha-pinene, isoprene) and aliphatic aldehydes (glyoxal, hexanal, octanal, hexadienal). Applying a simple model, we find that the estimated SOA mass after one week of aerosol processing under typical atmospheric conditions is increased by a few microg m(-3) (low NO(x) conditions). Acid-catalyzed reactions can create > 50% more SOA mass than processes under neutral conditions; however, other parameters such as the concentration ratio of organics/NO(x), relative humidity, and absorbing mass are more significant. The assumption of irreversible SOA formation not limited by equilibrium in the particle phase or by depletion of the precursor leads to unrealistically high SOA masses for some of the assumptions we made (surface vs volume controlled processes).
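Models of this kind typically rest on absorptive gas-particle partitioning; the widely used Odum two-product form is shown here as a reference point (an assumption for illustration, not necessarily the exact model applied above):

```latex
\[
Y = M_o \sum_{i} \frac{\alpha_i\, K_{om,i}}{1 + K_{om,i}\, M_o}
\]
```

where Y is the SOA yield, M_o the absorbing organic mass, α_i the mass-based product yields, and K_om,i the partitioning coefficients; the form makes explicit why the absorbing mass is such a significant parameter.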
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
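A compact sketch of the two global sensitivity measures on a toy stand-in metamodel (the model, bounds, and sample sizes are illustrative; the study's actual surrogate approximates a laser-drilling simulation):

```python
# Sketch of (i) Morris elementary effects for screening and (ii) first-order
# Sobol indices via a Saltelli-style estimator, run on a cheap hypothetical
# surrogate f(x1, x2, x3) in place of the real metamodel.
import numpy as np

def metamodel(x):                      # hypothetical surrogate model
    return x[..., 0] + 2.0 * x[..., 1] ** 2 + 0.5 * x[..., 0] * x[..., 2]

rng = np.random.default_rng(1)
d, N = 3, 4096

# --- Elementary effects (screening): mu* = mean(|EE_i|) per parameter ---
delta = 0.1
base = rng.uniform(0, 1 - delta, size=(N, d))
mu_star = []
for i in range(d):
    stepped = base.copy()
    stepped[:, i] += delta
    ee = (metamodel(stepped) - metamodel(base)) / delta
    mu_star.append(np.mean(np.abs(ee)))

# --- First-order Sobol indices (variance decomposition) ---
A, B = rng.uniform(size=(N, d)), rng.uniform(size=(N, d))
fA, fB = metamodel(A), metamodel(B)
var = np.var(np.concatenate([fA, fB]))
S1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                # A with column i taken from B
    S1.append(np.mean(fB * (metamodel(ABi) - fA)) / var)

print("mu* :", np.round(mu_star, 3))
print("S1  :", np.round(S1, 3))
```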
NASA Astrophysics Data System (ADS)
Haas, Edwin; Klatt, Steffen; Kraus, David; Werner, Christian; Ruiz, Ignacio Santa Barbara; Kiese, Ralf; Butterbach-Bahl, Klaus
2014-05-01
Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional and national scales and are outlined as the most advanced methodology (Tier 3) for national emission inventories in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems like arable land and grasslands and are thus thought to be widely applicable at various spatial and temporal scales. The high complexity of ecosystem processes mirrored by such models requires a large number of model parameters. Many of those parameters are lumped parameters describing simultaneously the effect of environmental drivers on e.g. microbial community activity and individual processes. Thus, the precise quantification of true parameter states is often difficult or even impossible. As a result, model uncertainty does not solely originate from input uncertainty but is also subject to parameter-induced uncertainty. In this study we quantify regional parameter-induced model uncertainty on nitrous oxide (N2O) emissions and nitrate (NO3) leaching from arable soils of Saxony (Germany) using the biogeochemical model LandscapeDNDC. For this we calculate a regional inventory using a joint parameter distribution for key parameters describing microbial C and N turnover processes as obtained by a Bayesian calibration study. We representatively sampled 400 different parameter vectors from the discrete joint parameter distribution comprising approximately 400,000 parameter combinations and used these to calculate 400 individual realizations of the regional inventory. The spatial domain (represented by 4042 polygons) is set up with spatially explicit soil and climate information and a region-typical 3-year crop rotation consisting of winter wheat, rapeseed, and winter barley. Average N2O emission from arable soils in the state of Saxony across all 400 realizations was 1.43 ± 1.25 [kg N / ha] with a median value of 1.05 [kg N / ha]. Using the default IPCC emission factor approach (Tier 1) for direct emissions reveals a higher average N2O emission of 1.51 [kg N / ha] due to fertilizer use. In the regional uncertainty quantification the 20% likelihood range for N2O emissions is 0.79 - 1.37 [kg N / ha] (50% likelihood: 0.46 - 2.05 [kg N / ha]; 90% likelihood: 0.11 - 4.03 [kg N / ha]). Respective quantities were calculated for nitrate leaching. The method has proven its applicability to quantify parameter-induced uncertainty of simulated regional greenhouse gas emission and nitrate leaching inventories using process-based biogeochemical models.
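The propagation step can be summarized in a few lines (a synthetic parameter distribution and a toy response stand in for LandscapeDNDC, for illustration only):

```python
# Hedged sketch of parameter-induced uncertainty propagation: draw parameter
# vectors from a (here synthetic) joint posterior, evaluate the model once per
# draw, and report likelihood ranges of the resulting regional inventory.
import numpy as np

rng = np.random.default_rng(42)
posterior = rng.lognormal(mean=0.0, sigma=0.6, size=(400, 3))  # 400 sampled parameter vectors

def regional_n2o(theta):      # stand-in for one regional model realization [kg N/ha]
    return 1.0 * theta[0] * theta[1] ** 0.5 / theta[2] ** 0.2

inventory = np.array([regional_n2o(t) for t in posterior])
for p in (20, 50, 90):
    lo, hi = np.percentile(inventory, [50 - p / 2, 50 + p / 2])
    print(f"{p}% likelihood range: {lo:.2f} - {hi:.2f} kg N/ha")
print("mean:", inventory.mean().round(2), "median:", np.median(inventory).round(2))
```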
Lulé, Dorothée; Schulze, Ulrike M E; Bauer, Kathrin; Schöll, Friederike; Müller, Sabine; Fladung, Anne-Katharina; Uttner, Ingo
2014-06-01
Psychopathological changes and dysfunction in emotion processing have been described for anorexia nervosa (AN). Yet, findings are applicable to adult patients only. Furthermore, the potential discriminative power in clinical practice in relation to clinical parameters has to be discussed. The aim of this study was to investigate psychopathology and emotional face processing in adolescent female patients with AN. In a sample of 15 adolescent female patients with AN (16.2 years, SD ± 1.26) and 15 age- and sex-matched controls, we assessed alexithymia, depression, anxiety and empathy in addition to emotion labelling and social information processing. AN patients had significantly higher alexithymia, higher levels of depression, and higher state and trait anxiety compared to controls. There was a trend toward a lower ability to recognize disgust. Happiness, as a positive emotion, was recognized better. All facial expressions were recognized significantly faster by AN patients. Associations between pathological eating behaviour and trait anxiety were seen. In accordance with the stress reduction hypothesis, the typical psychopathology of alexithymia, anxiety and depression is prevalent in female adolescent AN patients. It is present independently of physical stability. The pathogenesis of AN is multifactorial and already fully present in adolescence. An additional reinforcement process can be discussed. For clinical practice, these parameters might have better potential as early prognostic factors for AN than physical parameters, and possible implications for intervention are given.
Jeon, Ju Hyeong; Bhamidipati, Manjari; Sridharan, BanuPriya; Scurto, Aaron M.; Berkland, Cory J.; Detamore, Michael S.
2015-01-01
Microsphere-based polymeric tissue-engineered scaffolds offer the advantage of shape-specific constructs with excellent spatiotemporal control and interconnected porous structures. The use of these highly versatile scaffolds requires a method to sinter the discrete microspheres together into a cohesive network, typically with the use of heat or organic solvents. We previously introduced subcritical CO2 as a sintering method for microsphere-based scaffolds; here we further explored the effect of processing parameters. Gaseous or subcritical CO2 was used for making the scaffolds, and various pressures, ratios of lactic acid to glycolic acid in poly(lactic acid-co-glycolic acid), and amounts of NaCl particles were explored. By changing these parameters, scaffolds with different mechanical properties and morphologies were prepared. The preferred range of applied subcritical CO2 was 15–25 bar. Scaffolds prepared at 25 bar with lower lactic acid ratios and without NaCl particles had a higher stiffness, while the constructs made at 15 bar, lower glycolic acid content, and with salt granules had lower elastic moduli. Human umbilical cord mesenchymal stromal cells (hUCMSCs) seeded on the scaffolds demonstrated that cells penetrate the scaffolds and remain viable. Overall, the study demonstrated the dependence of the optimal CO2 sintering parameters on the polymer and conditions, and identified desirable CO2 processing parameters to employ in the sintering of microsphere-based scaffolds as a more benign alternative to heat-sintering or solvent-based sintering methods.
A GUI-based Tool for Bridging the Gap between Models and Process-Oriented Studies
NASA Astrophysics Data System (ADS)
Kornfeld, A.; Van der Tol, C.; Berry, J. A.
2014-12-01
Models used for simulation of photosynthesis and transpiration by canopies of terrestrial plants typically have subroutines such as STOMATA.F90, PHOSIB.F90 or BIOCHEM.m that solve for photosynthesis and associated processes. Key parameters such as the Vmax for Rubisco and temperature response parameters are required by these subroutines. These are often taken from the literature or determined by separate analysis of gas exchange experiments. It is useful to note, however, that such subroutines can be extracted and run as standalone models to simulate leaf responses collected in gas exchange experiments. Furthermore, there are excellent non-linear fitting tools that can be used to optimize the parameter values in these models to fit the observations. Ideally the Vmax fit in this way should be the same as that determined by a separate analysis, but it may not be, because of interactions with other kinetic constants and the temperature dependence of these in the full subroutine. We submit that it is more useful to fit the complete model to the calibration experiments rather than to fit its constants in disaggregated form. We designed a graphical user interface (GUI) based tool that uses gas exchange photosynthesis data to directly estimate model parameters in the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model and, at the same time, allows researchers to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. We have also ported some of this functionality to an Excel spreadsheet, which could be used as a teaching tool to help integrate process-oriented and model-oriented studies.
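A hedged sketch of fitting a complete response routine to gas-exchange data, here with a simple rectangular-hyperbola light response standing in for the SCOPE biochemical subroutine (the data and parameter names are illustrative):

```python
# Nonlinear fit of a leaf light-response model to (hypothetical) gas-exchange
# data. The real workflow would call the extracted SCOPE/BIOCHEM subroutine
# here instead of this simplified A(Q) expression.
import numpy as np
from scipy.optimize import curve_fit

def light_response(Q, Amax, alpha, Rd):
    # Net assimilation: saturating light response minus dark respiration.
    return (alpha * Q * Amax) / (alpha * Q + Amax) - Rd

Q = np.array([0, 50, 100, 200, 400, 800, 1200, 1800.0])          # PAR, umol m-2 s-1
A_obs = np.array([-1.1, 2.5, 5.0, 8.3, 11.9, 14.6, 15.8, 16.3])  # hypothetical observations

popt, pcov = curve_fit(light_response, Q, A_obs, p0=[20.0, 0.05, 1.0])
perr = np.sqrt(np.diag(pcov))            # 1-sigma uncertainties of the fitted parameters
print(dict(zip(["Amax", "alpha", "Rd"], popt.round(2))), perr.round(3))
```

Fitting all parameters jointly, as argued above, lets their interactions and temperature dependences be captured in one optimization instead of in separate analyses.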
Particle Bonding Mechanism in Cold Gas Dynamic Spray: A Three-Dimensional Approach
NASA Astrophysics Data System (ADS)
Zhu, Lin; Jen, Tien-Chien; Pan, Yen-Ting; Chen, Hong-Sheng
2017-12-01
Cold gas dynamic spray (CGDS) is a surface coating process that uses highly accelerated particles to form the surface coating. In the CGDS process, metal particles with a diameter of 1-50 µm are carried by a gas stream at high pressure (typically 20-30 atm) through a de Laval-type nozzle to achieve supersonic velocity upon impact onto the substrate. Typically, the impact velocity ranges between 300 and 1200 m/s in the CGDS process. When the particle is accelerated to its critical velocity, which is defined as the minimum in-flight velocity at which it can deposit on the substrate, adiabatic shear instabilities will occur. Herein, to ascertain the effect of the critical velocities of different particle sizes on the bonding efficiency in the CGDS process, three-dimensional numerical simulations of the single-particle deposition process were performed. In the CGDS process, one of the most important parameters determining the bonding strength with the substrate is the particle impact temperature. It is hypothesized that the particle will bond to the substrate when its impact velocity surpasses the critical velocity, at which the interface can achieve 60% of the melting temperature of the particle material (Ref 1, 2). Therefore, the critical velocity should be a main parameter determining coating quality. Note that the particle critical velocity is determined not only by its size, but also by its material properties. This study numerically investigates the critical velocity for the particle deposition process in CGDS. In the present numerical analysis, copper (Cu) was chosen as the particle material and aluminum (Al) as the substrate material. The impact velocities were selected between 300 and 800 m/s, increasing in steps of 100 m/s. The simulation results reveal the temporal and spatial interfacial temperature distribution and deformation between particle(s) and substrate. Finally, a comparison is carried out between the computed results and experimental data.
NASA Astrophysics Data System (ADS)
Ugon, B.; Nandong, J.; Zang, Z.
2017-06-01
The presence of unstable dead-time systems in process plants often poses a daunting challenge for the design of standard PID controllers, which are intended not only to provide closed-loop stability but also to give good overall performance and robustness. In this paper, we conduct a stability analysis of a double-loop control scheme based on the Routh-Hurwitz stability criteria. We propose to use this double-loop control scheme, which employs two P/PID controllers, to control first-order or second-order unstable dead-time processes typically found in the process industries. Based on the Routh-Hurwitz necessary and sufficient stability criteria, we establish several stability regions which enclose the P/PID parameter values that guarantee closed-loop stability of the double-loop control scheme. A systematic tuning rule is developed for the purpose of obtaining the optimal P/PID parameter values within the established regions. The effectiveness of the proposed tuning rule is demonstrated using several numerical examples, and the results are compared with some well-established tuning methods reported in the literature.
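A small sketch of the Routh-Hurwitz machinery behind such stability regions (a generic implementation; the example polynomial is illustrative, not one of the paper's loops):

```python
# Build the Routh array for a characteristic polynomial and declare stability
# when the first column has no sign changes (all entries positive, assuming a
# positive leading coefficient).
import numpy as np

def routh_is_stable(coeffs, eps=1e-9):
    """coeffs: characteristic polynomial coefficients, highest power first."""
    c = np.asarray(coeffs, dtype=float)
    n = len(c)
    rows = [c[0::2].copy(), c[1::2].copy()]
    width = len(rows[0])
    rows[1] = np.pad(rows[1], (0, width - len(rows[1])))
    for _ in range(n - 2):
        a, b = rows[-2], rows[-1]
        pivot = b[0] if abs(b[0]) > eps else eps     # epsilon trick for a zero pivot
        new = np.zeros(width)
        for j in range(width - 1):
            new[j] = (pivot * a[j + 1] - a[0] * b[j + 1]) / pivot
        rows.append(new)
    return all(r[0] > 0 for r in rows[:n])

# E.g. a third-order closed-loop polynomial s^3 + 3 s^2 + 4 s + 1:
print(routh_is_stable([1, 3, 4, 1]))    # True -> this gain combination is inside the region
```

Sweeping candidate controller gains through such a test traces out the stability regions within which a tuning rule can then search.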
Infrared thermography of welding zones produced by polymer extrusion additive manufacturing.
Seppala, Jonathan E; Migler, Kalman D
2016-10-01
In common thermoplastic additive manufacturing (AM) processes, a solid polymer filament is melted, extruded through a rastering nozzle, welded onto neighboring layers and solidified. The temperature of the polymer at each of these stages is the key parameter governing these non-equilibrium processes, but due to its strong spatial and temporal variations, it is difficult to measure accurately. Here we utilize infrared (IR) imaging - in conjunction with necessary reflection corrections and calibration procedures - to measure these temperature profiles of a model polymer during 3D printing. From the temperature profiles of the printed layer (road) and sublayers, the temporal profile of the crucially important weld temperatures can be obtained. Under typical printing conditions, the weld temperature decreases at a rate of approximately 100 °C/s and remains above the glass transition temperature for approximately 1 s. These measurement methods are a first step in the development of strategies to control and model the printing processes and in the ability to develop models that correlate critical part strength with material and processing parameters.
Hupp, C.R.; Pierce, Aaron R.; Noe, G.B.
2009-01-01
Human alterations along stream channels and within catchments have affected fluvial geomorphic processes worldwide. Typically these alterations reduce the ecosystem services that functioning floodplains provide; in this paper we are concerned with the sediment and associated material trapping service. Similarly, these alterations may negatively impact the natural ecology of floodplains through reductions in suitable habitats, biodiversity, and nutrient cycling. Dams, stream channelization, and levee/canal construction are common human alterations along Coastal Plain fluvial systems. We use three case studies to illustrate these alterations and their impacts on floodplain geomorphic and ecological processes. They include: 1) dams along the lower Roanoke River, North Carolina, 2) stream channelization in west Tennessee, and 3) multiple impacts including canal and artificial levee construction in the central Atchafalaya Basin, Louisiana. Human alterations typically shift affected streams away from natural dynamic equilibrium where net sediment deposition is, approximately, in balance with net erosion. Identification and understanding of critical fluvial parameters (e.g., stream gradient, grain-size, and hydrography) and spatial and temporal sediment deposition/erosion process trajectories should facilitate management efforts to retain and/or regain important ecosystem services. © 2009, The Society of Wetland Scientists.
NASA Astrophysics Data System (ADS)
Gedalin, M.; Liverts, M.; Balikhin, M. A.
2008-05-01
Field-aligned and gyrophase-bunched ion beams are observed in the foreshock of the Earth's bow shock. One of the mechanisms proposed for their production is non-specular reflection at the shock front. We study the distributions which are formed at the stationary quasi-perpendicular shock front within the same process which is responsible for the generation of reflected ions and transmitted gyrating ions. The test-particle motion analysis in a model shock allows one to identify the parameters which control the efficiency of the process and the features of the escaping ion distribution. These parameters are: the angle between the shock normal and the upstream magnetic field, the ratio of the ion thermal velocity to the upstream flow velocity, and the cross-shock potential. A typical distribution of escaping ions exhibits a bimodal pitch-angle distribution (in the plasma rest frame).
Methodology for the systems engineering process. Volume 3: Operational availability
NASA Technical Reports Server (NTRS)
Nelson, J. H.
1972-01-01
A detailed description and explanation of the operational availability parameter is presented. The fundamental mathematical basis for operational availability is developed, and its relationship to a system's overall performance effectiveness is illustrated within the context of identifying specific availability requirements. Thus, in attempting to provide a general methodology for treating both hypothetical and existing availability requirements, the concept of an availability state, in conjunction with the more conventional probability-time capability, is investigated. In this respect, emphasis is focused upon a balanced analytical and pragmatic treatment of operational availability within the system design process. For example, several applications of operational availability to typical aerospace systems are presented, encompassing the techniques of Monte Carlo simulation, system performance availability trade-off studies, analytical modeling of specific scenarios, as well as the determination of launch-on-time probabilities. Finally, an extensive bibliography is provided to indicate further levels of depth and detail of the operational availability parameter.
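In its simplest scalar form, operational availability is commonly written as follows (a textbook definition, shown for orientation rather than as the report's exact notation):

```latex
\[
A_o = \frac{\text{uptime}}{\text{uptime} + \text{downtime}}
    = \frac{\mathrm{MTBM}}{\mathrm{MTBM} + \mathrm{MDT}}
\]
```

with MTBM the mean time between maintenance actions and MDT the mean downtime per action; the availability-state treatment above generalizes this scalar to a probability-time capability.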
NASA Astrophysics Data System (ADS)
Li, Chenguang; Yang, Xianjun
2016-10-01
The Magnetized Plasma Fusion Reactor concept is proposed as a magneto-inertial fusion approach based on a target plasma created through the collision merging of two oppositely translating field-reversed-configuration plasmas, which is then compressed by an imploding liner driven by a pulsed-power driver. The target creation process is described by a two-dimensional magnetohydrodynamics model, yielding typical target parameters. The implosion process and the fusion reaction are modeled by a simple zero-dimensional model, taking into account alpha-particle heating and bremsstrahlung radiation loss. The compression of the target can be 2D cylindrical, or 2.4D with additional axial contraction taken into account. The dynamics of the liner compression and fusion burning are simulated, and the optimum fusion gain and the associated target parameters are predicted. Scientific breakeven could be achieved under the optimized conditions.
Black carbon aerosol size in snow.
Schwarz, J P; Gao, R S; Perring, A E; Spackman, J R; Fahey, D W
2013-01-01
The effect of anthropogenic black carbon (BC) aerosol on snow is of enduring interest due to its consequences for climate forcing. Until now, too little attention has been focused on BC's size in snow, an important parameter affecting BC light absorption in snow. Here we present the first observations of this parameter, revealing that BC can be shifted to larger sizes in snow than are typically seen in the atmosphere, in part due to the processes associated with BC removal from the atmosphere. Mie theory analysis indicates a corresponding reduction in BC absorption in snow of 40%, making BC size in snow the dominant source of uncertainty in BC's absorption properties for calculations of BC's snow albedo climate forcing. The shift reduces the estimated BC global mean snow forcing by 30%, and has scientific implications for our understanding of snow albedo and the processing of atmospheric BC aerosol in snowfall.
Optimally designing games for behavioural research
Rafferty, Anna N.; Zaharia, Matei; Griffiths, Thomas L.
2014-01-01
Computer games can be motivating and engaging experiences that facilitate learning, leading to their increasing use in education and behavioural experiments. For these applications, it is often important to make inferences about the knowledge and cognitive processes of players based on their behaviour. However, designing games that provide useful behavioural data is a difficult task that typically requires significant trial and error. We address this issue by creating a new formal framework that extends optimal experiment design, used in statistics, to apply to game design. In this framework, we use Markov decision processes to model players' actions within a game, and then make inferences about the parameters of a cognitive model from these actions. Using a variety of concept learning games, we show that in practice, this method can predict which games will result in better estimates of the parameters of interest. The best games require only half as many players to attain the same level of precision.
A new lumped-parameter approach to simulating flow processes in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, R.W.; Hadgu, T.; Bodvarsson, G.S.
We have developed a new lumped-parameter dual-porosity approach to simulating unsaturated flow processes in fractured rocks. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. This equation is a generalization of the Warren-Root equation, but unlike the Warren-Root equation, is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into a computational module, compatible with the TOUGH simulator, to serve as a source/sink term for fracture elements. The new approach achieves accuracy comparable to simulations in which the matrix blocks are discretized, but typically requires an order of magnitude less computational time.
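For orientation, the classical Warren-Root interflow term that the new nonlinear equation generalizes has the linear form (standard notation, an illustrative reference rather than the report's equation):

```latex
\[
q_{fm} = \frac{\sigma\, k_m}{\mu}\,\bigl(P_f - P_m\bigr)
\]
```

where σ is a shape factor for the matrix blocks, k_m the matrix permeability, μ the fluid viscosity, and P_f and P_m the liquid-phase pressures in the fractures and the matrix, respectively.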
Birnhack, Liat; Nir, Oded; Telzhenski, Marina; Lahav, Ori
2015-01-01
Deliberate struvite (MgNH4PO4) precipitation from wastewater streams has been the topic of extensive research in the last two decades and is expected to gather worldwide momentum in the near future as a P-reuse technique. A wide range of operational alternatives has been reported for struvite precipitation, including the application of various Mg(II) sources, two pH elevation techniques and several Mg:P ratios and pH values. The choice of each operational parameter within the struvite precipitation process affects process efficiency, the overall cost and also the choice of other operational parameters. Thus, a comprehensive simulation program that takes all these parameters into account is essential for process design. This paper introduces a systematic decision-supporting tool which accepts a wide range of possible operational parameters, including unconventional Mg(II) sources (i.e. seawater and seawater nanofiltration brines). The study is supplied with a free-of-charge computerized tool (http://tx.technion.ac.il/~agrengn/agr/Struvite_Program.zip) which links two computer platforms (Python and PHREEQC) for executing thermodynamic calculations according to predefined kinetic considerations. The model can be (inter alia) used for optimizing the struvite-fluidized bed reactor process operation with respect to P removal efficiency, struvite purity and economic feasibility of the chosen alternative. The paper describes the algorithm and its underlying assumptions, and shows results (i.e. effluent water quality, cost breakdown and P removal efficiency) of several case studies consisting of typical wastewaters treated at various operational conditions.
Fabrication of boron sputter targets
Makowiecki, Daniel M.; McKernan, Mark A.
1995-01-01
A process for fabricating high-density boron sputtering targets with sufficient mechanical strength to function reliably at typical magnetron sputtering power densities and at normal process parameters. The process involves the fabrication of a high-density boron monolith by hot isostatically compacting high-purity (99.9%) boron powder, machining the boron monolith to the final dimensions, and brazing the finished boron piece to a matching boron carbide (B4C) piece by placing aluminum foil between them and applying pressure and heat in a vacuum. An alternative is the application of aluminum metallization to the back of the boron monolith by vacuum deposition. Also, a titanium-based vacuum braze alloy can be used in place of the aluminum foil.
Warehouse stocking optimization based on dynamic ant colony genetic algorithm
NASA Astrophysics Data System (ADS)
Xiao, Xiaoxu
2018-04-01
In view of the various orders handled by FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the enterprise's warehousing units, thereby improving the warehouse logistics and the processing speed of outgoing orders. In addition, relevant intelligent algorithms for optimizing the stocking-route problem are analyzed. The ant colony algorithm and the genetic algorithm, which both have good applicability, are studied in detail. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path optimization problem model is taken as an example to prove the effectiveness of the parameter optimization.
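A rough sketch of the hybrid scheme (all instance sizes, parameter ranges, and GA settings are illustrative, and the scoring ACO is deliberately minimal; this is not the paper's implementation):

```python
# A genetic search over the ant colony algorithm's control parameters
# (alpha, beta, rho); each candidate is scored by the tour length a short ACO
# run achieves on a small synthetic stocking-route (TSP-like) instance.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, (15, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(15) * 1e9

def aco_tour_length(alpha, beta, rho, n_ants=10, n_iter=30):
    tau, best = np.ones_like(D), np.inf
    for _ in range(n_iter):
        for _ant in range(n_ants):
            tour, unvisited = [0], set(range(1, 15))
            while unvisited:
                i, cand = tour[-1], np.array(sorted(unvisited))
                w = tau[i, cand] ** alpha * (1.0 / D[i, cand]) ** beta
                nxt = int(rng.choice(cand, p=w / w.sum()))
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(D[tour[k], tour[k + 1]] for k in range(14)) + D[tour[-1], 0]
            if length < best:
                best, best_tour = length, tour
        tau *= 1 - rho                                  # pheromone evaporation
        for k in range(14):                             # reinforce the best tour found
            tau[best_tour[k], best_tour[k + 1]] += 1.0 / best
    return best

# GA over (alpha, beta, rho): truncation selection plus Gaussian mutation.
pop = rng.uniform([0.5, 1.0, 0.1], [2.0, 5.0, 0.7], size=(8, 3))
for gen in range(5):
    scores = np.array([aco_tour_length(*ind) for ind in pop])
    parents = pop[scores.argsort()[:4]]                 # keep the best half
    children = parents + rng.normal(0, 0.05, parents.shape)
    pop = np.vstack([parents, np.clip(children, [0.1, 0.1, 0.01], [3, 6, 0.9])])
print("best (alpha, beta, rho):", pop[0].round(2))
```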
Atmospheric-pressure electric discharge as an instrument of chemical activation of water solutions
NASA Astrophysics Data System (ADS)
Rybkin, V. V.; Shutov, D. A.
2017-11-01
Results of experimental studies and numerical simulations of physicochemical characteristics of plasmas generated in different types of atmospheric-pressure discharges (pulsed streamer corona, gliding electric arc, dielectric barrier discharge, glow-discharge electrolysis, diaphragmatic discharge, and dc glow discharge) used to initiate various chemical processes in water solutions are analyzed. Typical reactor designs are considered. Data on the power supply characteristics, plasma electron parameters, gas temperatures, and densities of active particles in different types of discharges excited in different gases and their dependences on the external parameters of discharges are presented. The chemical composition of active particles formed in water is described. Possible mechanisms of production and loss of plasma particles are discussed.
Solar oxidation and removal of arsenic--Key parameters for continuous flow applications.
Gill, L W; O'Farrell, C
2015-12-01
Solar oxidation to remove arsenic from water has previously been investigated as a batch process. This research investigated the kinetic parameters for the design of a continuous-flow solar reactor to remove arsenic from contaminated groundwater supplies. Continuous-flow recirculated batch experiments were carried out under artificial UV light to investigate the effect of different parameters on arsenic removal efficiency. Inlet water arsenic concentrations of up to 1000 μg/L were reduced to below 10 μg/L, requiring 12 mg/L iron after receiving 12 kJUV/L radiation. Citrate, however, was somewhat surprisingly found to have a detrimental effect on the removal process in the continuous-flow reactor studies, which is contrary to results found in batch-scale tests. The impact of other typical groundwater quality parameters (phosphate and silica) on the process, due to their competition with arsenic for photooxidation products, revealed a much higher sensitivity to phosphate ions than to silicate. Other results showed no benefit from the addition of a TiO2 photocatalyst but enhanced arsenic removal at higher temperatures up to 40 °C. Overall, these results have indicated the kinetic envelope from which a continuous-flow SORAS single-pass system could be more confidently designed for a full-scale community groundwater application at the village level. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Alconcel, L. N. S.; Fox, P.; Brown, P.; Oddy, T. M.; Lucek, E. L.; Carr, C. M.
2014-07-01
Over the course of more than 10 years in operation, the calibration parameters of the outboard fluxgate magnetometer (FGM) sensors on the four Cluster spacecraft are shown to be remarkably stable. The parameters are refined on the ground during the rigorous FGM calibration process performed for the Cluster Active Archive (CAA). Fluctuations in some parameters show some correlation with trends in the sensor temperature (orbit position). The parameters, particularly the offsets, of the spacecraft 1 (C1) sensor have undergone more long-term drift than those of the other spacecraft (C2, C3 and C4) sensors. Some potentially anomalous calibration parameters have been identified and will require further investigation in future. However, the observed long-term stability demonstrated in this initial study gives confidence in the accuracy of the Cluster magnetic field data. For the most sensitive ranges of the FGM instrument, the offset drift is typically 0.2 nT per year in each sensor on C1 and negligible on C2, C3 and C4.
Effects of aged sorption on pesticide leaching to groundwater simulated with PEARL.
Boesten, Jos J T I
2017-01-15
Leaching to groundwater is an important element of the regulatory risk assessment of pesticides in western countries. Including aged sorption in this assessment is relevant because there is ample evidence of this process and because it leads to a decrease in simulated leaching. This work assesses the likely magnitude of this decrease for four groundwater scenarios used for regulatory purposes in the EU (from the UK, Portugal, Austria and Greece) and for ranges of aged-sorption parameters and substance properties using the PEARL model. Three aged-sorption parameter sets were derived from the literature, representing approximately 5th, 50th and 95th percentile cases for the magnitude of the effect of aged sorption on leaching concentrations (called S, M and L, respectively). The selection of these percentile cases was based only on the f_NE parameter (i.e. the ratio of the aged-sorption and the equilibrium sorption coefficients) because leaching was much more affected by the uncertainty in this parameter than by the uncertainty in the desorption rate coefficient of these sites (k_d). For the UK scenario, the annual flux concentration of pesticide leaching at 1 m depth decreased by typically a factor of 5, 30 and >1000 for the S, M and L parameter sets, respectively. This decrease by a factor of 30 for the M parameter set appeared to be approximately valid also for the other three scenarios. Decreasing the Freundlich exponent N from 0.9 to 0.7 for the M parameter set increased this factor of 30 to a factor of typically 1000, considering all four scenarios. The aged-sorption sites were close to their equilibrium conditions during the leaching simulations for two of the four scenarios (for all substances considered and the M parameter set), but this was not the case for the other two scenarios. Copyright © 2016 Elsevier B.V. All rights reserved.
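Aged sorption in PEARL-type models is commonly represented by a kinetic sorption site relaxing toward a Freundlich equilibrium; a generic form consistent with the parameters named above (f_NE, k_d, N) is shown here as an assumption for illustration:

```latex
\[
\frac{dX_{NE}}{dt}
= k_d \left( f_{NE}\, K_{F,eq}\, c_{ref} \left( \frac{c}{c_{ref}} \right)^{N} - X_{NE} \right)
\]
```

where X_NE is the content sorbed at the non-equilibrium sites, c the liquid-phase concentration, and K_F,eq the equilibrium Freundlich coefficient; a larger f_NE shifts more pesticide into slowly desorbing storage and hence lowers simulated leaching.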
NASA Astrophysics Data System (ADS)
Ceriotti, G.; Porta, G. M.; Geloni, C.; Dalla Rosa, M.; Guadagnini, A.
2017-09-01
We develop a methodological framework and mathematical formulation which yields estimates of the uncertainty associated with the amounts of CO2 generated by Carbonate-Clays Reactions (CCR) in large-scale subsurface systems to assist characterization of the main features of this geochemical process. Our approach couples a one-dimensional compaction model, providing the dynamics of the evolution of porosity, temperature and pressure along the vertical direction, with a chemical model able to quantify the partial pressure of CO2 resulting from mineral and pore water interaction. The modeling framework we propose allows (i) estimating the depth at which the source of gases is located and (ii) quantifying the amount of CO2 generated, based on the mineralogy of the sediments involved in the basin formation process. A distinctive objective of the study is the quantification of the way the uncertainty affecting chemical equilibrium constants propagates to model outputs, i.e., the flux of CO2. These parameters are considered as key sources of uncertainty in our modeling approach because temperature and pressure distributions associated with deep burial depths typically fall outside the range of validity of commonly employed geochemical databases and typically used geochemical software. We also analyze the impact of the relative abundance of primary phases in the sediments on the activation of CCR processes. As a test bed, we consider a computational study where pressure and temperature conditions are representative of those observed in real sedimentary formations. Our results are conducive to the probabilistic assessment of (i) the characteristic pressure and temperature at which CCR leads to generation of CO2 in sedimentary systems, and (ii) the order of magnitude of the CO2 generation rate that can be associated with CCR processes.
Simulation of plasma loading of high-pressure RF cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, K.; Samulyak, R.; Yonehara, K.
2018-01-11
Muon beam-induced plasma loading of radio-frequency (RF) cavities filled with high-pressure hydrogen gas with a 1% dry air dopant has been studied via numerical simulations. The electromagnetic code SPACE, which resolves relevant atomic physics processes, including ionization by the muon beam, electron attachment to dopant molecules, and electron-ion and ion-ion recombination, has been used. Simulation studies have also been performed in the range of parameters typical for practical muon cooling channels.
Aerodynamic instability: A case history
NASA Technical Reports Server (NTRS)
Eisenmann, R. C.
1985-01-01
The identification, diagnosis, and final correction of complex machinery malfunctions typically require the correlation of many parameters such as mechanical construction, process influence, maintenance history, and vibration response characteristics. The progression of field testing, diagnosis, and final correction of a specific machinery instability problem is reviewed. The case history presented addresses a unique low-frequency instability problem on a high-pressure barrel compressor. The malfunction was eventually diagnosed as a fluidic mechanism that manifested as an aerodynamic disturbance to the rotor assembly.
Topology Synthesis of Structures Using Parameter Relaxation and Geometric Refinement
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.
2007-01-01
Typically, structural topology optimization problems undergo relaxation of certain design parameters to allow the existence of intermediate variable optimum topologies. Relaxation permits the use of a variety of gradient-based search techniques and has been shown to guarantee the existence of optimal solutions and eliminate mesh dependencies. This Technical Publication (TP) will demonstrate the application of relaxation to a control point discretization of the design workspace for the structural topology optimization process. The control point parameterization with subdivision has been offered as an alternative to the traditional method of discretized finite element design domain. The principle of relaxation demonstrates the increased utility of the control point parameterization. One of the significant results of the relaxation process offered in this TP is that direct manufacturability of the optimized design will be maintained without the need for designer intervention or translation. In addition, it will be shown that relaxation of certain parameters may extend the range of problems that can be addressed; e.g., in permitting limited out-of-plane motion to be included in a path generation problem.
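The best-known instance of such relaxation is the SIMP material interpolation, shown here as a generic reference point (the TP's control-point discretization is a different parameterization of the same idea):

```latex
\[
E(\rho) = \rho^{p}\, E_0, \qquad 0 < \rho_{\min} \le \rho \le 1,\; p > 1
\]
```

where the 0/1 material indicator is relaxed to a continuous density ρ, E_0 is the solid-material stiffness, and the penalization exponent p drives intermediate densities toward 0 or 1 while keeping the problem amenable to gradient-based search.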
Remote sensing requirements as suggested by watershed model sensitivity analyses
NASA Technical Reports Server (NTRS)
Salomonson, V. V.; Rango, A.; Ormsby, J. P.; Ambaruch, R.
1975-01-01
A continuous simulation watershed model has been used to perform sensitivity analyses that provide guidance in defining remote sensing requirements for the monitoring of watershed features and processes. The results show that out of 26 input parameters having meaningful effects on simulated runoff, 6 appear to be obtainable with existing remote sensing techniques. Of these six parameters, 3 require the measurement of the areal extent of surface features (impervious areas, water bodies, and the extent of forested area), two require the discrimination of land use that can be related to the overland flow roughness coefficient or to the density of vegetation so as to estimate the magnitude of precipitation interception, and one requires the measurement of distance to obtain the length over which overland flow typically occurs. Observational goals are also suggested for monitoring such fundamental watershed processes as precipitation, soil moisture, and evapotranspiration. A case study on the Patuxent River in Maryland shows that runoff simulation is improved if recent satellite land use observations are used as model inputs as opposed to less timely topographic map information.
Evolutionary algorithm for vehicle driving cycle generation.
Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott
2011-09-01
Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with elitist strategy drives the optimization process, which consists of minimizing the errors to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
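An illustrative sketch of the microtrip-concatenation GA (the library statistics, targets, and operator settings are synthetic; the actual utility works from measured speed traces):

```python
# Genes are indices into a microtrip library; fitness penalizes deviation of
# the concatenated cycle from target parameters (average speed, stops/mile).
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical library: (distance [mi], duration [h]) per microtrip.
library = np.column_stack([rng.uniform(0.1, 2.0, 60), rng.uniform(0.02, 0.15, 60)])
TARGET_SPEED, TARGET_SPM, GENES = 17.0, 2.5, 12      # mph, stops/mile, genes per cycle

def fitness(ind):
    dist, dur = library[ind, 0].sum(), library[ind, 1].sum()
    stops_per_mile = len(ind) / dist                  # one stop per microtrip
    return -abs(dist / dur - TARGET_SPEED) - 5.0 * abs(stops_per_mile - TARGET_SPM)

pop = [rng.integers(0, len(library), GENES) for _ in range(80)]
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    probs = np.exp(scores - scores.max()); probs /= probs.sum()  # roulette-wheel-style weights
    children = [pop[int(scores.argmax())].copy()]                # elitist strategy
    while len(children) < len(pop):
        p1, p2 = (pop[i] for i in rng.choice(len(pop), 2, p=probs))
        cut = rng.integers(1, GENES)                             # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        if rng.random() < 0.2:                                   # mutation: resample one gene
            child[rng.integers(GENES)] = rng.integers(len(library))
        children.append(child)
    pop = children
print("best fitness:", max(fitness(i) for i in pop).round(3))
```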
Report on the study of the tax and rate treatment of renewable energy projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, S.W.; Hill, L.J.; Perlack, R.D.
1993-12-01
This study was conducted in response to the requirements of Section 1205 of the Energy Policy Act of 1992 (EPACT), which states: The Secretary (of Energy), in conjunction with State regulatory commissions, shall undertake a study to determine if conventional taxation and ratemaking procedures result in economic barriers to or incentives for renewable energy power plants compared to conventional power plants. The purpose of the study, therefore, is not to compare the cost-effectiveness of different types of renewable and conventional electric generating plants. Rather, it is to determine the relative impact of conventional ratemaking and taxation procedures on the selection of renewable power plants compared to conventional ones. To make this determination, we quantify the technical and financial parameters of renewable and conventional electric generating technologies, and hold them fixed throughout the study. Then, we vary taxation and ratemaking procedures to determine their effects on the financial criteria that investor-owned electric utilities (IOUs) and nonutility electricity generators (NUGs) use to make technology-adoption decisions. In the planning process of a typical utility, the opposite is usually the case. That is, utilities typically hold ratemaking and taxation procedures constant and look for the least-cost mix of resources, varying the values of engineering and financial parameters of generating plants in the process.
Association between Blood Omega-3 Index and Cognition in Typically Developing Dutch Adolescents
van der Wurff, Inge S. M.; von Schacky, Clemens; Berge, Kjetil; Zeegers, Maurice P.; Kirschner, Paul A.; de Groot, Renate H. M.
2016-01-01
The impact of omega-3 long-chain polyunsaturated fatty acids (LCPUFAs) on cognition is heavily debated. In the current study, the possible association between omega-3 LCPUFAs in blood and the cognitive performance of 266 typically developing adolescents aged 13–15 years is investigated. Baseline data from Food2Learn, a double-blind, randomized, placebo-controlled krill oil supplementation trial in typically developing adolescents, were used for the current study. The Omega-3 Index was determined with blood from a finger prick. At baseline, participants finished a neuropsychological test battery consisting of the Letter Digit Substitution Test (LDST), D2 test of attention, Digit Span Forward and Backward, Concept Shifting Test and Stroop test. Data were analyzed with multiple regression analyses with correction for covariates. The average Omega-3 Index was 3.83% (SD 0.60). Regression analyses between the Omega-3 Index and the outcome parameters revealed significant associations with scores on two of the nine parameters: the score on the LDST (β = 0.136, p = 0.039) and the number of errors of omission on the D2 (β = −0.053, p = 0.007). This is a possible indication of higher information processing speed and less impulsivity in those with a higher Omega-3 Index.
NASA Technical Reports Server (NTRS)
Smith, T. M.; Kloesel, M. F.; Sudbrack, C. K.
2017-01-01
Powder-bed additive manufacturing processes use fine powders to build parts layer by layer. For selective laser melted (SLM) Alloy 718, the powders that are available off-the-shelf are in the 10-45 or 15-45 micron size range. A comprehensive investigation of sixteen powders from these typical ranges and two off-nominal-sized powders is underway to gain insight into the impact of feedstock on the processing, durability and performance of 718 SLM space-flight hardware. This talk emphasizes one aspect of this work: the impact of powder variability on the microstructure and defects observed in the as-fabricated and fully heat-treated material, where lab-scale components were built using vendor-recommended parameters. These typical powders exhibit variation in composition, percentage of fines, roughness, morphology and particle size distribution. How these differences relate to the melt-pool size, porosity, grain structure, precipitate distributions, and inclusion content will be presented and discussed in the context of build quality and powder acceptance.
NASA Astrophysics Data System (ADS)
Cavaliere, P.; Perrone, A.; Silvello, A.
2014-10-01
Cold spray is a coating technology based on aerodynamics and high-speed impact dynamics. In this process, spray particles (usually 1-50 μm in diameter) are accelerated to a high velocity (typically 300-1200 m/s) by a high-speed gas (pre-heated air, nitrogen, or helium) flow that is generated through a convergent-divergent de Laval-type nozzle. A coating is formed through the intensive plastic deformation of particles impacting on a substrate at a temperature below the melting point of the spray material. In the present paper the main processing parameters affecting the microstructural and mechanical behavior of metal-metal cold spray deposits are described. The effect of process parameters on grain refinement and mechanical properties were analyzed for composite particles of Al-Al2O3, Ni-BN, Cu-Al2O3, and Co-SiC. The properties of the formed nanocomposites were compared with those of the parent materials sprayed under the same conditions. The process conditions, leading to a strong grain refinement with an acceptable level of the deposit mechanical properties such as porosity and adhesion strength, are discussed.
NASA Astrophysics Data System (ADS)
Jhirad, D. J.; Mubayi, V.; Weingart, J.
1981-08-01
The technical and economic evidence is reviewed for solar industrial process heat, highlighting the fact that financial parameters such as tax credits and depreciation allowances play a very large role in determining the economic competitiveness of solar investments. An analysis of the energy (and oil) consumed in providing industrial process heat in a number of selected developing countries is presented. Solar industrial process heat technology is discussed, including the operating experience of several demonstration plants in the US. Solar ponds are also described briefly. A financial and economic analysis of solar industrial process heat systems under different assumptions on future oil prices and various financial parameters is given. Financial analyses are summarized for a solar industrial process heat retrofit of a brewery in Zimbabwe and for a high-efficiency system operating under financial conditions typical of the US and a number of other industrialized nations. A set of recommended policy actions for countries wishing to enhance the commercial feasibility of renewable energy technologies in the commercial and industrial sectors is presented.
NASA Astrophysics Data System (ADS)
Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.
2015-12-01
For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and the surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and of surface water and groundwater modeling.
TOPICAL REVIEW: Physics and phenomena in pulsed magnetrons: an overview
NASA Astrophysics Data System (ADS)
Bradley, J. W.; Welzel, T.
2009-05-01
This paper reviews the contribution made to the observation and understanding of the basic physical processes occurring in an important type of magnetized low-pressure plasma discharge, the pulsed magnetron. In industry, these plasma sources are typically operated in reactive mode, where a cathode is sputtered in the presence of both chemically reactive and noble gases, typically with the power modulated in the mid-frequency (5-350 kHz) range. In this review, however, we concentrate mostly on physics-based studies carried out on magnetron systems operated in argon. This simplifies the physical-chemical processes occurring and makes interpretation of the observations somewhat easier. Since their first recorded use in 1993 there have been more than 300 peer-reviewed publications concerned with pulsed magnetrons, dealing wholly or in part with fundamental observations and basic studies. The fundamentals of these plasmas and the relationship between the plasma parameters and thin film quality regularly have whole sessions at international conferences devoted to them; however, since many different types of magnetron geometries have been used worldwide with different operating parameters, the important results are often difficult to tease out. For example, we find the detailed observations of the plasma parameter (particle density and temperature) evolution from experiment to experiment are at best difficult to compare and at worst contradictory. We review in turn five major areas of study addressed in the literature and try to draw out the major results. These areas are: fast electron generation, bulk plasma heating, short- and long-term plasma parameter rise and decay rates, plasma potential modulation, and transient phenomena. The influence of these phenomena on the ion energy and ion energy flux at the substrate is discussed. This review, although not exhaustive, will serve as a useful guide for more in-depth investigations using the referenced literature and also, hopefully, as an inspiration for future studies.
Diabatic models with transferrable parameters for generalized chemical reactions
NASA Astrophysics Data System (ADS)
Reimers, Jeffrey R.; McKemmish, Laura K.; McKenzie, Ross H.; Hush, Noel S.
2017-05-01
Diabatic models applied to adiabatic electron-transfer theory yield many equations involving just a few parameters that connect ground-state geometries and vibration frequencies, excited-state transition energies and vibration frequencies, and the rate constants for electron-transfer reactions, utilizing properties of the conical-intersection seam linking the ground and excited states through the pseudo-Jahn-Teller effect. We review how such simplicity in basic understanding can also be obtained for general chemical reactions. The key feature that must be recognized is that electron-transfer (or hole-transfer) processes typically involve one electron (hole) moving between two orbitals, whereas general reactions typically involve two electrons, or even four electrons for processes in aromatic molecules. Each additional moving electron leads to new high-energy but interrelated conical-intersection seams that distort the shape of the critical lowest-energy seam. Recognizing this feature shows how conical-intersection descriptors can be transferred between systems, and how general chemical reactions can be compared using the same set of simple parameters. Mathematical relationships are presented depicting how different conical-intersection seams relate to each other, showing that complex problems can be reduced into an effective interaction between the ground state and a critical excited state to provide the first semi-quantitative implementation of Shaik's "twin state" concept. Applications are made (i) demonstrating why the chemistry of the first-row elements is qualitatively so different from that of the second and later rows, (ii) deducing the bond-length alternation in hypothetical cyclohexatriene from the observed UV spectroscopy of benzene, (iii) demonstrating that commonly used procedures for modelling surface hopping based on inclusion of only the first-derivative correction to the Born-Oppenheimer approximation are valid in no region of the chemical parameter space, and (iv) demonstrating the types of chemical reactions that may be suitable for exploitation as a chemical qubit in some quantum information processor.
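As a minimal numerical sketch of the kind of diabatic model discussed (two harmonic diabatic states coupled by a constant J; textbook form, not the authors' full treatment), the adiabatic surfaces and the 2|J| gap at the seam follow from diagonalizing the 2x2 diabatic Hamiltonian. All parameter values below are assumed.

```python
import numpy as np

q = np.linspace(-2.0, 2.0, 401)        # reaction coordinate (arbitrary units)
k, dq, dE, J = 1.0, 1.0, 0.2, 0.15     # force constant, displacement, bias, coupling

E1 = 0.5 * k * (q + dq) ** 2           # diabatic state 1 (reactant-like)
E2 = 0.5 * k * (q - dq) ** 2 + dE      # diabatic state 2 (product-like)

# Eigenvalues of [[E1, J], [J, E2]] give the lower/upper adiabatic surfaces
root = np.sqrt(0.25 * (E1 - E2) ** 2 + J ** 2)
lower = 0.5 * (E1 + E2) - root
upper = 0.5 * (E1 + E2) + root

# At the diabatic crossing (E1 == E2) the ground-excited gap is exactly 2|J|
print(f"minimum adiabatic gap: {min(upper - lower):.3f} (expected {2 * J:.3f})")
```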
NASA Astrophysics Data System (ADS)
Chandra, Shubham; Rao, Balkrishna C.
2017-06-01
The process of laser engineered net shaping (LENS™) is an additive manufacturing technique that employs the coaxial flow of metallic powders with a high-power laser to form a melt pool and the subsequent deposition of the specimen on a substrate. Although research done over the past decade on the LENS™ processing of alloys of steel, titanium, nickel and other metallic materials typically reports superior mechanical properties in as-deposited specimens, when compared to the bulk material, there is anisotropy in the mechanical properties of the melt deposit. The current study involves the development of a numerical model of the LENS™ process, using the principles of computational fluid dynamics (CFD), and the subsequent prediction of the volume fraction of equiaxed grains to predict process parameters required for the deposition of workpieces with isotropy in their properties. The numerical simulation is carried out on ANSYS-Fluent, whose data on thermal gradient are used to determine the volume fraction of the equiaxed grains present in the deposited specimen. This study has been validated against earlier efforts on the experimental studies of LENS™ for alloys of nickel. Besides being applicable to the wider family of metals and alloys, the results of this study will also facilitate effective process design to improve both product quality and productivity.
Experimental design of a twin-column countercurrent gradient purification process.
Steinebach, Fabian; Ulmer, Nicole; Decker, Lara; Aumann, Lars; Morbidelli, Massimo
2017-04-07
As is typical for separation processes, single-unit batch chromatography exhibits a trade-off between purity and yield. The twin-column MCSGP (multi-column countercurrent solvent gradient purification) process allows such trade-offs to be alleviated, particularly in the case of difficult separations. In this work an efficient and reliable procedure for the design of the twin-column MCSGP process is developed. It is based on a single batch chromatogram, which is selected as the design chromatogram. The derived MCSGP operation is not intended to provide optimal performance; rather, it provides the target product defined by the selected fraction of the batch chromatogram, but with higher yield. The design procedure is illustrated for the isolation of the main charge isoform of a monoclonal antibody from a Protein A eluate with ion-exchange chromatography. The main charge isoform was obtained at a purity and yield larger than 90%. At the same time, process-related impurities such as HCP and leached Protein A, as well as aggregates, were at least equally well removed. Additionally, the impact of several design parameters on the process performance in terms of purity, yield, productivity and buffer consumption is discussed. The obtained results can be used for further fine-tuning of the process parameters so as to improve performance. Copyright © 2017 Elsevier B.V. All rights reserved.
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects is treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to the fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for the fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum-likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including the use of piecewise-constant zones, the use of pilot points with Tikhonov regularization, and the use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
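Schematically, and only as an assumed linearized illustration of the two-term decomposition described above (notation is mine, not the authors': observations d = Xp, prediction s = y^T p, parameters split into adjustable p_a and fixed p_f, and G the history-matching operator that maps data residuals onto adjustable-parameter updates):

```latex
\[
  \delta s \;=\;
  \underbrace{\mathbf{y}_f^{\mathsf T}\,\delta\mathbf{p}_f}_{\text{direct sensitivity to fixed parameters}}
  \;-\;
  \underbrace{\mathbf{y}_a^{\mathsf T}\,\mathbf{G}\,\mathbf{X}_f\,\delta\mathbf{p}_f}_{\text{compensating adjustment of }\mathbf{p}_a\text{ during history matching}}
\]
```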
Comparisons of anomalous and collisional radial transport with a continuum kinetic edge code
NASA Astrophysics Data System (ADS)
Bodi, K.; Krasheninnikov, S.; Cohen, R.; Rognlien, T.
2009-05-01
Modeling of anomalous (turbulence-driven) radial transport in controlled-fusion plasmas is necessary for long-time transport simulations. Here the focus is on continuum kinetic edge codes such as the (2-D, 2-V) transport version of TEMPEST, NEO, and the code being developed by the Edge Simulation Laboratory, but the model also has wider application. Our previously developed anomalous diagonal transport matrix model with velocity-dependent convection and diffusion coefficients allows contact with typical fluid transport models (e.g., UEDGE). Results are presented that combine the anomalous transport model and collisional transport owing to ion drift orbits, utilizing a Krook collision operator that conserves density and energy. Comparison is made of the relative magnitudes and possible synergistic effects of the two processes for typical tokamak device parameters.
NASA Technical Reports Server (NTRS)
Mcgoogan, J. T.; Leitao, C. D.; Wells, W. T.
1975-01-01
The SKYLAB S-193 altimeter altitude results are presented in a concise format for further use and analysis by the scientific community. The altimeter mission and instrumentation are described, along with the altimeter processing techniques and the values of parameters used for processing. The determination of reference orbits is discussed, and the tracking systems utilized are tabulated. Techniques for determining satellite pointing are presented, and a tabulation of pointing for each data mission is included. The geographical location, the ocean bottom topography, the altimeter-determined ocean surface topography, and the altimeter automatic gain control history are presented. Some typical applications of these data are suggested.
NASA Technical Reports Server (NTRS)
Peabody, Hume L.
2017-01-01
This presentation is meant to be an overview of the model building process. It is based on typical techniques (Monte Carlo ray tracing for radiation exchange; lumped-parameter, finite-difference methods for the thermal solution) used by the aerospace industry. This is not intended to be a "How to Use Thermal Desktop" course; it is intended to be a "How to Build Thermal Models" course, and the techniques will be demonstrated using the capabilities of Thermal Desktop (TD). Other codes may or may not have similar capabilities. The general model building process can be broken into four top-level steps: 1. Build Model; 2. Check Model; 3. Execute Model; 4. Verify Results.
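A minimal sketch of the lumped-parameter, finite-difference thermal solution named above: nodes with heat capacities, linear conductors between them, and explicit time marching. Node values and conductances are illustrative, not from any flight model; real tools (e.g., Thermal Desktop/SINDA) additionally build radiation conductors from Monte Carlo ray tracing.

```python
import numpy as np

# Three-node lumped-parameter network, explicit finite-difference marching
n = 3
C = np.array([500.0, 800.0, 1200.0])   # nodal heat capacities, J/K (assumed)
Q = np.array([10.0, 0.0, 0.0])         # absorbed heat loads, W (assumed)
G = np.zeros((n, n))                   # linear conductors, W/K
G[0, 1] = G[1, 0] = 2.0
G[1, 2] = G[2, 1] = 1.5
T_sink, G_sink = 193.15, 0.8           # node 2 tied to a boundary sink (linearized)

T = np.full(n, 293.15)                 # initial temperatures, K
dt = 1.0                               # s; must respect the explicit stability limit
for _ in range(200_000):
    power = Q.copy()                   # net power into each node, W
    for i in range(n):
        power[i] += G[i] @ (T - T[i])  # conduction from neighbouring nodes
    power[2] += G_sink * (T_sink - T[2])
    T += dt * power / C

print("steady-state temperatures [K]:", np.round(T, 2))  # ~[217.3, 212.3, 205.7]
```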
NASA Astrophysics Data System (ADS)
Liberini, Mariacira; Esposito, Sara; Reshad, Kambitz; Previtali, Barbara; Viola, Marco; Squillace, Antonino
2016-10-01
Every manufacturing process leaves a typical "technology signature" on the surface of the piece. In particular, laser welding leaves a feature at the edge of the weld bead called an "undercut". In this work an experimental campaign was conducted on Ti6Al4V butt joints. In particular, a Central Composite Design (CCD) with the central point repeated three times was investigated. The CCD comprises two factors (power and speed of the fiber laser) and five levels for each factor. This paper investigates the correlation between the severity of the undercut and the process parameters of laser welding. In particular, through confocal microscopy, the original geometry of the joint was accurately acquired and rebuilt in order to build a FEM model and simulate the mechanical behavior using Ansys 14.5. Moreover, response surfaces and level curves were produced to understand and predict the depth and the width of the undercut starting from the power and the speed of the laser. Finally, a mathematical and geometric regression was performed in order to find a unique conic curve that interpolates all the different undercuts and whose parameters vary according to the process parameters. It was established that higher welding speeds minimize the undercut in the joints.
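For reference, the run matrix implied by the design described above (two factors, five coded levels, triple centre point) can be generated as follows. Only coded units are shown, since the physical power/speed ranges are not restated here.

```python
import numpy as np

alpha = np.sqrt(2.0)                       # rotatable axial distance for 2 factors
factorial = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
center = [(0, 0)] * 3                      # centre point repeated three times

design = np.array(factorial + axial + center)
print(design)   # 11 runs x 2 coded factors, five levels: -alpha, -1, 0, +1, +alpha
```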
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry
2017-07-01
Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies across spatial scales due to the nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes these problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local scaling parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve the dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF of Schaap et al. (2001) applied to the SoilGrids1km data set of Hengl et al. (2014). The example data set is provided at a global resolution of 0.25° at https://doi.org/10.1594/PANGAEA.870605.
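For concreteness, the Mualem-van Genuchten WRC/HCC pair that such parameterizations describe is sketched below; the parameter values are illustrative placeholders, not values from ROSETTA or the published data set.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water retention curve theta(h); h is suction head > 0 [cm]."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * h) ** n) ** (-m)          # effective saturation
    return theta_r + (theta_s - theta_r) * Se

def mualem_K(h, Ks, alpha, n, l=0.5):
    """Unsaturated hydraulic conductivity via Mualem's model."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * h) ** n) ** (-m)
    return Ks * Se ** l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

h = np.logspace(0, 4, 5)                            # suctions from 1 to 10^4 cm
print(van_genuchten_theta(h, 0.05, 0.43, 0.04, 1.6))
print(mualem_K(h, 25.0, 0.04, 1.6))                 # Ks in cm/day (assumed)
```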
Rare behavior of growth processes via umbrella sampling of trajectories
NASA Astrophysics Data System (ADS)
Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen
2018-03-01
We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.
In situ roughness measurements for the solar cell industry using an atomic force microscope.
González-Jorge, Higinio; Alvarez-Valado, Victor; Valencia, Jose Luis; Torres, Soledad
2010-01-01
Areal roughness parameters always need to be under control in the thin film solar cell industry because of their close relationship with the electrical efficiency of the cells. In this work, these parameters are evaluated for measurements carried out in a typical fabrication area for this industry. Measurements are made using a portable atomic force microscope on the CNC diamond cutting machine where an initial sample of transparent conductive oxide is cut into four pieces. The method is validated by making a comparison between the parameters obtained in this process and in the laboratory under optimal conditions. Areal roughness parameters and Fourier Spectral Analysis of the data show good compatibility and open the possibility to use this type of measurement instrument to perform in situ quality control. This procedure gives a sample for evaluation without destroying any of the transparent conductive oxide; in this way 100% of the production can be tested, so improving the measurement time and rate of production.
Biomolecular Force Field Parameterization via Atoms-in-Molecule Electron Density Partitioning.
Cole, Daniel J; Vilseck, Jonah Z; Tirado-Rives, Julian; Payne, Mike C; Jorgensen, William L
2016-05-10
Molecular mechanics force fields, which are commonly used in biomolecular modeling and computer-aided drug design, typically treat nonbonded interactions using a limited library of empirical parameters that are developed for small molecules. This approach does not account for polarization in larger molecules or proteins, and the parameterization process is labor-intensive. Using linear-scaling density functional theory and atoms-in-molecule electron density partitioning, environment-specific charges and Lennard-Jones parameters are derived directly from quantum mechanical calculations for use in biomolecular modeling of organic and biomolecular systems. The proposed methods significantly reduce the number of empirical parameters needed to construct molecular mechanics force fields, naturally include polarization effects in the charge and Lennard-Jones parameters, and scale well to systems comprising thousands of atoms, including proteins. The feasibility and benefits of this approach are demonstrated by computing free energies of hydration, properties of pure liquids, and the relative binding free energies of indole and benzofuran to the L99A mutant of T4 lysozyme.
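The quantities being parameterized are the per-atom partial charges and Lennard-Jones terms of the standard nonbonded energy; a self-contained sketch of that functional form (with assumed geometric combining rules and toy parameter values, not values from the paper) is:

```python
import numpy as np

COULOMB = 332.06371  # kcal*angstrom/(mol*e^2), as used in common force fields

def nonbonded_energy(xyz, q, sigma, epsilon):
    """Pairwise LJ 12-6 plus Coulomb energy, kcal/mol (no cutoffs/exclusions)."""
    E = 0.0
    n = len(q)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(xyz[i] - xyz[j])
            sij = np.sqrt(sigma[i] * sigma[j])       # geometric combining rules
            eij = np.sqrt(epsilon[i] * epsilon[j])
            E += 4.0 * eij * ((sij / r) ** 12 - (sij / r) ** 6)
            E += COULOMB * q[i] * q[j] / r
    return E

# Toy two-site example with assumed parameters
xyz = np.array([[0.0, 0.0, 0.0], [3.5, 0.0, 0.0]])
print(nonbonded_energy(xyz, q=[0.4, -0.4], sigma=[3.2, 3.0], epsilon=[0.15, 0.20]))
```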
Microbial ecology of sourdough fermentations: diverse or uniform?
De Vuyst, L; Van Kerrebroeck, S; Harth, H; Huys, G; Daniel, H-M; Weckx, S
2014-02-01
Sourdough is a specific and stressful ecosystem inhabited by yeasts and lactic acid bacteria (LAB), mainly heterofermentative lactobacilli. On the basis of their inocula, three types of sourdough fermentation processes can be distinguished, namely backslopped ones, those initiated with starter cultures, and those initiated with a starter culture followed by backslopping. Typical sourdough LAB species are Lactobacillus fermentum, Lactobacillus paralimentarius, Lactobacillus plantarum, and Lactobacillus sanfranciscensis. Typical sourdough yeast species are Candida humilis, Kazachstania exigua, and Saccharomyces cerevisiae. Whereas region specificity is claimed in the case of artisan backslopped sourdoughs, no clear-cut relationship between a typical sourdough and its associated microbiota can be found, as this is dependent on the sampling, isolation, and identification procedures. Both simple and very complex consortia may occur. Moreover, a series of intrinsic and extrinsic factors may influence the composition of the sourdough microbiota. For instance, both the flour (type, quality status, etc.) and the process parameters (temperature, pH, dough yield, backslopping practices, etc.) exert an influence. In this way, the presence of Lb. sanfranciscensis during sourdough fermentation depends on specific environmental and technological factors. Also, Triticum durum seems to select for obligately heterofermentative LAB species. Finally, there are indications that the sourdough LAB are of intestinal origin. Copyright © 2013 Elsevier Ltd. All rights reserved.
Study on Adaptive Parameter Determination of Cluster Analysis in Urban Management Cases
NASA Astrophysics Data System (ADS)
Fu, J. Y.; Jing, C. F.; Du, M. Y.; Fu, Y. L.; Dai, P. P.
2017-09-01
The fine-grained management of cities is an important way to realize the smart city. Data mining using spatial clustering analysis of urban management cases can be used to evaluate the deployment of urban public facilities, support policy decisions, and provide technical support for the fine-grained management of the city. To address the problem that the density-based DBSCAN clustering algorithm cannot determine its parameters adaptively, this paper proposes an optimization method for adaptive parameter determination based on spatial analysis. First, Ripley's K function is analyzed for the data set to adaptively determine the global parameter Eps, i.e., the maximum aggregation scale is set as the range of data clustering. Then, using a K-D tree, the highest-frequency neighbour count within Eps is calculated for every point object and set as the clustering density, which adaptively determines the global parameter MinPts. The R language was then used to implement the above process and accomplish the precise clustering of typical urban management cases. The experimental results, based on a typical urban management case set from the XiCheng district of Beijing, show that the new DBSCAN clustering algorithm presented in this paper takes full account of the spatial and statistical characteristics of the data, which show an obvious clustering feature, and offers better applicability and high quality. The results of the study are helpful not only for the formulation of urban management policies and the allocation of urban management supervisors in the XiCheng District of Beijing, but also for other cities and related fields.
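A compact sketch of the adaptive-parameter idea, using synthetic points and with the Ripley's-K step reduced to an assumed Eps (a full K-function analysis needs the study-area geometry); the MinPts step follows the K-D-tree frequency rule described above, with a small floor added to keep the toy example sensible.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (100, 2)),    # a dense knot of cases
                 rng.uniform(-1, 1, (100, 2))])   # scattered background cases

eps = 0.15  # in the paper this comes from the maximum aggregation scale of Ripley's K

# K-D tree: neighbour count within Eps for every point; the most frequent
# ("highest frequency") count is taken as the clustering density MinPts.
tree = cKDTree(pts)
counts = np.array([len(tree.query_ball_point(p, eps)) - 1 for p in pts])
min_pts = max(int(np.bincount(counts).argmax()), 4)   # floor of 4 for this toy data

labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(pts)
print(f"MinPts={min_pts}, clusters={labels.max() + 1}, noise points={(labels == -1).sum()}")
```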
Preliminary Results of Cleaning Process for Lubricant Contamination
NASA Astrophysics Data System (ADS)
Eisenmann, D.; Brasche, L.; Lopez, R.
2006-03-01
Fluorescent penetrant inspection (FPI) is widely used for aviation and other components for surface-breaking crack detection. As with all inspection methods, adherence to the process parameters is critical to the successful detection of defects. Prior to FPI, components are cleaned using a variety of cleaning methods which are selected based on the alloy and the soil types which must be removed. It is also important that the cleaning process not adversely affect the FPI process. There are a variety of lubricants and surface coatings used in the aviation industry which must be removed prior to FPI. To assess the effectiveness of typical cleaning processes on removal of these contaminants, a study was initiated at an airline overhaul facility. Initial results of the cleaning study for lubricant contamination in nickel, titanium and aluminum alloys will be presented.
Adaptive MCMC in Bayesian phylogenetics: an application to analyzing partitioned data in BEAST.
Baele, Guy; Lemey, Philippe; Rambaut, Andrew; Suchard, Marc A
2017-06-15
Advances in sequencing technology continue to deliver increasingly large molecular sequence datasets that are often heavily partitioned in order to accurately model the underlying evolutionary processes. In phylogenetic analyses, partitioning strategies involve estimating conditionally independent models of molecular evolution for different genes and different positions within those genes, requiring a large number of evolutionary parameters that have to be estimated, leading to an increased computational burden for such analyses. The past two decades have also seen the rise of multi-core processors, in both the central processing unit (CPU) and graphics processing unit (GPU) markets, enabling massively parallel computations that are not yet fully exploited by many software packages for multipartite analyses. We here propose a Markov chain Monte Carlo (MCMC) approach using an adaptive multivariate transition kernel to estimate in parallel a large number of parameters, split across partitioned data, by exploiting multi-core processing. Across several real-world examples, we demonstrate that our approach enables the estimation of these multipartite parameters more efficiently than standard approaches that typically use a mixture of univariate transition kernels. In one case, when estimating the relative rate parameter of the non-coding partition in a heterochronous dataset, MCMC integration efficiency improves by >14-fold. Our implementation is part of the BEAST code base, a widely used open-source software package for Bayesian phylogenetic inference. guy.baele@kuleuven.be. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
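A generic illustration of an adaptive multivariate random-walk kernel in the spirit described here (Haario-style adaptation of the proposal covariance); this is a sketch under assumed settings, not the BEAST implementation.

```python
import numpy as np

def log_post(x):                       # stand-in target: correlated Gaussian
    cov = np.array([[1.0, 0.9], [0.9, 1.0]])
    return -0.5 * x @ np.linalg.solve(cov, x)

rng = np.random.default_rng(1)
d, n_iter, adapt_start = 2, 20000, 500
x = np.zeros(d)
samples = np.empty((n_iter, d))
prop_cov = np.eye(d) * 0.1             # initial isotropic proposal

for t in range(n_iter):
    if t > adapt_start and t % 100 == 0:
        # adapt the multivariate kernel to the empirical posterior covariance
        prop_cov = 2.38**2 / d * np.cov(samples[:t].T) + 1e-9 * np.eye(d)
    prop = rng.multivariate_normal(x, prop_cov)
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop                       # Metropolis accept
    samples[t] = x

print("posterior covariance estimate:\n", np.cov(samples[5000:].T))
```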
Fabrication of boron sputter targets
Makowiecki, D.M.; McKernan, M.A.
1995-02-28
A process is disclosed for fabricating high-density boron sputtering targets with sufficient mechanical strength to function reliably at typical magnetron sputtering power densities and at normal process parameters. The process involves the fabrication of a high-density boron monolith by hot isostatically compacting high-purity (99.9%) boron powder, machining the boron monolith to the final dimensions, and brazing the finished boron piece to a matching boron carbide (B4C) piece by placing aluminum foil therebetween and applying pressure and heat in a vacuum. An alternative is the application of aluminum metallization to the back of the boron monolith by vacuum deposition. Also, a titanium-based vacuum braze alloy can be used in place of the aluminum foil. 7 figs.
Geothermal reservoir simulation of hot sedimentary aquifer system using FEFLOW®
NASA Astrophysics Data System (ADS)
Nur Hidayat, Hardi; Gala Permana, Maximillian
2017-12-01
The study presents the simulation of a hot sedimentary aquifer for geothermal utilization. A hot sedimentary aquifer (HSA) is a conduction-dominated hydrothermal play type utilizing a deep aquifer that is heated by near-normal heat flow. One example of an HSA is the Bavarian Molasse Basin in southern Germany. This system typically uses doublet wells: an injection and a production well. The simulation was run for 3650 days of simulation time. The technical feasibility and performance are analysed with regard to the energy extracted by this concept. Several parameters are compared to determine the model performance. Parameters such as reservoir characteristics, temperature information and well information are defined. Several assumptions are also made to simplify the simulation process. The main results of the simulation are the heat period budget, or total extracted heat energy, and the heat rate budget, or heat production rate. A qualitative sensitivity analysis is conducted using five parameters, to which lower- and higher-value scenarios are assigned.
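For a constant-rate doublet, the two headline outputs named above reduce to simple energy bookkeeping; the sketch below uses assumed flow and temperature values, not the study's inputs.

```python
# Doublet heat extraction: heat rate budget and heat period budget
rho_w = 1000.0               # water density, kg/m^3
cp = 4200.0                  # specific heat, J/(kg*K)
q = 0.05                     # production flow rate, m^3/s (assumed)
T_prod, T_inj = 85.0, 50.0   # production / reinjection temperatures, degC (assumed)

heat_rate = rho_w * cp * q * (T_prod - T_inj)   # W
seconds = 3650 * 24 * 3600.0                    # the 3650-day simulation time
heat_budget = heat_rate * seconds               # J, assuming a constant rate

print(f"heat rate budget  : {heat_rate / 1e6:.2f} MW_th")
print(f"heat period budget: {heat_budget / 1e15:.2f} PJ")
```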
Scramjet mixing establishment times for a pulse facility
NASA Technical Reports Server (NTRS)
Rogers, R. Clayton; Weidner, Elizabeth H.
1991-01-01
A numerical simulation of the temporally developing flow through a generic scramjet combustor duct is presented for stagnation conditions typical of flight at Mach 13 as produced by a shock tunnel pulse facility. The particular focus is to examine the start-up transients and to determine the time required for certain flow parameters to become established. The calculations were made with the Navier-Stokes solver SPARK, with temporally relaxing inflow conditions derived from operation of the T4 shock tunnel at the University of Queensland in Australia. Calculations at nominal steady inflow conditions were made for comparison. The generic combustor geometry includes the injection of hydrogen fuel from the base of a centrally located strut. In both cases, the flow was assumed laminar and fuel combustion was not included. The establishment process is presented for viscous parameters in the boundary layer and for parameters related to fuel mixing.
NASA Astrophysics Data System (ADS)
Yang, Qi; Deng, Bin; Wang, Hongqiang; Qin, Yuliang
2017-07-01
Rotation is one of the typical micro-motions of radar targets. In many cases, rotation of a target is accompanied by vibrating interference, which significantly affects parameter estimation and imaging, especially in the terahertz band. In this paper, we propose a parameter estimation method and an image reconstruction method based on the inverse Radon transform, time-frequency analysis, and its inverse. The method can separate and estimate the rotating Doppler and the vibrating Doppler simultaneously, and can obtain high-quality reconstructed images after vibration compensation. In addition, a 322-GHz radar system and a 25-GHz commercial radar are introduced, and experiments on rotating corner reflectors are carried out. The results of the simulations and experiments verify the validity of the methods, laying a foundation for practical terahertz radar signal processing.
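A toy illustration of the signal structure involved (not the authors' algorithm): a scatterer rotating at a few hertz with a small radial vibration superimposed produces two overlaid sinusoidal micro-Doppler tracks, which a spectrogram exposes and which, being sinusoidal, are amenable to inverse-Radon-type processing. All numbers below are assumed.

```python
import numpy as np
from scipy.signal import spectrogram

fc = 322e9                      # carrier, Hz (terahertz band)
c = 3e8
fs, T = 20e3, 1.0               # slow-time sample rate and duration
t = np.arange(0, T, 1 / fs)

r_rot, f_rot = 0.05, 4.0        # rotation radius (m) and rate (Hz), assumed
a_vib, f_vib = 1e-4, 15.0       # vibration amplitude (m) and rate (Hz), assumed

# range history -> phase history -> complex slow-time signal
r = r_rot * np.cos(2 * np.pi * f_rot * t) + a_vib * np.sin(2 * np.pi * f_vib * t)
sig = np.exp(-1j * 4 * np.pi * fc / c * r)

f, tt, S = spectrogram(sig, fs=fs, nperseg=256, noverlap=192,
                       return_onesided=False)
print("spectrogram shape:", S.shape)   # overlaid sinusoidal micro-Doppler tracks
```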
NASA Astrophysics Data System (ADS)
Haikal Ahmad, M. A.; Zulafif Rahim, M.; Fauzi, M. F. Mohd; Abdullah, Aslam; Omar, Z.; Ding, Songlin; Ismail, A. E.; Rasidi Ibrahim, M.
2018-01-01
Polycrystalline diamond (PCD) is regarded as being among the hardest materials in the world. Electrical discharge machining (EDM) is typically used to machine this material because of its non-contact nature. This investigation was carried out to compare the EDM performance on PCD of a normal copper (Cu) electrode and a newly proposed graphitization-catalyst electrode of copper nickel (CuNi). A two-level full factorial design of experiments with 4 center points was used to study the main and interaction effects of the machining parameters, namely pulse-on time, pulse-off time, sparking current, and electrode material (categorical factor). The paper reports the interesting finding that the newly proposed electrode has a positive impact on machining performance: with the same finishing parameters, CuNi delivered more than 100% better Ra and MRR than the ordinary Cu electrode.
Focusing the research agenda for simulation training visual system requirements
NASA Astrophysics Data System (ADS)
Lloyd, Charles J.
2014-06-01
Advances in the capabilities of the display-related technologies with potential uses in simulation training devices continue to occur at a rapid pace. Simultaneously, ongoing reductions in defense spending stimulate the services to push a higher proportion of training into ground-based simulators to reduce their operational costs. These two trends result in increased customer expectations and desires for more capable training devices, while the money available for these devices is decreasing. Thus, there exists an increasing need to improve the efficiency of the acquisition process and to increase the probability that users get the training devices they need at the lowest practical cost. In support of this need, the IDEAS program was initiated in 2010 with the goal of improving display system requirements associated with unmet user needs and expectations and disrupted acquisitions. This paper describes a process for identifying, rating, and selecting the design parameters that should receive research attention. Analyses of existing requirements documents reveal that between 40 and 50 specific design parameters (e.g., resolution, contrast, luminance, field of view, frame rate) are typically called out for the acquisition of a simulation training display system. Obviously no research effort can address the effects of this many parameters. Thus, we developed a defensible strategy for focusing limited R&D resources on a fraction of these parameters. This strategy encompasses six criteria to identify the parameters most worthy of research attention. Examples based on display design parameters recommended by stakeholders are provided.
Runkel, Robert L.
1998-01-01
OTIS is a mathematical simulation model used to characterize the fate and transport of water-borne solutes in streams and rivers. The governing equation underlying the model is the advection-dispersion equation with additional terms to account for transient storage, lateral inflow, first-order decay, and sorption. This equation and the associated equations describing transient storage and sorption are solved using a Crank-Nicolson finite-difference solution. OTIS may be used in conjunction with data from field-scale tracer experiments to quantify the hydrologic parameters affecting solute transport. This application typically involves a trial-and-error approach wherein parameter estimates are adjusted to obtain an acceptable match between simulated and observed tracer concentrations. Additional applications include analyses of nonconservative solutes that are subject to sorption processes or first-order decay. OTIS-P, a modified version of OTIS, couples the solution of the governing equation with a nonlinear regression package. OTIS-P determines an optimal set of parameter estimates that minimize the squared differences between the simulated and observed concentrations, thereby automating the parameter estimation process. This report details the development and application of OTIS and OTIS-P. Sections of the report describe model theory, input/output specifications, sample applications, and installation instructions.
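For reference, the governing equations solved by OTIS take the following standard main-channel/storage-zone form (written here with the usual OTIS symbols: Q discharge, A and A_S channel and storage-zone cross-sectional areas, D dispersion, q_L and C_L lateral inflow rate and concentration, alpha the storage exchange coefficient, and lambda first-order decay rates):

```latex
\begin{align}
  \frac{\partial C}{\partial t} &= -\frac{Q}{A}\frac{\partial C}{\partial x}
    + \frac{1}{A}\frac{\partial}{\partial x}\!\left(A D \frac{\partial C}{\partial x}\right)
    + \frac{q_L}{A}\left(C_L - C\right) + \alpha\left(C_S - C\right) - \lambda C \\
  \frac{d C_S}{d t} &= \alpha \frac{A}{A_S}\left(C - C_S\right) - \lambda_S C_S
\end{align}
```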
NASA Astrophysics Data System (ADS)
Yang, Jiefan; Lei, Hengchi
2016-02-01
Cloud microphysical properties of a mixed-phase cloud generated by a typical extratropical cyclone in the Tongliao area, Inner Mongolia, on 3 May 2014 are analyzed, primarily using in situ flight observation data. This study focuses on ice crystal concentration, supercooled cloud water content, and the vertical distributions of the fit parameters of snow particle size distributions (PSDs). The results showed several differences between the microphysical properties obtained during the two penetrations. During the penetration of precipitating cloud, the maximum ice particle concentration, liquid water content, and ice water content were larger by a factor of 2-3 than their counterparts obtained during the penetration of non-precipitating cloud. The heavily rimed and irregular ice crystals recorded by the 2D imaging probe, as well as the vertical distributions of the fit parameters within precipitating cloud, show that the ice particles grow during falling via riming and aggregation, whereas the lightly rimed and pristine ice particles, as well as the fit parameters within non-precipitating cloud, indicate that the sublimation process dominated. During the two cloud penetrations, the PSDs were generally better represented by gamma distributions than by the exponential form in terms of the coefficient of determination (R2). The correlations between the parameters of the exponential/gamma forms within the two penetrations showed no obvious differences compared with previous studies.
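The exponential/gamma comparison reported here amounts to fitting N(D) = N0 exp(-lambda D) versus N(D) = N0 D^mu exp(-lambda D) and comparing R2; a sketch with a synthetic PSD (all sample values assumed) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def expo(D, N0, lam):
    return N0 * np.exp(-lam * D)

def gamma_psd(D, N0, mu, lam):
    return N0 * D**mu * np.exp(-lam * D)

D = np.linspace(0.1, 8.0, 40)                         # diameter, mm
rng = np.random.default_rng(2)
N_obs = gamma_psd(D, 1e3, 1.5, 1.2) * rng.lognormal(0, 0.15, D.size)

p_exp, _ = curve_fit(expo, D, N_obs, p0=[1e3, 1.0])
p_gam, _ = curve_fit(gamma_psd, D, N_obs, p0=[1e3, 1.0, 1.0], maxfev=10000)

def r2(y, yhat):                                      # coefficient of determination
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

print("R2 exponential:", r2(N_obs, expo(D, *p_exp)))
print("R2 gamma      :", r2(N_obs, gamma_psd(D, *p_gam)))
```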
Agarabi, Cyrus D; Schiel, John E; Lute, Scott C; Chavez, Brittany K; Boyne, Michael T; Brorson, Kurt A; Khan, Mansoora; Read, Erik K
2015-06-01
Consistent high-quality antibody yield is a key goal for cell culture bioprocessing. This endpoint is typically achieved in commercial settings through product and process engineering of bioreactor parameters during development. When the process is complex and not optimized, small changes in composition and control may yield a finished product of less desirable quality. Therefore, changes proposed to currently validated processes usually require justification and are reported to the US FDA for approval. Recently, design-of-experiments-based approaches have been explored to rapidly and efficiently achieve this goal of optimized yield with a better understanding of product and process variables that affect a product's critical quality attributes. Here, we present a laboratory-scale model culture where we apply a Plackett-Burman screening design to parallel cultures to study the main effects of 11 process variables. This exercise allowed us to determine the relative importance of these variables and identify the most important factors to be further optimized in order to control both desirable and undesirable glycan profiles. We found engineering changes relating to culture temperature and nonessential amino acid supplementation significantly impacted glycan profiles associated with fucosylation, β-galactosylation, and sialylation. All of these are important for monoclonal antibody product quality. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
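For reference, the 12-run Plackett-Burman matrix for 11 two-level factors used in screening studies of this kind can be built from the classical generator row by cyclic shifts plus a final all-minus run (standard construction; run order and factor assignment here are illustrative):

```python
import numpy as np

gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])  # classical 1946 generator
rows = [np.roll(gen, i) for i in range(11)]
design = np.vstack(rows + [-np.ones(11, dtype=int)])

print(design.shape)   # (12, 11): 12 runs screening 11 factors
# Orthogonality check: the cross-product matrix should equal 12 * I
print(np.abs(design.T @ design - 12 * np.eye(11)).max())  # -> 0.0
```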
Brockman, R. A.; Kramer, D. P.; Barklay, C. D.; ...
2011-10-01
Recent deep space missions utilize the thermal output of the radioisotope plutonium-238 as the fuel in the thermal-to-electrical power system. Since the application of plutonium in its elemental state has several disadvantages, the fuel employed in these deep space power systems is typically in the oxide form, such as plutonium-238 dioxide (238PuO2). As an oxide, the processing of the plutonium dioxide into fuel pellets is performed via "classical" ceramic processing unit operations such as sieving of the powder, pressing, sintering, etc. Modeling of these unit operations can be beneficial in the understanding and control of processing parameters, with the goal of further enhancing the desired characteristics of the 238PuO2 fuel pellets. A finite element model has been used to help identify the time-temperature-stress profile within a pellet during a furnace operation, taking into account that 238PuO2 itself has a significant thermal output. The results of the modeling efforts will be discussed.
Chroni, Christina; Kyriacou, Adamadini; Manios, Thrassyvoulos; Lasaridi, Konstantia-Ekaterini
2009-08-01
In a bid to identify suitable microbial indicators of compost stability, the process evolution during windrow composting of poultry manure (PM), green waste (GW) and biowaste was studied. Treatments were monitored with regard to abiotic factors, respiration activity (determined using the SOUR test) and functional microflora. The composting process went through typical changes in temperature, moisture content and microbial properties, despite the inherent feedstock differences. Nitrobacter and pathogen indicators varied as a monotonic function of processing time. Some microbial groups showed potential to serve as fingerprints of the different process stages, but they should still be examined in context with respirometric tests and abiotic parameters. Respiration activity reflected the process stage well, verifying the value of respirometric tests to assess compost stability. SOUR values below 1 mg O(2)/g VS/h were achieved for the PM and the GW composts.
Substituted amylose matrices for oral drug delivery
NASA Astrophysics Data System (ADS)
Moghadam, S. H.; Wang, H. W.; Saddar El-Leithy, E.; Chebli, C.; Cartilier, L.
2007-03-01
High-amylose corn starch was used to obtain substituted amylose (SA) polymers by chemically modifying hydroxyl groups through an etherification process using 1,2-epoxypropanol. Tablets for controlled drug release were prepared by direct compression and their release properties assessed by an in vitro dissolution test (USP XXIII no. 2). Polymer swelling was characterized by measuring gravimetrically the water uptake ability of polymer tablets. SA hydrophilic matrix tablets present sequentially a burst effect, typical of hydrophilic matrices, and a near-constant release, typical of reservoir systems. After the burst effect, surface pores disappear progressively through molecular association of amylose chains; this allows the creation of a polymer layer acting as a diffusion barrier and explains the peculiar behaviour of SA polymers. Several formulation parameters, such as compression force, drug loading, tablet weight and insoluble diluent concentration, were investigated. On the other hand, tablet thickness, scanning electron microscope analysis and mercury intrusion porosimetry showed that the high crushing strength values observed for SA tablets were due to an unusual melting process occurring during tabletting, although the tablet external layer went only through densification, deformation and partial melting. In contrast, HPMC tablets did not show any traces of a melting process.
Changes in the microbial communities during co-composting of digestates
Franke-Whittle, Ingrid H.; Confalonieri, Alberto; Insam, Heribert; Schlegelmilch, Mirko; Körner, Ina
2014-01-01
Anaerobic digestion is a waste treatment method of increasing interest worldwide. At the end of the process, a digestate remains, which can gain added value by being composted. A study was conducted to investigate microbial community dynamics during the composting of a mixture of anaerobic digestate (derived from the anaerobic digestion of municipal food waste), green wastes and a screened compost (green waste/kitchen waste compost), using the COMPOCHIP microarray. The composting process showed a typical temperature development, and the highest degradation rates occurred during the first 14 days of composting, as seen from the elevated CO2 content in the exhaust air. With the exception of elevated nitrite and nitrate levels in the day-34 samples, physical-chemical parameters for all compost samples collected during the 63-day process indicated typical composting conditions. The microbial communities changed over the 63 days of composting. According to principal component analysis of the COMPOCHIP microarray results, compost samples from the start of the experiment clustered most closely with the digestate and screened compost samples, whereas the green waste samples grouped separately. All starting materials investigated yielded fewer and lower signals compared with the samples collected during the composting experiment.
NASA Astrophysics Data System (ADS)
Zhang, Haoyang; Fang, Fengzhou; Gilchrist, Michael D.; Zhang, Nan
2018-07-01
Micro injection moulding has been demonstrated as one of the most efficient mass production technologies for manufacturing polymeric microfluidic devices, which are widely used in the life sciences, environmental and analytical fields, and agro-food industries. However, the filling of the micro features of typical microfluidic devices is complicated and not yet fully understood, which consequently restricts chip development. In the present work, a microfluidic flow cytometer chip with essential high-aspect-ratio micro features was used as a typical model to study the filling process. Short-shot experiments and single-factor experiments were performed to examine the filling progress of such features during the injection and packing stages of the micro injection moulding process. The influence of process parameters such as shot size, packing pressure, packing time and mould temperature was systematically monitored, characterised and correlated with 3D measurements and the real response of the machine, such as screw velocity and screw position. A combined melt flow and creep deformation model was proposed to explain the complex influence of the process on replication. An over-shot micro injection moulding approach was proposed and shown to be effective at improving the replication quality of high-aspect-ratio micro features.
Guo, Chaohua; Wei, Mingzhen; Liu, Hong
2018-01-01
Development of unconventional shale gas reservoirs (SGRs) has been boosted by advancements in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process, including desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted for multi-stage hydraulically fractured SGRs. Most studies use well-testing methods, which involve many unrealistic simplifications and assumptions; no systematic work has considered all reasonable transport mechanisms; and there are very few sensitivity studies of uncertain parameters using realistic parameter ranges. Hence, a detailed and systematic reservoir simulation study with MsFHW is still necessary. In this paper, a dual-porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified against available field data from the Barnett Shale. The following mechanisms were considered in this model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. Sensitivity analysis of SGR production performance with MsFHW was conducted. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of the matrix parameters were reviewed. Through comparison, it was found that hydraulic fracture parameters are more sensitive than reservoir parameters: reservoir parameters mainly affect the later production period, whereas hydraulic fracture parameters have a significant effect on gas production from the early period. The results of this study can be used to improve the efficiency of the history matching process, and can contribute to the design and optimization of hydraulic fracture treatments in unconventional SGRs.
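The desorption term mentioned above follows the Langmuir isotherm; a minimal sketch with assumed Barnett-like constants (not the paper's calibrated values) is:

```python
def langmuir_volume(p, V_L=90.0, p_L=4.5):
    """Adsorbed gas content V(p) = V_L * p / (p_L + p).
    p and p_L in MPa, V_L in scf/ton (assumed units and values)."""
    return V_L * p / (p_L + p)

# Gas desorbed as reservoir pressure drops from initial to abandonment pressure
p_i, p_ab = 20.0, 3.0
released = langmuir_volume(p_i) - langmuir_volume(p_ab)
print(f"desorbed gas: {released:.1f} scf/ton")
```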
Fast automated analysis of strong gravitational lenses with convolutional neural networks.
Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J
2017-08-30
Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
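A minimal sketch of the regression setup described (a small convolutional network mapping a lens cutout to five SIE parameters); the architecture, input size, and parameter count are illustrative, not those of the paper.

```python
import torch
import torch.nn as nn

class LensNet(nn.Module):
    """Toy CNN regressing SIE parameters (e.g., Einstein radius, two
    ellipticity components, centroid) from a single-band lens image."""
    def __init__(self, n_params=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(64 * 4 * 4, n_params)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = LensNet()
batch = torch.randn(8, 1, 96, 96)   # 8 fake single-band cutouts
pred = model(batch)                 # -> (8, 5) parameter estimates
print(pred.shape)
# Training would minimize, e.g., nn.MSELoss() against simulated true parameters.
```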
Yonai, Shunsuke; Matsufuji, Naruhiro; Akahane, Keiichi
2018-04-23
The aim of this work was to estimate typical dose equivalents to out-of-field organs during carbon-ion radiotherapy (CIRT) with a passive beam for prostate cancer treatment. Additionally, sensitivity analyses of organ doses for various beam parameters and phantom sizes were performed. Because the CIRT out-of-field dose depends on the beam parameters, the typical values of those parameters were determined from statistical data on the target properties of patients who received CIRT at the Heavy-Ion Medical Accelerator in Chiba (HIMAC). Using these typical beam-parameter values, out-of-field organ dose equivalents during CIRT for typical prostate treatment were estimated by Monte Carlo simulations using the Particle and Heavy-Ion Transport Code System (PHITS) and the ICRP reference phantom. The results showed that the dose decreased with distance from the target, ranging from 116 mSv in the testes to 7 mSv in the brain. The organ dose equivalents per treatment dose were lower than those either in 6-MV intensity-modulated radiotherapy or in brachytherapy with an Ir-192 source for organs within 40 cm of the target. Sensitivity analyses established that the differences from typical values were within ∼30% for all organs, except the sigmoid colon. The typical out-of-field organ dose equivalents during passive-beam CIRT were shown. The low sensitivity of the dose equivalent in organs farther than 20 cm from the target indicated that individual dose assessments required for retrospective epidemiological studies may be limited to organs around the target in cases of passive-beam CIRT for prostate cancer. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal values of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
The Evolution of Dendrite Morphology during Isothermal Coarsening
NASA Technical Reports Server (NTRS)
Alkemper, Jens; Mendoza, Roberto; Kammer, Dimitris; Voorhees, Peter W.
2003-01-01
Dendrite coarsening is a common phenomenon in casting processes. From the time dendrites are formed until the inter-dendritic liquid is completely solidified, dendrites change shape, driven by variations in interfacial curvature along the dendrite and resulting in a reduction of the total interfacial area. During this process the typical length scale of the dendrite can change by orders of magnitude, and the final microstructure is in large part determined by the coarsening parameters. Dendrite coarsening is thus crucial in setting the material parameters of ingots and of great commercial interest. This coarsening process is being studied in the Pb-Sn system, with Sn dendrites undergoing isothermal coarsening in a Pb-Sn liquid. Results are presented for samples of approximately 60% dendritic phase that have been coarsened for different lengths of time. Presented are three-dimensional microstructures obtained by serial sectioning and an analysis of these microstructures with regard to interface orientation and interfacial curvature. These analyses reflect the evolution not only of the microstructure itself, but also of the underlying driving forces of the coarsening process. As a visualization of the link between the microstructure and the driving forces, a three-dimensional microstructure with the interfaces colored according to the local interfacial mean curvature is shown.
TVA-based assessment of visual attentional functions in developmental dyslexia
Bogon, Johanna; Finke, Kathrin; Stenneken, Prisca
2014-01-01
There is an ongoing debate whether an impairment of visual attentional functions constitutes an additional or even an isolated deficit of developmental dyslexia (DD). Especially performance in tasks that require the processing of multiple visual elements in parallel has been reported to be impaired in DD. We review studies that used parameter-based assessment for identifying and quantifying the impaired aspect(s) of visual attention that underlie this multi-element processing deficit in DD. These studies used the mathematical framework provided by the “theory of visual attention” (Bundesen, 1990) to derive quantitative measures of general attentional resources and attentional weighting aspects on the basis of behavioral performance in whole- and partial-report tasks. Based on parameter estimates in children and adults with DD, the reviewed studies support slowed perceptual processing speed as an underlying primary deficit in DD. Moreover, a reduction in visual short-term memory storage capacity seems to present a modulating component, contributing to difficulties in written language processing. Furthermore, comparing the spatial distributions of attentional weights in children and adults suggests that having limited reading and writing skills might impair the development of the slight leftward bias that is typical of unimpaired adult readers. PMID:25360129
Schmidt, James R; De Houwer, Jan; Rothermund, Klaus
2016-12-01
The current paper presents an extension of the Parallel Episodic Processing model. The model is developed for simulating behaviour in performance (i.e., speeded response time) tasks and learns to anticipate both how and when to respond based on retrieval of memories of previous trials. With one fixed parameter set, the model is shown to successfully simulate a wide range of different findings. These include: practice curves in the Stroop paradigm, contingency learning effects, learning acquisition curves, stimulus-response binding effects, mixing costs, and various findings from the attentional control domain. The results demonstrate several important points. First, the same retrieval mechanism parsimoniously explains stimulus-response binding, contingency learning, and practice effects. Second, as performance improves with practice, any effects will shrink with it. Third, a model of simple learning processes is sufficient to explain phenomena that are typically (but perhaps incorrectly) interpreted in terms of higher-order control processes. More generally, we argue that computational models with a fixed parameter set and wider breadth should be preferred over those that are restricted to a narrow set of phenomena. Copyright © 2016 Elsevier Inc. All rights reserved.
Comparing a discrete and continuum model of the intestinal crypt
Murray, Philip J.; Walter, Alex; Fletcher, Alex G.; Edwards, Carina M.; Tindall, Marcus J.; Maini, Philip K.
2011-01-01
The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalisations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts. PMID:21411869
High Performance Materials Applications to Moon/Mars Missions and Bases
NASA Technical Reports Server (NTRS)
Noever, David A.; Smith, David D.; Sibille, Laurent; Brown, Scott C.; Cronise, Raymond J.; Lehoczky, Sandor L.
1998-01-01
Two classes of material processing scenarios will feature prominently in future interplanetary exploration: in situ production using locally available materials in lunar or planetary landings and high performance structural materials which carve out a set of properties for uniquely hostile space environments. To be competitive, high performance materials must typically offer orders of magnitude improvements in thermal conductivity or insulation, deliver high strength-to-weight ratios, or provide superior durability (low corrosion and/or ablative character, e.g., in heat shields). The space-related environmental parameters of high radiation flux, low weight, and superior reliability limits many typical aerospace materials to a short list comprising high performance alloys, nanocomposites and thin-layer metal laminates (Al-Cu, Al-Ag) with typical dimensions less than the Frank-Reed-type dislocation source. Extremely light weight carbon-carbon composites and carbon aerogels will be presented as novel examples which define broadened material parameters, particularly owing to their extreme thermal insulation (R-32-64) and low densities (<0.01 g/cu cm) approaching that of air itself. Even with these low-weight payload additions, rocket thrust limits and transport costs will always place a premium on assembling as much structural and life support resources upon interplanetary, lunar, or asteroid arrival. As an example, for in situ lunar glass manufacture, solar furnaces reaching 1700 C for pure silica glass manufacture in situ are compared with sol-gel technology and acid-leached ultrapure (<0.1% FeO) silica aerogel precursors.
Wei, Fanan; Yang, Haitao; Liu, Lianqing; Li, Guangyong
2017-03-01
Dynamic mechanical behaviour of living cells has been described by viscoelasticity. However, quantitation of the viscoelastic parameters of living cells is far from mature. In this paper, combining inverse finite element (FE) simulation with Atomic Force Microscope characterization, we attempt to develop a new method to evaluate and acquire trustworthy viscoelastic indices of living cells. First, the influence of the experimental parameters on the stress relaxation process is assessed using FE simulation. As suggested by the simulations, cell height has negligible impact on the shape of the force-time curve, i.e. the characteristic relaxation time, and the effect originating from the substrate can be totally eliminated when a stiff substrate (Young's modulus larger than 3 GPa) is used. Then, in order to develop an effective optimization strategy for the inverse FE simulation, a parameter sensitivity evaluation is performed for Young's modulus, Poisson's ratio, and characteristic relaxation time. With the experimental data obtained through a typical stress relaxation measurement, viscoelastic parameters are extracted through the inverse FE simulation by comparing the simulation results and experimental measurements. Finally, the reliability of the acquired mechanical parameters is verified with different load experiments performed on the same cell.
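The inverse-FE extraction itself is beyond a short example, but the core fitting idea can be sketched by assuming a single-exponential stress-relaxation response F(t) = F_inf + (F0 - F_inf) exp(-t/tau); all names and values below are hypothetical, not the paper's procedure.

import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, F0, F_inf, tau):
    return F_inf + (F0 - F_inf) * np.exp(-t / tau)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)                                       # s
force = relaxation(t, 4.0, 1.5, 2.0) + rng.normal(0.0, 0.05, t.size)  # nN, synthetic

p, cov = curve_fit(relaxation, t, force, p0=[3.0, 1.0, 1.0])
F0, F_inf, tau = p
print(f"characteristic relaxation time tau = {tau:.2f} s")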
Holistic versus monomeric strategies for hydrological modelling of human-modified hydrosystems
NASA Astrophysics Data System (ADS)
Nalbantis, I.; Efstratiadis, A.; Rozos, E.; Kopsiafti, M.; Koutsoyiannis, D.
2011-03-01
The modelling of human-modified basins that are inadequately measured constitutes a challenge for hydrological science. Often, models for such systems are detailed and hydraulics-based for only one part of the system, while for other parts oversimplified models or rough assumptions are used. This is typically a bottom-up approach, which seeks to exploit knowledge of hydrological processes at the micro-scale for some components of the system. It is also a monomeric approach in two ways: first, essential interactions among system components may be poorly represented or even omitted; second, differences in the level of detail of process representation can lead to uncontrolled errors. Additionally, the calibration procedure merely accounts for the reproduction of the observed responses using typical fitting criteria. The paper aims to raise some critical issues regarding the entire modelling approach for such hydrosystems. For this, two alternative modelling strategies are examined that reflect two modelling approaches or philosophies: a dominant bottom-up approach, which is also monomeric and, very often, based on output information, and a top-down and holistic approach based on generalized information. Critical options are examined, which codify the differences between the two strategies: the representation of surface, groundwater and water management processes, the schematization and parameterization concepts, and the parameter estimation methodology. The first strategy is based on stand-alone models for surface and groundwater processes and for water management, which are employed sequentially. For each model, a different (detailed or coarse) parameterization is used, which is dictated by the hydrosystem schematization. The second strategy involves model integration for all processes, parsimonious parameterization and hybrid manual-automatic parameter optimization based on multiple objectives. A test case is examined in a hydrosystem in Greece with high complexities, such as extended surface-groundwater interactions, ill-defined boundaries, sinks to the sea and anthropogenic intervention with unmeasured abstractions both from surface water and aquifers. Criteria for comparison are the physical consistency of parameters, the reproduction of runoff hydrographs at multiple sites within the studied basin, the likelihood of uncontrolled model outputs, the required amount of computational effort, and the performance within a stochastic simulation setting. Our work allows for investigating the deterioration of model performance in cases where no balanced attention is paid to all components of human-modified hydrosystems and the related information. Also, sources of errors are identified and their combined effect is evaluated.
Parameter Estimation and Model Selection in Computational Biology
Lillacci, Gabriele; Khammash, Mustafa
2010-01-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter estimation for biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
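A hedged sketch of joint state-parameter estimation with an extended Kalman filter, on a toy logistic-growth model rather than the paper's heat shock or gene regulation systems: the unknown rate r is appended to the state as a random-walk parameter, and all noise levels are assumptions.

import numpy as np

dt, K = 0.1, 10.0
Q = np.diag([1e-4, 1e-6])          # process noise: state, parameter random walk
R = 0.05 ** 2                      # measurement noise variance
H = np.array([[1.0, 0.0]])         # only the state x is observed

def f(z):                          # augmented dynamics, z = [x, r]
    x, r = z
    return np.array([x + dt * r * x * (1.0 - x / K), r])

def F_jac(z):                      # Jacobian of f at z
    x, r = z
    return np.array([[1.0 + dt * r * (1.0 - 2.0 * x / K), dt * x * (1.0 - x / K)],
                     [0.0, 1.0]])

rng = np.random.default_rng(2)
truth = np.array([0.5, 0.8])       # true initial state and growth rate
ys = []
for _ in range(200):               # generate synthetic noisy observations
    truth = f(truth)
    ys.append(truth[0] + rng.normal(0.0, np.sqrt(R)))

z = np.array([0.5, 0.3])           # initial guess with a wrong rate
P = np.diag([0.1, 0.5])
for y in ys:
    z, P = f(z), F_jac(z) @ P @ F_jac(z).T + Q       # predict step
    S = (H @ P @ H.T)[0, 0] + R
    Kg = P @ H.T / S                                 # Kalman gain, shape (2, 1)
    z = z + Kg[:, 0] * (y - z[0])                    # update state and parameter
    P = (np.eye(2) - Kg @ H) @ P

print("estimated growth rate r:", round(z[1], 3))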
Virtual Plant Tissue: Building Blocks for Next-Generation Plant Growth Simulation
De Vos, Dirk; Dzhurakhalov, Abdiravuf; Stijven, Sean; Klosiewicz, Przemyslaw; Beemster, Gerrit T. S.; Broeckhove, Jan
2017-01-01
Motivation: Computational modeling of plant developmental processes is becoming increasingly important. Cellular-resolution plant tissue simulators have been developed, yet they typically describe physiological processes in an isolated way, strongly delimited in space and time. Results: With plant systems biology moving toward an integrative perspective on development, we have built the Virtual Plant Tissue (VPTissue) package to couple functional modules or models in the same framework and across different frameworks. Multiple levels of model integration and coordination enable combining existing and new models from different sources, with diverse options in terms of input/output. Besides the core simulator, the toolset also comprises a tissue editor for manipulating tissue geometry and cell, wall, and node attributes in an interactive manner. A parameter exploration tool is available to study the parameter dependence of simulation results by distributing calculations over multiple systems. Availability: Virtual Plant Tissue is available as open source (EUPL license) on Bitbucket (https://bitbucket.org/vptissue/vptissue). The project has a website at https://vptissue.bitbucket.io. PMID:28523006
NASA Astrophysics Data System (ADS)
Yi, Guodong; Li, Jin
2018-03-01
The master cylinder hydraulic system is the core component of the fineblanking press that seriously affects the machine performance. A key issue in the design of the master cylinder hydraulic system is dealing with the heavy shock loads in the fineblanking process. In this paper, an equivalent model of the master cylinder hydraulic system is established based on typical process parameters for practical fineblanking; then, the response characteristics of the master cylinder slider to the step changes in the load and control current are analyzed, and lastly, control strategies for the proportional valve are studied based on the impact of the control parameters on the kinetic stability of the slider. The results show that the kinetic stability of the slider is significantly affected by the step change of the control current, while it is slightly affected by the step change of the system load, which can be improved by adjusting the flow rate and opening time of the proportional valve.
Risk status for dropping out of developmental followup for very low birth weight infants.
Catlett, A T; Thompson, R J; Johndrow, D A; Boshkoff, M R
1993-01-01
Not keeping scheduled visits for medical care is a major health care issue. Little research has addressed how the interaction of demographic and biomedical parameters with psychosocial processes has an impact on appointment keeping. Typical factors are stress of daily living, methods of coping, social support, and instrumental support (that is, tangible assistance). In this study, the authors examine the role of these parameters and processes in the risk status for dropping out of a developmental followup program for very low birth weight infants. The findings suggest that the stress of daily living is a significant predictor for the mother's return when the infant is 6 months of age (corrected for prematurity). The predictors for return at 24 months corrected age include marital status, race, gestational age of the infant, maternal intelligence, and efficacy expectations. Providing transportation was found to be a successful intervention strategy for a subgroup at very high risk for dropping out due to a constellation of biomedical, demographic, and psychosocial factors. PMID:8210257
Numerical and Experimental Investigations of Humping Phenomena in Laser Micro Welding
NASA Astrophysics Data System (ADS)
Otto, Andreas; Patschger, Andreas; Seiler, Michael
The humping effect is a phenomenon that has been observed for approximately 50 years in various welding procedures and is characterized by droplets formed by a pile-up of the melt pool. It occurs within a broad range of process parameters. Particularly during micro welding, the humping effect is critical due to the typically high feed rates. In the past, essentially two approaches (the fluid-dynamic approach of streaming melt within the molten pool and the Plateau-Rayleigh instability of a liquid jet) were discussed in order to explain the occurrence of the humping effect, but neither can fully explain all observed effects. For this reason, experimental studies in micro welding of thin metal foils were performed in order to determine the influence of process parameters on the occurrence of humping effects. The experimental observations were compared with results from numerical multi-physical simulations (incorporating beam propagation, incoupling, heat transfer, fluid dynamics, etc.) to provide a deeper understanding of the causes of hump formation.
Oscillatory multiphase flow strategy for chemistry and biology.
Abolhasani, Milad; Jensen, Klavs F
2016-07-19
Continuous multiphase flow strategies are commonly employed for high-throughput parameter screening of physical, chemical, and biological processes as well as continuous preparation of a wide range of fine chemicals and micro/nano particles with processing times up to 10 min. The inter-dependency of mixing and residence times, and their direct correlation with reactor length have limited the adaptation of multiphase flow strategies for studies of processes with relatively long processing times (0.5-24 h). In this frontier article, we describe an oscillatory multiphase flow strategy to decouple mixing and residence times and enable investigation of longer timescale experiments than typically feasible with conventional continuous multiphase flow approaches. We review current oscillatory multiphase flow technologies, provide an overview of the advancements of this relatively new strategy in chemistry and biology, and close with a perspective on future opportunities.
Influence of season and type of restaurants on sashimi microbiota.
Miguéis, S; Moura, A T; Saraiva, C; Esteves, A
2016-10-01
In recent years, an increase in the consumption of Japanese food in European countries, including Portugal, has been observed. These specialities made with raw fish, typical Japanese meals, have been prepared in both typical and non-typical restaurants, and represent a challenge to risk analysis in HACCP plans. The aim of this study was to evaluate the influence of the type of restaurant, season and type of fish used on sashimi microbiota. Sashimi samples (n = 114) were collected directly from 23 sushi restaurants and were classified as winter and summer samples. They were also categorized according to the type of restaurant where they were obtained: typical or non-typical. The samples were processed using international standard procedures. A moderate seasonal influence on the microbiota was observed in counts of mesophilic aerobic bacteria, psychrotrophic microorganisms, lactic acid bacteria, Pseudomonas spp., H2S-positive bacteria, moulds and Bacillus cereus. During the summer season, samples classified as unacceptable or potentially hazardous were observed. Non-typical restaurants had the most cases of unacceptable/potentially hazardous samples (83.33%). These unacceptable results were due to high counts of pathogenic bacteria such as Listeria monocytogenes and Staphylococcus aureus. No significant differences were observed in microbiota counts from different fish species. The need to implement more accurate food safety systems was quite evident, especially in the warmer season, as well as in restaurants where other kinds of food, apart from Japanese meals, were prepared. © Crown copyright 2016.
Enhancement of low power CO2 laser cutting process for injection molded polycarbonate
NASA Astrophysics Data System (ADS)
Moradi, Mahmoud; Mehrabi, Omid; Azdast, Taher; Benyounis, Khaled Y.
2017-11-01
Laser cutting technology is a non-contact process that is typically used for industrial manufacturing applications. Laser cut quality is strongly influenced by the cutting process parameters. In this research, CO2 laser cutting specifications were investigated using design of experiments (DOE), considering laser cutting speed, laser power and focal plane position as process input parameters, and kerf geometry dimensions (i.e. top and bottom kerf width, ratio of the upper kerf to the lower kerf, upper heat affected zone (HAZ)) and surface roughness of the kerf wall as process output responses. A 60 W CO2 laser cutting machine was used for cutting injection molded samples of polycarbonate sheet with a thickness of 3.2 mm. Results reveal that decreasing the laser focal plane position and laser power decreases the bottom kerf width. The bottom kerf width also decreases with increasing cutting speed. In general, locating the laser focal point deeper within the workpiece increases the laser cutting quality. Minimum values of the responses (top kerf, heat affected zone, ratio of the upper kerf to the lower kerf, and surface roughness) are considered as optimization criteria. The theoretical results were validated against experimental tests in order to assess the predictions obtained via the software.
Decomposing ADHD-Related Effects in Response Speed and Variability
Karalunas, Sarah L.; Huang-Pollock, Cynthia L.; Nigg, Joel T.
2012-01-01
Objective: Slow and variable reaction times (RTs) on fast tasks are such a prominent feature of Attention Deficit Hyperactivity Disorder (ADHD) that any theory must account for them. However, this has proven difficult because the cognitive mechanisms responsible for this effect remain unexplained. Although speed and variability are typically correlated, it is unclear whether single or multiple mechanisms are responsible for group differences in each. RTs are a result of several semi-independent processes, including stimulus encoding, rate of information processing, speed-accuracy trade-offs, and motor response, which have not been previously well characterized. Method: A diffusion model was applied to RTs from a forced-choice RT paradigm in two large, independent case-control samples (N(Cohort 1) = 214 and N(Cohort 2) = 172). The decomposition measured three validated parameters that account for the full RT distribution, and assessed reproducibility of ADHD effects. Results: In both samples, group differences in traditional RT variables were explained by slow information processing speed, and were unrelated to speed-accuracy trade-offs or non-decisional processes (e.g. encoding, motor response). Conclusions: RT speed and variability in ADHD may be explained by a single information processing parameter, potentially simplifying explanations that assume different mechanisms are required to account for group differences in the mean and variability of RTs. PMID:23106115
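As a sketch of what a diffusion-model decomposition does, the closed-form EZ-diffusion equations (Wagenmakers et al., 2007) recover drift rate, boundary separation and non-decision time from three summary statistics. This is a simplified relative of the full model fitted in the study, shown with hypothetical inputs.

import numpy as np

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """pc: accuracy; vrt: variance of correct RTs (s^2); mrt: mean correct RT (s)."""
    # pc must lie strictly between 0.5 and 1; apply an edge correction otherwise
    L = np.log(pc / (1.0 - pc))
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25       # drift rate (information processing speed)
    a = s**2 * L / v                          # boundary separation (response caution)
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - np.exp(y)) / (1.0 + np.exp(y))
    ter = mrt - mdt                           # non-decision time (encoding + motor)
    return v, a, ter

print(ez_diffusion(pc=0.85, vrt=0.08, mrt=0.60))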
Mechanism for Plasma Etching of Shallow Trench Isolation Features in an Inductively Coupled Plasma
NASA Astrophysics Data System (ADS)
Agarwal, Ankur; Rauf, Shahid; He, Jim; Choi, Jinhan; Collins, Ken
2011-10-01
Plasma etching for microelectronics fabrication is facing extreme challenges as processes are developed for advanced technological nodes. As device sizes shrink, control of shallow trench isolation (STI) features becomes more important in both logic and memory devices. Halogen-based inductively coupled plasmas in a pressure range of 20-60 mTorr are typically used to etch STI features. The need for improved performance and shorter development cycles is placing greater emphasis on understanding the underlying mechanisms to meet process specifications. In this work, a surface mechanism for the STI etch process is discussed that couples a fundamental plasma model to experimental etch process measurements. This model utilizes ion/neutral fluxes and energy distributions calculated using the Hybrid Plasma Equipment Model. Experiments were performed on blanket Si wafers in a Cl2/HBr/O2/N2 plasma over a range of pressures, bias powers, and flow rates of feedstock gases. We found that kinetic treatment of electron transport was critical to achieve good agreement with experiments. The calibrated plasma model is then coupled to a string-based feature-scale model to quantify the effect of varying process parameters on the etch profile. We found that the operating parameters strongly influence critical dimensions but have only a subtle impact on the etch depths.
Effect of preheating on fatigue resistance of gears in spin induction coil hardening process
NASA Astrophysics Data System (ADS)
Kumar, Pawan; Aggarwal, M. L.
2018-02-01
Spin hardening inductors are typically used for gear geometries with fine-sized teeth. With the proper selection of several design parameters, only the gear teeth can be case surface hardened without affecting the other surfaces of the gear. Preheating may be done to reach an adapted high austenitizing temperature in the root circle and to avoid overheating of the tooth tip during final heating. The effect of preheating the gear on the control of compressive residual stresses and case hardening is discussed experimentally in this paper. The present work analyses the single-frequency mode, the preheat hardening treatment and the compressive residual stress field for the hardening process of a spur gear using spin hardening inductors.
Reduced order model based on principal component analysis for process simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Y.; Malacina, A.; Biegler, L.
2009-01-01
It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
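A minimal sketch of the PCA-ROM idea under strong simplifications: snapshots of a synthetic one-parameter "field" stand in for CFD runs, an SVD supplies the principal components, and polynomial regression maps the input parameter to the reduced coefficients. Accuracy depends on how densely the input domain is sampled.

import numpy as np

x = np.linspace(0.0, 1.0, 400)
def high_fidelity(p):                        # stand-in for an expensive CFD run
    return np.exp(-(x - 0.3 - 0.4 * p)**2 / 0.01) + 0.1 * np.sin(6.0 * x * p)

params = np.linspace(0.0, 1.0, 25)           # sampled input domain
S = np.stack([high_fidelity(p) for p in params], axis=1)   # snapshot matrix

mean = S.mean(axis=1, keepdims=True)
U, sv, _ = np.linalg.svd(S - mean, full_matrices=False)
r = np.searchsorted(np.cumsum(sv**2) / np.sum(sv**2), 0.9999) + 1
basis = U[:, :r]                             # retained principal components

coeffs = basis.T @ (S - mean)                # reduced coordinates, shape (r, 25)
fits = [np.polyfit(params, coeffs[i], deg=6) for i in range(r)]

def rom(p):                                  # fast surrogate evaluation
    c = np.array([np.polyval(fit, p) for fit in fits])
    return mean.ravel() + basis @ c

p_new = 0.37
err = np.linalg.norm(rom(p_new) - high_fidelity(p_new)) / np.linalg.norm(high_fidelity(p_new))
print(f"rank-{r} ROM, relative error at p = {p_new}: {err:.2e}")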
Image simulation for automatic license plate recognition
NASA Astrophysics Data System (ADS)
Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José
2012-01-01
Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
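A sketch of the second step under stated assumptions: a stand-in plate image is degraded by geometric, blur, gain/offset and noise transformations whose parameters would in practice be estimated from real plate images. The function and all values here are illustrative, not the paper's pipeline.

import numpy as np
from scipy.ndimage import gaussian_filter, rotate

rng = np.random.default_rng(3)

def degrade(img, angle_deg=2.0, blur_sigma=1.2, gain=0.9, offset=10.0, noise_sd=6.0):
    out = rotate(img.astype(float), angle_deg, reshape=False, order=1)  # camera tilt
    out = gaussian_filter(out, blur_sigma)            # optical and motion blur
    out = gain * out + offset                         # illumination/contrast change
    out += rng.normal(0.0, noise_sd, out.shape)       # sensor noise
    return np.clip(out, 0, 255).astype(np.uint8)

plate = np.full((60, 200), 255, dtype=np.uint8)       # stand-in for a synthesized plate
plate[20:40, 30:50] = 0                               # a dark "character" block
degraded = degrade(plate)                             # training sample for OCR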
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score (LCS) models are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS models when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS model. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS models can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
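A Monte Carlo sketch of the data-generating side of the dual change score model, with illustrative (not the study's) parameter values: each latent change combines a constant-change slope component with an autoproportion term, and observed scores add measurement error.

import numpy as np

rng = np.random.default_rng(4)
n_people, n_waves = 500, 6
mu_y0, mu_slope = 10.0, 3.0            # latent intercept and slope factor means
beta = -0.2                            # autoproportion coefficient
sd_y0, sd_slope, sd_e = 2.0, 0.8, 1.0

y0 = rng.normal(mu_y0, sd_y0, n_people)
slope = rng.normal(mu_slope, sd_slope, n_people)

latent = np.empty((n_people, n_waves))
latent[:, 0] = y0
for t in range(1, n_waves):
    change = slope + beta * latent[:, t - 1]     # dual change score
    latent[:, t] = latent[:, t - 1] + change

observed = latent + rng.normal(0.0, sd_e, latent.shape)
print("mean trajectory:", observed.mean(axis=0).round(2))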
General rigid motion correction for computed tomography imaging based on locally linear embedding
NASA Astrophysics Data System (ADS)
Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge
2018-02-01
Patient motion can damage the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based All Scale Tomographic Reconstruction Antwerp (ASTRA) toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.
Sader, John E; Yousefi, Morteza; Friend, James R
2014-02-01
Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.
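The fitting step the abstract describes can be sketched as follows, assuming the usual damped-oscillator spectral form plus a white-noise floor; gamma-distributed multiplicative noise models the sampling noise of an n-average periodogram, and all parameter values are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def psd_model(f, A, f0, Q, white):
    return white + A * f0**4 / ((f**2 - f0**2)**2 + (f * f0 / Q)**2)

rng = np.random.default_rng(5)
f = np.linspace(1e3, 1e5, 4000)                       # Hz
true = psd_model(f, 1e-6, 3.2e4, 80.0, 2e-7)
n_avg = 50                                            # number of averaged periodograms
data = true * rng.gamma(n_avg, 1.0 / n_avg, f.size)   # multiplicative sampling noise

f0_g, Q_g = f[np.argmax(data)], 50.0                  # robust initial guesses
p0 = [data.max() / Q_g**2, f0_g, Q_g, np.median(data)]
p, cov = curve_fit(psd_model, f, data, p0=p0)
perr = np.sqrt(np.diag(cov))                          # fit-parameter uncertainties
print("f0 = %.0f +/- %.0f Hz, Q = %.1f +/- %.1f" % (p[1], perr[1], p[2], perr[2]))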
Modeling of the Multiparameter Inverse Task of Transient Thermography
NASA Technical Reports Server (NTRS)
Plotnikov, Y. A.
1998-01-01
Transient thermography exploits variations in the surface temperature of a preheated workpiece caused by delaminations, cracks, voids, corroded regions, etc. Often it is enough to detect these changes to declare a defect in a workpiece, but it is also desirable to obtain additional information about the defect from the thermal response. The planar size, depth, and thermal resistance of the detected defects are the parameters of interest. In this paper a digital image processing technique is applied to simulated thermal responses in order to obtain the geometry of inclusion-type defects in a flat panel. A three-dimensional finite difference model in Cartesian coordinates is used for the numerical simulations. Typical physical properties of polymer graphite composites are assumed. The use of different informative parameters of the thermal response for depth estimation is discussed.
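A two-dimensional (for brevity; the paper uses 3-D) explicit finite-difference sketch of the forward problem, with assumed material values of the right order for polymer composites: a buried low-diffusivity inclusion perturbs the surface temperature above it.

import numpy as np

nx, nz, dx = 100, 40, 2.5e-4          # lateral/depth grid points, spacing (m)
alpha = np.full((nz, nx), 4.0e-7)     # thermal diffusivity (m^2/s)
alpha[15:18, 40:60] = 4.0e-8          # buried inclusion: 10x lower diffusivity
dt = 0.2 * dx**2 / alpha.max()        # within the explicit stability limit

T = np.zeros((nz, nx))
T[1, :] = 1.0                         # flash heating of the front surface

for _ in range(2000):
    T[0, :], T[-1, :] = T[1, :], T[-2, :]        # zero-flux front/back faces
    T[:, 0], T[:, -1] = T[:, 1], T[:, -2]        # zero-flux sides
    lap = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:]
           + T[1:-1, :-2] - 4.0 * T[1:-1, 1:-1]) / dx**2
    T[1:-1, 1:-1] += dt * alpha[1:-1, 1:-1] * lap

contrast = T[1, 50] - T[1, 5]         # surface signal above vs. away from the defect
print(f"surface thermal contrast after {2000 * dt:.1f} s: {contrast:.4f}")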
A review of model applications for structured soils: b) Pesticide transport.
Köhne, John Maximilian; Köhne, Sigrid; Simůnek, Jirka
2009-02-16
The past decade has seen considerable progress in the development of models simulating pesticide transport in structured soils subject to preferential flow (PF). Most PF pesticide transport models are based on the two-region concept and usually assume one (vertical) dimensional flow and transport. Stochastic parameter sets are sometimes used to account for the effects of spatial variability at the field scale. In the past decade, PF pesticide models were also coupled with Geographical Information Systems (GIS) and groundwater flow models for application at the catchment and larger regional scales. A review of PF pesticide model applications reveals that the principal difficulty of their application is still the appropriate parameterization of PF and pesticide processes. Experimental solution strategies involve improving measurement techniques and experimental designs. Model strategies aim at enhancing process descriptions, studying parameter sensitivity, uncertainty, inverse parameter identification, model calibration, and effects of spatial variability, as well as generating model emulators and databases. Model comparison studies demonstrated that, after calibration, PF pesticide models clearly outperform chromatographic models for structured soils. Considering nonlinear and kinetic sorption reactions further enhanced the pesticide transport description. However, inverse techniques combined with typically available experimental data are often limited in their ability to simultaneously identify parameters for describing PF, sorption, degradation and other processes. On the other hand, the predictive capacity of uncalibrated PF pesticide models currently allows at best an approximate (order-of-magnitude) estimation of concentrations. Moreover, models should target the entire soil-plant-atmosphere system, including often neglected above-ground processes such as pesticide volatilization, interception, sorption to plant residues, root uptake, and losses by runoff. The conclusions compile progress, problems, and future research choices for modelling pesticide displacement in structured soils.
Urzay, Javier; Llewellyn Smith, Stefan G; Thompson, Elinor; Glover, Beverley J
2009-08-21
Plant reproduction depends on pollen dispersal. For anemophilous (wind-pollinated) species, such as grasses and many trees, shedding pollen from the anther must be accomplished by physical mechanisms. The unknown nature of this process has led to its description as the 'paradox of pollen liberation'. A simple scaling analysis, supported by experimental measurements on typical wind-pollinated plant species, is used to estimate the suitability of previous resolutions of this paradox based on wind-gust aerodynamic models of fungal-spore liberation. According to this scaling analysis, the steady Stokes drag force is found to be large enough to liberate anemophilous pollen grains, and unsteady boundary-layer forces produced by wind gusts are found to be mostly ineffective since the ratio of the characteristic viscous time scale to the inertial time scale of acceleration of the wind stream is a small parameter for typical anemophilous species. A hypothetical model of a stochastic aeroelastic mechanism, initiated by the atmospheric turbulence typical of the micrometeorological conditions in the vicinity of the plant, is proposed to contribute to wind pollination.
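The scaling argument can be checked numerically with representative (assumed) values for a grain in air: the steady Stokes drag comfortably exceeds the grain's weight, while the viscous response time is small compared with a gust time scale.

import numpy as np

mu = 1.8e-5          # dynamic viscosity of air (Pa s)
rho_p = 1.0e3        # pollen grain density (kg/m^3), order of magnitude
a = 15e-6            # grain radius (m); typical diameters are tens of microns
U = 1.0              # wind speed past the anther (m/s)
g = 9.81

drag = 6.0 * np.pi * mu * a * U                   # steady Stokes drag
weight = (4.0 / 3.0) * np.pi * a**3 * rho_p * g
tau_p = 2.0 * rho_p * a**2 / (9.0 * mu)           # viscous (Stokes) response time
tau_gust = 0.1                                    # assumed gust acceleration time (s)

print(f"drag/weight ~ {drag / weight:.0f}")                 # steady drag lifts the grain
print(f"tau_viscous/tau_gust ~ {tau_p / tau_gust:.1e}")     # small: gusts are ineffective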
Explanation of the cw operation of the Er3+ 3-μm crystal laser
NASA Astrophysics Data System (ADS)
Pollnau, M.; Graf, Th.; Balmer, J. E.; Lüthy, W.; Weber, H. P.
1994-05-01
A computer simulation of the Er3+ 3-μm crystal laser considering the full rate-equation scheme up to the 4F7/2 level has been performed. The influence of the important system parameters on lasing, and the interaction of these parameters, has been clarified with multiple-parameter variations. Stimulated emission is fed mainly by up-conversion from the lower laser level and in many cases is reduced by the quenching of the lifetime of this level. However, even without up-conversion a set of parameters can be found that allows lasing. Up-conversion from the upper laser level is detrimental to stimulated emission but may be compensated by cross relaxation from the 4S3/2 level. For a typical experimental situation we started with the parameters of Er3+:LiYF4. In addition, the host materials Y3Al5O12 (YAG), YAlO3, Y3Sc2Al3O12 (YSGG), and BaY2F8, as well as the possibilities of codoping, are discussed. In view of the consideration of all excited levels up to 4F7/2, all lifetimes and branching ratios, ground-state depletion, excited-state absorption, three up-conversion processes as well as their inverse processes, stimulated emission, and a realistic resonator design, this is, to our knowledge, the most detailed investigation of the Er3+ 3-μm crystal laser performed so far.
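A toy three-manifold rate-equation sketch, with hypothetical normalized coefficients, of the recycling mechanism described above: two ions in the lower laser level up-convert, effectively returning one ion to the upper level. The paper's model is far more complete (all manifolds up to 4F7/2 plus a resonator).

import numpy as np
from scipy.integrate import solve_ivp

R_p = 500.0     # pump rate into the upper laser level (1/s)
tau_u = 1.0e-4  # upper-laser-level lifetime (s)
tau_l = 5.0e-3  # lower-laser-level lifetime (s)
W_up = 2.0e-2   # up-conversion coefficient (normalized units)

def rates(t, y):
    n_u, n_l = y
    n_g = 1.0 - n_u - n_l                   # ground-state population
    up = W_up * n_l**2                      # pair up-conversion from the lower level
    dn_u = R_p * n_g - n_u / tau_u + up     # one ion per pair reaches the upper level
    dn_l = n_u / tau_u - n_l / tau_l - 2.0 * up
    return [dn_u, dn_l]

sol = solve_ivp(rates, [0.0, 0.05], [0.0, 0.0], max_step=1e-5)
print("steady-state populations (upper, lower):", sol.y[:, -1].round(3))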
Assay Development Process | Office of Cancer Clinical Proteomics Research
Typical steps involved in the development of a mass spectrometry-based targeted assay include: (1) selection of surrogate or signature peptides corresponding to the targeted protein or modification of interest; (2) iterative optimization of instrument and method parameters for optimal detection of the selected peptide; (3) method development for protein extraction from biological matrices such as tissue, whole cell lysates, or blood plasma/serum and proteolytic digestion of proteins (usually with trypsin); (4) evaluation of the assay in the intended biological matrix to determine if e
Fabrication of PLA Filaments and its Printable Performance
NASA Astrophysics Data System (ADS)
Liu, Wenjie; Zhou, Jianping; Ma, Yuming; Wang, Jie; Xu, Jie
2017-12-01
Fused deposition modeling (FDM) is a typical 3D printing technology, and the preparation of qualified filaments is its foundation. In order to prepare polylactic acid (PLA) filaments suitable for personalized FDM 3D printing, this article investigated the effect of factors such as extrusion temperature and screw speed on the diameter, surface roughness and ultimate tensile stress of the obtained PLA filaments. The optimal process parameters for the fabrication of qualified filaments were determined. Further, the printable performance of the obtained PLA filaments for 3D objects was preliminarily explored.
Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance
NASA Astrophysics Data System (ADS)
Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola
2013-04-01
Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to comparing dominant processes between reality and model and to better understanding when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO) most of the time. The periods of high parameter sensitivity can be related to different phases of the hydrograph, with dominance of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high parameter sensitivities in phases of a precipitation event in combination with high soil water contents. The dominant parameters give an indication of the controlling processes during a given period for the hydrological catchment. The second step included the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures. These finger prints were clustered into four reoccurring patterns of typical model performance, which can be related to different phases of the hydrograph. Overall, the baseflow cluster has the lowest performance. By combining the periods of poor model performance with the dominant model components during these phases, the groundwater module was detected as the model part with the highest potential for model improvements. The detection of dominant processes in periods of poor model performance enhances the understanding of the SWAT model. Based on this, concepts for improving the SWAT model structure for application in German lowland catchments are derived.
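The FAST step can be sketched with the SALib package; here a toy function stands in for a SWAT run, whereas the study computed sensitivities from full model output at each time step. The parameter names match those above, but the bounds and the response function are assumptions.

import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["GW_DELAY", "ALPHA_BF", "ESCO"],
    "bounds": [[0.0, 500.0], [0.0, 1.0], [0.0, 1.0]],
}

X = fast_sampler.sample(problem, 1000)        # FAST parameter samples

def toy_model(row):                           # stand-in for one SWAT run
    gw, ab, esco = row
    return 0.002 * gw + 3.0 * ab**2 + 0.5 * np.sin(np.pi * esco)

Y = np.array([toy_model(r) for r in X])

# repeating this per time step yields the temporal dynamics of sensitivities
Si = fast.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))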
NASA Astrophysics Data System (ADS)
Godsey, S. E.; Kirchner, J. W.
2008-12-01
The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
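A sketch of the frequency-domain route, under the assumption of an exponential travel time distribution: the ratio of output to input spectral power follows |H(f)|^2 = 1 / (1 + (2 pi f tau)^2), from which the mean residence time tau can be recovered by least squares. Record length and sampling are synthetic, as in the study's experiments.

import numpy as np

rng = np.random.default_rng(6)
n, tau = 2**14, 30.0                          # record length (days), true MRT (days)
c_in = rng.normal(0.0, 1.0, n)                # tracer input anomalies

t = np.arange(n)
h = np.exp(-t / tau) / tau                    # exponential travel time distribution
c_out = np.convolve(c_in, h)[:n]              # damped stream tracer output

f = np.fft.rfftfreq(n, d=1.0)[1:]
ratio = (np.abs(np.fft.rfft(c_out))**2 / np.abs(np.fft.rfft(c_in))**2)[1:]

mask = f < 0.02                               # low-frequency band
y = 1.0 / ratio[mask] - 1.0                   # should equal (2 pi tau)^2 f^2
slope = np.sum(y * f[mask]**2) / np.sum(f[mask]**4)
tau_hat = np.sqrt(slope) / (2.0 * np.pi)
print(f"true MRT = {tau} days, spectral estimate = {tau_hat:.1f} days")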
Anisotropic Mesoscale Eddy Transport in Ocean General Circulation Models
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Fox-Kemper, B.; Bachman, S.; Bryan, F.; Dennis, J.; Danabasoglu, G.
2014-12-01
Modern climate models are limited to coarse-resolution representations of large-scale ocean circulation that rely on parameterizations for mesoscale eddies. The effects of eddies are typically introduced by relating subgrid eddy fluxes to the resolved gradients of buoyancy or other tracers, where the proportionality is, in general, governed by an eddy transport tensor. The symmetric part of the tensor, which represents the diffusive effects of mesoscale eddies, is universally treated isotropically in general circulation models. Thus, only a single parameter, namely the eddy diffusivity, is used at each spatial and temporal location to impart the influence of mesoscale eddies on the resolved flow. However, the diffusive processes that the parameterization approximates, such as shear dispersion, potential vorticity barriers, oceanic turbulence, and instabilities, typically have strongly anisotropic characteristics. Generalizing the eddy diffusivity tensor for anisotropy extends the number of parameters to three: a major diffusivity, a minor diffusivity, and the principal axis of alignment. The Community Earth System Model (CESM) with the anisotropic eddy parameterization is used to test various choices for the newly introduced parameters, which are motivated by observations and the eddy transport tensor diagnosed from high resolution simulations. Simply setting the ratio of major to minor diffusivities to a value of five globally, while aligning the major axis along the flow direction, improves biogeochemical tracer ventilation and reduces global temperature and salinity biases. These effects can be improved even further by parameterizing the anisotropic transport mechanisms in the ocean.
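A minimal sketch of the generalized tensor with the flow-aligned major axis and the major-to-minor ratio of five quoted above; the diffusivity magnitude is illustrative.

import numpy as np

def eddy_tensor(u, v, kappa_major=1000.0, ratio=5.0):
    """2x2 symmetric eddy diffusivity tensor (m^2/s) aligned with velocity (u, v)."""
    theta = np.arctan2(v, u)                  # principal axis of alignment
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    D = np.diag([kappa_major, kappa_major / ratio])
    return R @ D @ R.T

K = eddy_tensor(0.3, 0.1)
flux = -K @ np.array([1e-5, 2e-5])            # downgradient tracer flux
print(K.round(1), flux)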
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, G.O.; Dress, W.B.; Kercel, S.W.
1999-05-10
A major problem with cavitation in pumps and other hydraulic devices is that there is no effective method for detecting or predicting its inception. The traditional approach is to declare the pump in cavitation when the total head pressure drops by some arbitrary value (typically 3%) in response to a reduction in pump inlet pressure. However, the pump is already cavitating at this point. A method is needed in which cavitation events are captured as they occur and characterized by their process dynamics. The object of this research was to identify specific features of cavitation that could be used as a model-based descriptor in a context-dependent condition-based maintenance (CD-CBM) anticipatory prognostic and health assessment model. This descriptor was based on the physics of the phenomena, capturing the salient features of the process dynamics. An important element of this concept is the development and formulation of the extended process feature vector (EPFV), or model vector. This model-based descriptor encodes the specific information that describes the phenomena and its dynamics and is formulated as a data structure consisting of several elements. The first is a descriptive model abstracting the phenomena. The second is the parameter list associated with the functional model. The third is a figure of merit, a single number between [0,1] representing a confidence factor that the functional model and parameter list actually describe the observed data. Using this as a basis and applying it to the cavitation problem, any given location in a flow loop will have this data structure, differing in value but not content. The extended process feature vector is formulated as follows: EPFV => [<model>, {parameter list}, confidence factor]. (1) For this study, the model that characterized cavitation was a chirped, exponentially decaying sinusoid. Using the parameters defined by this model, the parameter list included frequency, decay, and chirp rate. Based on this, the process feature vector has the form: EPFV => [<chirped decaying sinusoid>, {ω = a, δ = b, γ = c}, cf = 0.80]. (2) In this experiment a reversible catastrophe was examined. The reason for this is that the same catastrophe could be repeated to ensure the statistical significance of the data.
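A sketch of the functional model and its figure of merit: a chirped, exponentially decaying sinusoid is fitted to a synthetic transient, and the residual is converted into an illustrative [0, 1] confidence factor. Initial guesses would in practice come from time-frequency analysis; the cf formula is a stand-in, not the report's definition.

import numpy as np
from scipy.optimize import curve_fit

def chirp_model(t, A, delta, f0, k, phi):
    return A * np.exp(-delta * t) * np.sin(2.0 * np.pi * (f0 * t + 0.5 * k * t**2) + phi)

rng = np.random.default_rng(7)
t = np.linspace(0.0, 0.02, 2000)                            # s
data = chirp_model(t, 1.0, 150.0, 2000.0, 5.0e4, 0.3)       # hypothetical event
data = data + rng.normal(0.0, 0.05, t.size)

p0 = [0.9, 140.0, 1990.0, 4.9e4, 0.2]                       # from a spectrogram, say
p, _ = curve_fit(chirp_model, t, data, p0=p0)

resid = data - chirp_model(t, *p)
cf = max(0.0, 1.0 - np.std(resid) / np.std(data))           # crude figure of merit
print("omega = %.0f Hz, delta = %.0f 1/s, gamma = %.2e Hz/s, cf = %.2f"
      % (p[2], p[1], p[3], cf))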
Level density parameter behaviour at high excitation energy
NASA Astrophysics Data System (ADS)
D'Arrigo, A.; Giardina, G.; Taccone, A.
1991-06-01
We present a formalism to calculate the intrinsic (without collective effects) and effective (with collective effects) level density parameters over a wide range of excitation energy up to 180 MeV. The behaviour of a_int and a_eff as a function of energy is shown for several typical nuclei (115Cd, 129Te, 148Pm, 173Yb, 192Ir and 248Cm). Moreover, local systematics of the parameter a_eff as a function of the neutron number N, also for nuclei extremely far from the β-stability line, are shown for some typical nuclei (Rb, Pd, Sn, Ba and Hg) at excitation energies of 15, 80 and 150 MeV.
Tethered Satellites as Enabling Platforms for an Operational Space Weather Monitoring System
NASA Technical Reports Server (NTRS)
Krause, L. Habash; Gilchrist, B. E.; Bilen, S.; Owens, J.; Voronka, N.; Furhop, K.
2013-01-01
Space weather nowcasting and forecasting models require assimilation of near-real time (NRT) space environment data to improve the precision and accuracy of operational products. Typically, these models begin with a climatological model to provide "most probable distributions" of environmental parameters as a function of time and space. The process of NRT data assimilation gently pulls the climate model closer toward the observed state (e.g. via Kalman smoothing) for nowcasting, and forecasting is achieved through a set of iterative physics-based forward-prediction calculations. The issue of required space weather observatories to meet the spatial and temporal requirements of these models is a complex one, and we do not address that with this poster. Instead, we present some examples of how tethered satellites can be used to address the shortfalls in our ability to measure critical environmental parameters necessary to drive these space weather models. Examples include very long baseline electric field measurements, magnetized ionospheric conductivity measurements, and the ability to separate temporal from spatial irregularities in environmental parameters. Tethered satellite functional requirements will be presented for each space weather parameter considered in this study.
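In its simplest scalar form, the assimilation idea described above reduces to a Kalman update that pulls a climatological prior toward each NRT observation; all numbers below are illustrative.

import numpy as np

x, P = 1200.0, 200.0**2     # climatological mean and variance (arbitrary parameter)
R = 80.0**2                 # NRT measurement error variance
Q = 50.0**2                 # process noise added per assimilation cycle

for y in [1020.0, 990.0, 1100.0]:       # incoming NRT observations
    P = P + Q                           # forecast step inflates uncertainty
    K = P / (P + R)                     # Kalman gain
    x = x + K * (y - x)                 # pull the state toward the observed value
    P = (1.0 - K) * P
    print(f"analysis = {x:7.1f}, sd = {np.sqrt(P):6.1f}")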
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
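The offline/online split described in (i)-(iii) can be sketched in a few lines for a toy problem with affine dependence A(mu) = A0 + mu*A1; the matrices, the output functional, and the chosen parameter points below are stand-ins, not the authors' problem.

import numpy as np

# Offline stage: snapshots of a parametrized system A(mu) u = f.
n = 200
rng = np.random.default_rng(0)
A0 = np.diag(2.0 + rng.random(n)); A1 = np.diag(rng.random(n))
f = np.ones(n); ell = rng.random(n)             # output: s(mu) = ell^T u(mu)

mus = [0.1, 1.0, 10.0]                          # N selected points in parameter space
W = np.column_stack([np.linalg.solve(A0 + mu * A1, f) for mu in mus])
W, _ = np.linalg.qr(W)                          # orthonormal reduced basis (N columns)

# Precompute the small (N x N) affine blocks once:
A0N, A1N = W.T @ A0 @ W, W.T @ A1 @ W
fN, ellN = W.T @ f, W.T @ ell

def output(mu):
    """Online stage: cost depends only on N, not on the full dimension n."""
    uN = np.linalg.solve(A0N + mu * A1N, fN)    # Galerkin projection onto W
    return ellN @ uN

print(output(0.5))

The a posteriori error bounds of component (ii) are omitted here; the sketch only shows why the online cost is independent of the underlying discretization size.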
Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie
2016-09-01
The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates and tumor volume calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using the stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and then, eight approaches were used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, by not considering missing data induced by sacrificing the animal, parameter estimates were biased and led to false inferences in terms of compound potency; the threshold concentration for tumor eradication when ignoring censoring was 581 ng·mL⁻¹, but the true value was 240 ng·mL⁻¹.
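A minimal simulation of the censoring mechanism at issue (sacrifice once the tumor crosses an ethical volume cap) can be sketched as follows; the growth model, cap, and noise levels are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

def simulate_study(n_mice=20, days=60, cap=3000.0):
    """Exponential tumor growth with sacrifice at the first cap crossing;
    later time points are then missing (right-censored by design)."""
    t = np.arange(days, dtype=float)
    records = []
    for _ in range(n_mice):
        v0 = rng.lognormal(np.log(100.0), 0.2)          # initial volume, mm^3
        k = rng.normal(0.05, 0.02)                      # growth rate, 1/day
        v = v0 * np.exp(k * t) * rng.lognormal(0.0, 0.1, size=days)
        alive = np.cumsum(v > cap) == 0                 # sacrificed at first crossing
        records.append(np.where(alive, v, np.nan))
    return t, np.array(records)

t, volumes = simulate_study()
censored = np.isnan(volumes[:, -1]).mean()
print(f"{100 * censored:.0f}% of mice sacrificed before day 60")
# Survivors-only average: fast growers were removed, so this is biased low.
print(f"naive day-60 mean: {np.nanmean(volumes[:, -1]):.0f} mm^3")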
Overview of refinement procedures within REFMAC5: utilizing data from different sources.
Kovalevskiy, Oleg; Nicholls, Robert A; Long, Fei; Carlon, Azzurra; Murshudov, Garib N
2018-03-01
Refinement is a process that involves bringing into agreement the structural model, available prior knowledge and experimental data. To achieve this, the refinement procedure optimizes a posterior conditional probability distribution of model parameters, including atomic coordinates, atomic displacement parameters (B factors), scale factors, parameters of the solvent model and twin fractions in the case of twinned crystals, given observed data such as observed amplitudes or intensities of structure factors. A library of chemical restraints is typically used to ensure consistency between the model and the prior knowledge of stereochemistry. If the observation-to-parameter ratio is small, for example when diffraction data only extend to low resolution, the Bayesian framework implemented in REFMAC5 uses external restraints to inject additional information extracted from structures of homologous proteins, prior knowledge about secondary-structure formation and even data obtained using different experimental methods, for example NMR. The refinement procedure also generates the 'best' weighted electron-density maps, which are useful for further model (re)building. Here, the refinement of macromolecular structures using REFMAC5 and related tools distributed as part of the CCP4 suite is discussed.
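The posterior-optimization idea can be illustrated with a one-dimensional toy: minimize a data term plus a restraint (prior) term. This is a sketch of the concept only, not REFMAC5's actual target function; all weights and values are invented.

import numpy as np
from scipy.optimize import minimize

# Toy restrained refinement: fit 1-D "positions" x to observed data,
# regularized by restraints keeping consecutive (bonded) atoms near an
# ideal separation taken from a restraint library.
ideal = 1.5                                 # ideal bond length (hypothetical)
obs = np.array([0.0, 1.4, 3.1, 4.4])        # observed positions (hypothetical)
w_data, w_restraint = 1.0, 10.0             # relative weights

def neg_log_posterior(x):
    data_term = w_data * np.sum((x - obs) ** 2)                  # -log likelihood
    bonds = np.diff(x)                                           # consecutive atoms bonded
    restraint_term = w_restraint * np.sum((bonds - ideal) ** 2)  # -log prior
    return data_term + restraint_term

res = minimize(neg_log_posterior, obs.copy())
print(res.x)

Raising the restraint weight pulls the solution toward ideal stereochemistry, which is exactly the regime needed when the observation-to-parameter ratio is small.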
Meigal, Alexander Yu.; Miroshnichenko, German G.; Kuzmina, Anna P.; Rissanen, Saara M.; Georgiadis, Stefanos D.; Karjalainen, Pasi A.
2015-01-01
We compared a set of surface EMG (sEMG) parameters in several groups of schizophrenia (SZ, n = 74) patients and healthy controls (n = 11) and coupled them with the clinical data. sEMG records were quantified with spectral, mutual information (MI) based and recurrence quantification analysis (RQA) parameters, and with approximate and sample entropies (ApEn and SampEn). Psychotic deterioration was estimated with the Positive and Negative Syndrome Scale (PANSS) and with the positive subscale of the PANSS. Neuroleptic-induced parkinsonism (NIP) motor symptoms were estimated with the Simpson-Angus Scale (SAS). Dyskinesia was measured with the Abnormal Involuntary Movement Scale (AIMS). We found that there was no difference in values of sEMG parameters between healthy controls and drug-naïve SZ patients. The most specific group was formed of SZ patients who were administered both typical and atypical antipsychotics (AP). Their sEMG parameters were significantly different from those of SZ patients taking either typical or atypical AP or taking no AP. This may represent a kind of synergistic effect of these two classes of AP. For the clinical data, we found that PANSS, SAS, and AIMS were not correlated with any of the sEMG parameters. In conclusion, with nonlinear parameters of sEMG it is possible to reveal NIP in SZ patients, and this may help to discriminate between different clinical groups of SZ patients. Combined typical and atypical AP therapy has a stronger effect on sEMG than therapy with AP of only one class.
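Sample entropy, one of the nonlinear sEMG measures used above, has a compact definition that can be sketched directly; the tolerance r and embedding dimension m below follow common defaults, and the input signal is a surrogate, not patient data.

import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r): -ln of the conditional probability that sequences
    matching within tolerance r for m points also match for m+1 points."""
    x = np.asarray(x, float)
    r = r_frac * np.std(x)
    def match_count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templ) - 1):       # count i<j pairs, no self-matches
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)
        return c
    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(6)
emg_like = rng.normal(size=2000)              # surrogate sEMG record
print(sample_entropy(emg_like))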
Computer Simulation in Predicting Biochemical Processes and Energy Balance at WWTPs
NASA Astrophysics Data System (ADS)
Drewnowski, Jakub; Zaborowska, Ewa; Hernandez De Vega, Carmen
2018-02-01
Nowadays, the use of mathematical models and computer simulation allows the analysis of many different technological solutions, as well as the testing of various scenarios, in a short time and at low cost, simulating each scenario under conditions typical of the real system and helping to find the best solution in the design or operation process. The aim of the study was to evaluate different concepts of biochemical process and energy balance modelling using the simulation platform GPS-x and the comprehensive model Mantis2. The paper presents an example of the calibration and validation processes in the biological reactor, as well as scenarios showing the influence of operational parameters on the WWTP energy balance. The results of batch tests and a full-scale campaign obtained in former work were used to predict biochemical and operational parameters in a newly developed plant model. The model was extended with sludge treatment devices, including an anaerobic digester. Primary sludge removal efficiency was found to be a significant factor determining biogas production and further renewable energy production in cogeneration. Water and wastewater utilities, which run and control WWTPs, are interested in optimizing the process in order to protect the environment, save their budget and decrease pollutant emissions to water and air. In this context, computer simulation can be the easiest and a very useful tool to improve efficiency without interfering with the actual process performance.
Rapid Processing of Net-Shape Thermoplastic Planar-Random Composite Preforms
NASA Astrophysics Data System (ADS)
Jespersen, S. T.; Baudry, F.; Schmäh, D.; Wakeman, M. D.; Michaud, V.; Blanchard, P.; Norris, R. E.; Månson, J.-A. E.
2009-02-01
A novel thermoplastic composite preforming and moulding process is investigated to target cost issues in textile composite processing associated with trim waste, and the limited mechanical properties of current bulk flow-moulding composites. The thermoplastic programmable powdered preforming process (TP-P4) uses commingled glass and polypropylene yarns, which are cut to length before air-assisted deposition onto a vacuum screen, enabling local preform areal weight tailoring. The as-placed fibres are heat-set for improved handling before an optional preconsolidation stage. The preforms are then preheated and press formed to obtain the final part. The process stages are examined to optimize part quality and throughput versus processing parameters. A viable processing route is proposed with typical cycle times below 40 s (for a plate 0.5 × 0.5 m², weighing 2 kg), enabling high production capacity from one line. The mechanical performance is shown to surpass that of 40 wt.% GMT and has properties equivalent to those of 40 wt.% GMTex at both 20°C and 80°C.
Kazemi, Pezhman; Khalid, Mohammad Hassan; Pérez Gago, Ana; Kleinebudde, Peter; Jachowicz, Renata; Szlęk, Jakub; Mendyk, Aleksander
2017-01-01
Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties based on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD) and the quality of the final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. Understanding how the roll compactor process parameters and material properties interact with each other will allow accurate control of the process, leading to the implementation of quality-by-design practices. Computational intelligence (CI) methods have great potential for being used within the scope of the quality-by-design approach. The main objective of this study was to show how computational intelligence techniques can be useful for predicting the GSD from different roll compaction process conditions and material properties. Different techniques such as multiple linear regression, artificial neural networks, random forest, Cubist and the k-nearest neighbors algorithm, assisted by sevenfold cross-validation, were used to present generalized models for the prediction of GSD based on roll compaction process settings and material properties. The normalized root-mean-squared error and the coefficient of determination (R²) were used for model assessment. The best fit was obtained by the Cubist model (normalized root-mean-squared error = 3.22%, R² = 0.95). Based on the results, it was confirmed that the material properties (true density), followed by compaction force, have the most significant effect on GSD.
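The cross-validated model comparison described above can be sketched with scikit-learn stand-ins (Cubist has no scikit-learn implementation, so it is omitted); the data below are synthetic placeholders for the roll-compaction settings and material properties.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import r2_score, mean_squared_error

# Stand-in data: rows = compaction runs, columns = process settings and
# material properties; target = one quantile of the GSD.
X, y = make_regression(n_samples=80, n_features=6, noise=5.0, random_state=0)

cv = KFold(n_splits=7, shuffle=True, random_state=0)   # sevenfold CV as in the study
for name, model in [("MLR", LinearRegression()),
                    ("RF", RandomForestRegressor(random_state=0)),
                    ("kNN", KNeighborsRegressor(n_neighbors=5))]:
    pred = cross_val_predict(model, X, y, cv=cv)
    nrmse = np.sqrt(mean_squared_error(y, pred)) / (y.max() - y.min()) * 100
    print(f"{name}: NRMSE = {nrmse:.2f}%, R2 = {r2_score(y, pred):.2f}")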
Applying the Theory of Constraints to a Base Civil Engineering Operations Branch
1991-09-01
[List of figures: 1. Typical Work Order Processing; 2. Typical Job Order Processing; 3. Typical Simplified In-Service Work Plan.]
Fast automated analysis of strong gravitational lenses with convolutional neural networks
NASA Astrophysics Data System (ADS)
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
2017-08-01
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
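A minimal PyTorch sketch of the idea, a CNN regressing SIE parameters (Einstein radius, two ellipticity components, lens center x/y) directly from an image; the architecture, image size, and training step below are illustrative, not the networks used in the paper.

import torch
import torch.nn as nn

class LensNet(nn.Module):
    """Small CNN mapping a lens image to 5 SIE profile parameters."""
    def __init__(self, n_params=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(64 * 4 * 4, n_params)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

net = LensNet()
images = torch.randn(8, 1, 96, 96)     # batch of simulated lens images (placeholder)
params = torch.randn(8, 5)             # known truth labels for supervised training
loss = nn.functional.mse_loss(net(images), params)
loss.backward()                        # one supervised training step (optimizer omitted)

Once trained on simulations, a single forward pass replaces the expensive downhill-optimizer fit, which is what makes the quoted speedup possible.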
Dynamic nuclear polarization assisted spin diffusion for the solid effect case.
Hovav, Yonatan; Feintuch, Akiva; Vega, Shimon
2011-02-21
The dynamic nuclear polarization (DNP) process in solids depends on the magnitudes of hyperfine interactions between unpaired electrons and their neighboring (core) nuclei, and on the dipole-dipole interactions between all nuclei in the sample. The polarization enhancement of the bulk nuclei has been typically described in terms of a hyperfine-assisted polarization of a core nucleus by microwave irradiation followed by a dipolar-assisted spin diffusion process in the core-bulk nuclear system. This work presents a theoretical approach for the study of this combined process using a density matrix formalism. In particular, solid effect DNP on a single electron coupled to a nuclear spin system is considered, taking into account the interactions between the spins as well as the main relaxation mechanisms introduced via the electron, nuclear, and cross-relaxation rates. The basic principles of the DNP-assisted spin diffusion mechanism, polarizing the bulk nuclei, are presented, and it is shown that the polarization of the core nuclei and the spin diffusion process should not be treated separately. To emphasize this observation the coherent mechanism driving the pure spin diffusion process is also discussed. In order to demonstrate the effects of the interactions and relaxation mechanisms on the enhancement of the nuclear polarization, model systems of up to ten spins are considered and polarization buildup curves are simulated. A linear chain of spins consisting of a single electron coupled to a core nucleus, which in turn is dipolar coupled to a chain of bulk nuclei, is considered. The interaction and relaxation parameters of this model system were chosen in a way to enable a critical analysis of the polarization enhancement of all nuclei, and are not far from the values of 13C nuclei in frozen (glassy) organic solutions containing radicals, typically used in DNP at high fields. Results from the simulations are shown, demonstrating the complex dependences of the DNP-assisted spin diffusion process on variations of the relevant parameters. In particular, the effect of the spin lattice relaxation times on the polarization buildup times and the resulting end polarization are discussed, and the quenching of the polarizations by the hyperfine interaction is demonstrated.
Effects of Process Parameters on Ultrasonic Micro-Hole Drilling in Glass and Ruby
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schorderet, Alain; Deghilage, Emmanuel; Agbeviade, Kossi
2011-05-04
Brittle materials such as ceramics, glasses and oxide single crystals find increasing applications in advanced micro-engineering products. Machining small features in such materials represents a manufacturing challenge. Ultrasonic drilling constitutes a promising technique for realizing simple micro-holes of high diameter-to-depth ratio. The process involves impacting abrasive particles, suspended in a liquid slurry, between tool and workpiece. Among the process performance criteria, the drilling time (productivity) is one of the most important quantities for evaluating the suitability of the process for industrial applications. This paper summarizes recent results pertaining to the ultrasonic micro-drilling process obtained with a semi-industrial 3-axis machine. The workpiece is vibrated at 40 kHz frequency with an amplitude of several micrometers. A voice-coil actuator and a control loop based on the drilling force impose the tool feed. In addition, the tool is rotated at a prescribed speed to improve the drilling speed as well as the hole geometry. Typically, a WC wire serves as tool to bore 200 μm diameter micro-holes of 300 to 1,000 μm depth in glass and ruby. The abrasive slurry contains B4C particles of 1 μm to 5 μm diameter in various concentrations. This paper discusses, on the basis of the experimental results, the influence of several parameters on the drilling time. First, the results show that the control strategy based on the drilling force allows higher feed rates to be reached (avoiding tool breakage). Typically, an 8 μm/s feed rate is achieved with glass and 0.9 μm/s with ruby. Tool rotation, even at values as low as 50 rpm, increases productivity and improves hole geometry. Drilling with 1 μm and 5 μm B4C particles yields similar productivity results. Our future research will focus on using the presented results to develop a model that can serve to optimize the process for different applications.
NASA Astrophysics Data System (ADS)
Mia, Mozammel; Bashir, Mahmood Al; Dhar, Nikhil Ranjan
2016-07-01
Hard turning is gradually replacing the time-consuming conventional turning process, which is typically followed by grinding, by producing surface quality comparable to that of grinding. The hard-turned surface roughness depends on the cutting parameters, machining environment and tool insert configuration. In this article the variation of the surface roughness of the produced surfaces with changes in tool insert configuration, use of coolant and different cutting parameters (cutting speed, feed rate) has been investigated. This investigation was performed in machining AISI 1060 steel, hardened to 56 HRC by heat treatment, using coated carbide inserts under two different machining environments. The depth of cut, fluid pressure and material hardness were kept constant. A Design of Experiment (DOE) was performed to determine the number and combinations of the different cutting parameters. A full factorial analysis has been performed to examine the effect of the main factors, as well as the interaction effects of factors, on surface roughness. A statistical analysis of variance (ANOVA) was employed to determine the combined effect of cutting parameters, environment and tool configuration. The results of this analysis reveal that environment has the most significant impact on surface roughness, followed by feed rate and tool configuration.
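The full factorial ANOVA step can be sketched with statsmodels on mock data; the factor levels and coefficients below are invented, not the experimental values.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Mock full-factorial data: roughness Ra versus cutting speed, feed rate,
# machining environment and tool insert configuration (3 replicates/cell).
rng = np.random.default_rng(3)
cells = [(v, f, e, tool)
         for v in (100, 150)           # cutting speed (illustrative units)
         for f in (0.10, 0.14)         # feed rate (illustrative units)
         for e in ("dry", "coolant")
         for tool in ("A", "B")]
df = pd.DataFrame(cells * 3, columns=["speed", "feed", "env", "tool"])
df["Ra"] = (1.2 + 4.0 * df["feed"]
            - 0.6 * (df["env"] == "coolant")
            + 0.2 * (df["tool"] == "B")
            + rng.normal(0.0, 0.1, len(df)))

model = ols("Ra ~ C(speed) * C(feed) * C(env) * C(tool)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))    # main effects and interactions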
Influence of Powder Injection Parameters in High-Pressure Cold Spray
NASA Astrophysics Data System (ADS)
Ozdemir, Ozan C.; Widener, Christian A.
2017-10-01
High-pressure cold spray systems are becoming widely accepted for use in the structural repair of surface defects of expensive machinery parts used in industrial and military equipment. The deposition quality of cold spray repairs is typically validated using coupon testing and through destructive analysis of mock-ups or first articles for a defined set of parameters. In order to provide a reliable repair, it is important to not only maintain the same processing parameters, but also to have optimum fixed parameters, such as the particle injection location. This study is intended to provide insight into the sensitivity of the way that the powder is injected upstream of supersonic nozzles in high-pressure cold spray systems and the effects of variations in injection parameters on the nature of the powder particle kinetics. Experimentally validated three-dimensional computational fluid dynamics (3D CFD) models are implemented to study the particle impact conditions for varying powder feeder tube size, powder feeder tube axial misalignment, and radial powder feeder injection location on the particle velocity and the deposition shape of aluminum alloy 6061. Outputs of the models are statistically analyzed to explore the shape of the spray plume distribution and resulting coating buildup.
NASA Astrophysics Data System (ADS)
Sánchez Gácita, Madeleine; Longo, Karla M.; Freitas, Saulo R.; Martin, Scot T.
2015-04-01
Biomass burning constitutes an important source of aerosols and trace gases to the atmosphere globally. In South America during the dry season, aerosols originating from biomass burning are typically transported long distances from their sources before being removed, thus contributing significantly to the aerosol budget on a continental scale. The uncertainties in the magnitude of the impacts on the hydrological cycle, the radiation budget and the biogeochemical cycles on a continental scale are still noteworthy. The remaining unknowns regarding the efficiency of biomass burning aerosol to act as cloud condensation nuclei (CCN), and the effectiveness of the nucleation and impaction scavenging mechanisms in removing it from the atmosphere, contribute to such uncertainties. In the present work, explicit modelling of the early stages of cloud development using a parcel model, for conditions typical of the dry season and dry-to-wet transition periods in Amazonia, allowed an estimation of the efficiency of the nucleation scavenging process and of the ability of South American biomass burning aerosol to act as CCN. Additionally, impaction scavenging was simulated for the same aerosol population following a method based on the widely used concept of the collision efficiency between a raindrop and an aerosol particle. DMPS and H-TDMA data available in the literature for the biomass burning aerosol population in the region indicated the presence of a nearly hydrophobic fraction (on average, with specific hygroscopic parameter κ = 0.04 and relative abundance of 73%) and a nearly hygroscopic fraction (κ = 0.13, 27%), externally mixed. The hygroscopic parameters and relative abundances of each hygroscopic group, as well as the weighted average specific hygroscopic parameter for the entire population, κ = 0.06, were used in calculations of aerosol activation and of the population mass and number concentration scavenged by nucleation. Results from both groups of simulations are presented and discussed. This work provides insight into the importance of including these processes in regional and global models. The authors thank the Sao Paulo Research Foundation FAPESP for supporting this work through the projects DR 2012/09934-3 and BEPE-DR 2013/02101-9.
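The activation calculation rests on κ-Köhler theory; a minimal sketch using the κ values quoted above, with standard water properties at 298 K and a 100 nm dry diameter chosen purely for illustration.

import numpy as np

def critical_supersaturation(kappa, d_dry, T=298.15):
    """kappa-Koehler critical supersaturation (%) for dry diameter d_dry (m)."""
    sigma, Mw, rho_w, R = 0.072, 0.018, 1000.0, 8.314   # water properties, SI
    A = 4.0 * sigma * Mw / (R * T * rho_w)               # Kelvin term (m)
    sc = np.exp(np.sqrt(4.0 * A ** 3 / (27.0 * kappa * d_dry ** 3))) - 1.0
    return 100.0 * sc

for kappa, label in [(0.04, "nearly hydrophobic"), (0.13, "hygroscopic"),
                     (0.06, "population average")]:
    print(f"{label}: Sc = {critical_supersaturation(kappa, 100e-9):.2f}% at 100 nm")

Lower κ pushes the critical supersaturation up, which is why the hydrophobic fraction activates less readily and is scavenged less efficiently by nucleation.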
Searching for the doubly charged scalars in the Georgi-Machacek model via γγ collisions at the ILC
NASA Astrophysics Data System (ADS)
Cao, Jun; Li, Yu-Qi; Liu, Yao-Bei
2018-04-01
The Georgi-Machacek (GM) model predicts the existence of the doubly-charged scalars H5±±, which can be seen as the typical particles of this model, and their diboson decay channels are among the most promising ways to discover such new doubly-charged scalars. Based on the constraints of the latest combined ATLAS and CMS Higgs boson diphoton signal strength data at the 2σ confidence level, we focus on the study of triple scalar production in γγ collisions at the future International Linear Collider (ILC): γγ → hH5++H5−−, where the production cross-sections are very sensitive to the triple scalar coupling parameter ghHH. Considering the typical same-sign diboson decay modes for the doubly-charged scalars, the possible final signals might be detected via this process at future ILC experiments.
On numerical reconstructions of lithographic masks in DUV scatterometry
NASA Astrophysics Data System (ADS)
Henn, M.-A.; Model, R.; Bär, M.; Wurm, M.; Bodermann, B.; Rathsfeld, A.; Gross, H.
2009-06-01
The solution of the inverse problem in scatterometry employing deep ultraviolet light (DUV) is discussed, i.e. we consider the determination of periodic surface structures from light diffraction patterns. With decreasing dimensions of the structures on photolithography masks and wafers, increasing demands on the required metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line structures in order to determine the sidewall angles, heights, and critical dimensions (CD), i.e., the top and bottom widths. The latter quantities are typically in the range of tens of nanometers. All these angles, heights, and CDs are the fundamental figures for evaluating the quality of the manufacturing process. To measure these quantities a DUV scatterometer is used, which typically operates at a wavelength of 193 nm. The diffraction of light by periodic 2D structures can be simulated using the finite element method for the Helmholtz equation. The corresponding inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Fixing the class of gratings and the set of measurements, this inverse problem reduces to a finite-dimensional nonlinear operator equation. Reformulating the problem as an optimization problem, a vast number of numerical schemes can be applied. Our tool is a sequential quadratic programming (SQP) variant of the Gauss-Newton iteration. In a first step, in which we use a simulated data set, we investigate how accurately the geometrical parameters of an EUV mask can be reconstructed using light in the DUV range. We then determine the expected uncertainties of geometric parameters by reconstructing from simulated input data perturbed by noise representing the estimated uncertainties of the input data. In the last step, we use the measurement data obtained from the new DUV scatterometer at PTB to determine the geometrical parameters of a typical EUV mask with our reconstruction algorithm. The results are compared to the outcome of investigations with two alternative methods, namely EUV scatterometry and SEM measurements.
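A Gauss-Newton loop of the kind described might look like the sketch below, with a cheap analytic stand-in for the forward model (in the real problem the forward operator is a finite element solution of the Helmholtz equation, and the update is embedded in an SQP scheme).

import numpy as np

def forward(p):
    """Stand-in forward model mapping grating parameters (height, CD,
    sidewall angle) to a vector of diffraction efficiencies; illustrative only."""
    h, cd, swa = p
    angles = np.linspace(0.1, 1.0, 12)
    return np.exp(-h * angles) * np.cos(cd * angles) + 0.01 * swa * angles

def gauss_newton(p0, y_meas, n_iter=20):
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = forward(p) - y_meas                        # residual
        J = np.empty((r.size, p.size))                 # numerical Jacobian
        for j in range(p.size):
            dp = np.zeros_like(p); dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (forward(p + dp) - forward(p)) / dp[j]
        p = p - np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton step
    return p

true_p = np.array([0.8, 2.0, 0.5])
y = forward(true_p) + np.random.default_rng(4).normal(0, 1e-3, 12)
print(gauss_newton([1.0, 1.8, 0.3], y))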
Williams, Loriann; Jackson, Carl P T; Choe, Noreen; Pelland, Lucie; Scott, Stephen H; Reynolds, James N
2014-01-01
Fetal alcohol spectrum disorder (FASD) is associated with a large number of cognitive and sensory-motor deficits. In particular, the accurate assessment of sensory-motor deficits in children with FASD is not always simple and relies on clinical assessment tools that may be coarse and subjective. Here we present a new approach: using robotic technology to accurately and objectively assess motor deficits of children with FASD in a center-out reaching task. A total of 152 typically developing children and 31 children with FASD, all aged between 5 and 18, were assessed using a robotic exoskeleton device coupled with a virtual reality projection system. Children made reaching movements to 8 peripheral targets in a random order. Reach trajectories were subsequently analyzed to extract 12 parameters that had previously been determined to be good descriptors of a reaching movement, and these parameters were compared for each child with FASD to a normative model derived from the performance of the typically developing population. Compared with typically developing children, the children with FASD were found to be significantly impaired on most of the parameters measured, with the greatest deficits found in initial movement direction error. Also, children with FASD tended to fail more parameters than typically developing children: 95% of typically developing children failed fewer than 3 parameters, compared with 69% of children with FASD. These results were particularly pronounced for younger children. The current study has shown that robotic technology is a sensitive and powerful tool that provides increased specificity regarding the type of motor problems exhibited by children with FASD. The high frequency of motor deficits in children with FASD suggests that interventions aimed at stimulating and/or improving motor development should routinely be considered for this population. Copyright © 2013 by the Research Society on Alcoholism.
Changes in the microbial communities during co-composting of digestates.
Franke-Whittle, Ingrid H; Confalonieri, Alberto; Insam, Heribert; Schlegelmilch, Mirko; Körner, Ina
2014-03-01
Anaerobic digestion is a waste treatment method which is of increasing interest worldwide. At the end of the process, a digestate remains, which can gain added value by being composted. A study was conducted in order to investigate microbial community dynamics during the composting process of a mixture of anaerobic digestate (derived from the anaerobic digestion of municipal food waste), green wastes and a screened compost (green waste/kitchen waste compost), using the COMPOCHIP microarray. The composting process showed a typical temperature development, and the highest degradation rates occurred during the first 14 days of composting, as seen from the elevated CO2 content in the exhaust air. With the exception of elevated nitrite and nitrate levels in the day 34 samples, physical-chemical parameters for all compost samples collected during the 63-day process indicated typical composting conditions. The microbial communities changed over the 63 days of composting. According to principal component analysis of the COMPOCHIP microarray results, compost samples from the start of the experiment were found to cluster most closely with the digestate and screened compost samples. The green waste samples were found to group separately. All starting materials investigated were found to yield fewer and lower signals when compared to the samples collected during the composting experiment. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Finite Element Analysis of Laser Engineered Net Shape (LENS™) Tungsten Clad Squeeze Pins
NASA Astrophysics Data System (ADS)
Sakhuja, Amit; Brevick, Jerald R.
2004-06-01
In the aluminum high-pressure die-casting and indirect squeeze casting processes, local "squeeze" pins are often used to minimize internal solidification shrinkage in heavy casting sections. Squeeze pins frequently fail in service due to molten aluminum adhering to the H13 tool steel pins ("soldering"). A wide variety of coating materials and methods have been developed to minimize soldering on H13. However, these coatings are typically very thin, and experience has shown their performance on squeeze pins is highly variable. The LENS™ process was employed in this research to deposit a relatively thick tungsten cladding on squeeze pins. An advantage of this process was that the process parameters could be precisely controlled in order to produce a satisfactory cladding. Two fixtures were designed and constructed to enable the end and outer diameter (OD) of the squeeze pins to be clad. Analyses were performed on the clad pins to evaluate the microstructure and chemical composition of the tungsten cladding and the cladding-H13 substrate interface. A thermo-mechanical finite element analysis (FEA) was performed to assess the stress distribution as a function of cladding thickness on the pins during a typical casting thermal cycle. FEA results were validated via a physical test, where the clad squeeze pins were immersed into molten aluminum. Pins subjected to the test were evaluated for thermally induced cracking and resistance to soldering of the tungsten cladding.
NASA Astrophysics Data System (ADS)
Martinez, I. A.; Eisenmann, D.
2012-12-01
Ground Penetrating Radar (GPR) has been used for many years in successful subsurface detection of conductive and non-conductive objects in all types of material, including different soils and concrete. Typical defect detection is based on subjective examination of processed scans, using data collection and analysis software to acquire and analyze the data, often requiring developed expertise or an awareness of how a GPR works while collecting data. Processing programs, such as GSSI's RADAN analysis software, are then used to validate the collected information. Iowa State University's Center for Nondestructive Evaluation (CNDE) has built a test site, resembling a typical levee used near rivers, which contains known sub-surface targets of varying size, depth, and conductivity. Scientists at CNDE have developed software with enhanced capabilities to decipher a hyperbola's magnitude and amplitude for GPR signal processing. With this enhanced capability, the signal processing and defect detection capabilities of GPR have the potential to be greatly improved. This study will examine the effects of test parameters, antenna frequency (400 MHz), data manipulation methods (including data filters and restricting the depth range that the chosen antenna's signal can reach), and real-world conditions at this test site (such as varying weather conditions), with the goal of improving the sensitivity of GPR tests for differing soil conditions.
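The hyperbola referred to above has a simple geometric model: the two-way travel time over a buried point target. A sketch of fitting it to picked arrivals follows; all numbers are synthetic.

import numpy as np
from scipy.optimize import curve_fit

def gpr_hyperbola(x, x0, depth, v):
    """Two-way travel time (ns) of a point target at (x0, depth) as the
    antenna passes over position x (m); v is the soil velocity in m/ns."""
    return 2.0 * np.sqrt(depth ** 2 + (x - x0) ** 2) / v

# Synthetic B-scan picks around a buried target; the fit recovers location,
# depth and velocity (hence permittivity, eps_r = (0.3 / v)**2).
x = np.linspace(0.0, 2.0, 21)
t = gpr_hyperbola(x, 1.0, 0.5, 0.1) + np.random.default_rng(7).normal(0, 0.05, x.size)
p, _ = curve_fit(gpr_hyperbola, x, t, p0=[0.9, 0.4, 0.12])
print(dict(zip(["x0_m", "depth_m", "v_m_per_ns"], p)))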
Shi, Xuchuan; Lin, Jia; Zuo, Jiane; Li, Peng; Li, Xiaoxia; Guo, Xianglin
2017-05-01
The effect of free ammonia on volatile fatty acid (VFA) accumulation and process instability was studied using a lab-scale anaerobic digester fed by two typical bio-wastes: fruit and vegetable waste (FVW) and food waste (FW) at 35°C with an organic loading rate (OLR) of 3.0 kg VS/(m³·day). The inhibitory effects of free ammonia on methanogenesis were observed due to the low C/N ratio of each substrate (15.6 and 17.2, respectively). A high concentration of free ammonia inhibited methanogenesis, resulting in the accumulation of VFAs and a low methane yield. In the inhibited state, acetate accumulated more quickly than propionate and was the main type of accumulated VFA. The co-accumulation of ammonia and VFAs led to an "inhibited steady state", and ammonia was the main inhibitory substance that triggered the process perturbation. By statistical significance testing and VFA fluctuation ratio analysis, the free ammonia inhibition threshold was identified as 45 mg/L. Moreover, propionate, iso-butyrate and valerate were determined to be the three VFA parameters most sensitive to ammonia inhibition. Copyright © 2016. Published by Elsevier B.V.
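Free ammonia is conventionally computed from total ammonia nitrogen (TAN), pH and temperature via the NH4+/NH3 equilibrium; the sketch below uses an Emerson-type pKa, an assumption on our part since the paper's exact formula is not quoted here.

def free_ammonia(tan_mg_l, pH, T_celsius=35.0):
    """Free (unionized) NH3 (mg/L) from TAN, pH and temperature, using a
    temperature-dependent pKa of the Emerson form (assumed, see lead-in)."""
    T = T_celsius + 273.15
    pKa = 0.09018 + 2729.92 / T
    return tan_mg_l / (1.0 + 10.0 ** (pKa - pH))

# e.g. does a digester at pH 7.8 with TAN of 2000 mg/L (hypothetical values)
# exceed the 45 mg/L inhibition threshold identified in this study?
fa = free_ammonia(2000.0, 7.8)
print(f"free ammonia = {fa:.0f} mg/L -> {'inhibitory' if fa > 45 else 'ok'}")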
A derating method for therapeutic applications of high intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Bessonova, O. V.; Khokhlova, V. A.; Canney, M. S.; Bailey, M. R.; Crum, L. A.
2010-05-01
Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water, assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. A new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high-gain sources typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.
The extended Kubelka-Munk theory and its application to colloidal systems
NASA Astrophysics Data System (ADS)
Alcaraz de la Osa, R.; Fernández, A.; Gutiérrez, Y.; Ortiz, D.; González, F.; Moreno, F.; Saiz, J. M.
2017-08-01
The use of nanoparticles is spreading in many fields and a frequent way of preparing them is in the form of colloids, whose characterization becomes increasingly important. The spectral reflectance and transmittance curves of such colloids exhibit a strong dependence with the main parameters of the system. By means of a two-flux model we have performed a colorimetric study of gold colloids varying several parameters of the system, including the radius of the particles, the particle number density, the thickness of the system and the refractive index of the surrounding medium. In all cases, trajectories in the L*a*b* color space have been obtained, as well as the evolution of the luminosity, chroma and hue, either for reflectance or transmittance. The observed colors agree well with typical colors found in the literature for colloidal gold, and could allow for a fast assessment of the parameters involved, e.g., the radius of the nanoparticle during the fabrication process.
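The two-flux (Kubelka-Munk) reflectance and transmittance of a single layer have closed forms that a colorimetric sweep of this kind builds on; a sketch with illustrative absorption K, scattering S, and thickness values, assuming no backing reflectance.

import numpy as np

def kubelka_munk(K, S, d):
    """Two-flux reflectance and transmittance of a layer of thickness d with
    absorption K and scattering S (same inverse-length units), black backing."""
    a = 1.0 + K / S
    b = np.sqrt(a * a - 1.0)
    denom = a * np.sinh(b * S * d) + b * np.cosh(b * S * d)
    return np.sinh(b * S * d) / denom, b / denom       # (R, T)

# Sweep the layer thickness for fixed K and S: R rises toward its
# infinite-thickness limit (a - b) while T falls.
for d in (0.1, 0.5, 2.0):                               # cm, illustrative
    R, T = kubelka_munk(K=1.0, S=5.0, d=d)
    print(f"d = {d:4.1f} cm: R = {R:.3f}, T = {T:.3f}")

Applying such R and T spectra wavelength-by-wavelength, weighted by the observer functions, yields the L*a*b* trajectories discussed above.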
Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean
NASA Astrophysics Data System (ADS)
Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.
2011-12-01
Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid-scale ocean vertical mixing processes. These parameters are typically estimated using Earth System Models of Intermediate Complexity (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM), varying parameters that affect climate sensitivity, vertical ocean mixing, and the effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to arbitrary parameter settings, and a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions and by the scaling parameter for the aerosols. The estimation method is computationally fast and can be used with more complex models where climate sensitivity is diagnosed rather than prescribed. The parameter estimates can be used to create probabilistic climate projections with the UVic ESCM model in future studies.
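The emulator-plus-MCMC machinery can be sketched in one dimension; the "model", observation, prior bounds and proposal scale below are placeholders, not the UVic ESCM setup.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

def run_model(theta):
    """Stand-in for an expensive ESM run: maps a parameter (e.g. climate
    sensitivity) to a simulated warming trend; purely illustrative."""
    return 0.8 * theta + 0.1 * np.sin(3.0 * theta)

# Emulate the model from a handful of design runs, then sample the posterior.
design = np.linspace(0.5, 6.0, 12)[:, None]
gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-6).fit(
    design, run_model(design[:, 0]))

obs, obs_sigma = 2.5, 0.3                  # "observed" warming and its error

def log_post(theta):
    if not 0.5 <= theta <= 6.0:            # uniform prior bounds
        return -np.inf
    pred = gp.predict(np.array([[theta]]))[0]
    return -0.5 * ((pred - obs) / obs_sigma) ** 2

samples, theta = [], 3.0
for _ in range(2000):                      # Metropolis random walk
    prop = theta + 0.3 * rng.normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
print(np.percentile(samples, [5, 50, 95]))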
Kollmeier, Birger; Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T; Brand, Thomas
2016-09-07
To characterize the individual patient's hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system, requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The "typical" audiogram shapes from Bisgaard et al., with or without a "typical" level uncertainty, and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram for accurately modeling the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only. © The Author(s) 2016.
cisTEM, user-friendly software for single-particle image processing.
Grant, Timothy; Rohou, Alexis; Grigorieff, Nikolaus
2018-03-07
We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2,000 micrographs, 200,000-300,000 particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. © 2018, Grant et al.
Plasma-gun-assisted field-reversed configuration formation in a conical θ-pinch
NASA Astrophysics Data System (ADS)
Weber, T. E.; Intrator, T. P.; Smith, R. J.
2015-04-01
Injection of plasma via an annular array of coaxial plasma guns during the pre-ionization phase of field-reversed configuration (FRC) formation is shown to catalyze the bulk ionization of a neutral gas prefill in the presence of a strong axial magnetic field and to change the character of outward flux flow during field-reversal from a convective process to a much slower resistive diffusion process. This approach has been found to significantly improve FRC formation in a conical θ-pinch, resulting in a ~350% increase in trapped flux at typical operating conditions, an expansion of accessible formation parameter space to lower densities and higher temperatures, and a reduction or elimination of several deleterious effects associated with the pre-ionization phase.
Williams, Calum; Rughoobur, Girish; Flewitt, Andrew J; Wilkinson, Timothy D
2016-11-10
A single-step fabrication method is presented for ultra-thin, linearly variable optical bandpass filters (LVBFs) based on a metal-insulator-metal arrangement using modified evaporation deposition techniques. This alternative process methodology offers reduced complexity and cost in comparison to conventional techniques for fabricating LVBFs. We are able to achieve linear variation of insulator thickness across a sample by adjusting the geometrical parameters of a typical physical vapor deposition process. We demonstrate LVBFs with spectral selectivity from 400 to 850 nm based on Ag (25 nm) and MgF2 (75-250 nm). Maximum spectral transmittance is measured at ∼70% with a Q-factor of ∼20.
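Ignoring the reflection phase of the metal mirrors, the local passband of such a filter follows the ideal cavity resonance 2*n*d = m*lambda; the sketch below maps position along the filter to peak wavelength under that simplification, so the filter length and the resulting numbers are only indicative, not the measured device response.

def lvbf_peak(position_mm, length_mm=25.0, t_min=75e-9, t_max=250e-9,
              n_ins=1.38, order=1):
    """Peak wavelength (m) of a metal-insulator-metal cavity whose MgF2
    thickness varies linearly along the filter; metal phase neglected."""
    t = t_min + (t_max - t_min) * position_mm / length_mm   # local thickness
    return 2.0 * n_ins * t / order                           # ideal resonance

for x in (0.0, 12.5, 25.0):
    print(f"x = {x:5.1f} mm -> peak ~ {lvbf_peak(x) * 1e9:.0f} nm")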
Integration of local motion is normal in amblyopia
NASA Astrophysics Data System (ADS)
Hess, Robert F.; Mansouri, Behzad; Dakin, Steven C.; Allen, Harriet A.
2006-05-01
We investigate the global integration of local motion direction signals in amblyopia, in a task where performance is equated between normal and amblyopic eyes at the single element level. We use an equivalent noise model to derive the parameters of internal noise and number of samples, both of which we show are normal in amblyopia for this task. This result is in apparent conflict with a previous study in amblyopes showing that global motion processing is defective in global coherence tasks [Vision Res. 43, 729 (2003)]. A similar discrepancy between the normalcy of signal integration [Vision Res. 44, 2955 (2004)] and anomalous global coherence form processing has also been reported [Vision Res. 45, 449 (2005)]. We suggest that these discrepancies for form and motion processing in amblyopia point to a selective problem in separating signal from noise in the typical global coherence task.
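The equivalent noise model referred to above relates observed thresholds to internal noise and the number of pooled samples; a sketch of fitting those two parameters follows, with the threshold data invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

def equivalent_noise(sigma_ext, sigma_int, n_samples):
    """Observed direction-discrimination threshold under the equivalent
    noise model: internal and external noise add, averaged over n samples."""
    return np.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_samples)

# Hypothetical thresholds (deg) at several external direction-noise levels.
sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
thresholds = np.array([1.1, 1.3, 1.8, 3.0, 5.8])

(sigma_int, n_samples), _ = curve_fit(equivalent_noise, sigma_ext, thresholds,
                                      p0=[2.0, 4.0])
print(f"internal noise ~ {sigma_int:.2f} deg, samples ~ {n_samples:.1f}")

Equal fitted values of internal noise and sample number across the two eyes is exactly the "normal integration" outcome the abstract reports.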
Application of laser spot cutting on spring contact probe for semiconductor package inspection
NASA Astrophysics Data System (ADS)
Lee, Dongkyoung; Cho, Jungdon; Kim, Chan Ho; Lee, Seung Hwan
2017-12-01
A packaged semiconductor has to be electrically tested to make sure it is free of any manufacturing defects. The test interface, typically employed between a Printed Circuit Board and the semiconductor devices, consists of densely populated Spring Contact Probes (SCPs). A standard SCP typically consists of a plunger, a barrel, and an internal spring. Among these components, plungers are manufactured by a stamping process. After stamping, plunger connecting arms need to be cut into pieces. Currently, mechanical cutting has been used. However, it may damage the body of the plungers due to the mechanical force engaged at the cutting point. Therefore, laser spot cutting is considered to solve this problem. The plunger arm is in the shape of a rectangular beam, 50 μm (H) × 90 μm (W). The plunger material used for this research is gold-coated beryllium copper. Laser parameters, such as power and elapsed time, have been selected to study laser spot cutting. Laser-material interaction characteristics such as crater size, material removal zone, ablation depth, ablation threshold, and full penetration are observed. Furthermore, a carefully chosen laser parameter (E_total = 1000 mJ) is applied to test the feasibility of laser spot cutting. The results show that laser spot cutting can be applied to cut SCPs.
Moro, Erik A; Todd, Michael D; Puckett, Anthony D
2012-09-20
In static tests, low-power (<5 mW) white light extrinsic Fabry-Perot interferometric position sensors offer high-accuracy (μm) absolute measurements of a target's position over large (cm) axial-position ranges, and since position is demodulated directly from phase in the interferogram, these sensors are robust to fluctuations in measured power levels. However, target surface dynamics distort the interferogram via Doppler shifting, introducing a bias in the demodulation process. With typical commercial off-the-shelf hardware, a broadband source centered near 1550 nm, and an otherwise typical setup, the bias may be as large as 50-100 μm for target surface velocities as low as 0.1 mm/s. In this paper, the authors derive a model for this Doppler-induced position bias, relating its magnitude to three swept-filter tuning parameters. Target velocity (magnitude and direction) is calculated using this relationship in conjunction with a phase-diversity approach, and knowledge of the target's velocity is then used to compensate exactly for the position bias. The phase-diversity approach exploits side-by-side measurement signals, transmitted through separate swept filters with distinct tuning parameters, and permits simultaneous measurement of target velocity and target position, thereby mitigating the most fundamental performance limitation that exists on dynamic white light interferometric position sensors.
Encryption for Remote Control via Internet or Intranet
NASA Technical Reports Server (NTRS)
Lineberger, Lewis
2005-01-01
A data-communication protocol has been devised to enable secure, reliable remote control of processes and equipment via a collision-based network, while using minimal bandwidth and computation. The network could be the Internet or an intranet. Control is made secure by use of both a password and a dynamic key, which is sent transparently to a remote user by the controlled computer (that is, the computer, located at the site of the equipment or process to be controlled, that exerts direct control over the process). The protocol functions in the presence of network latency, overcomes errors caused by missed dynamic keys, and defeats attempts by unauthorized remote users to gain control. The protocol is not suitable for real-time control, but is well suited for applications in which control latencies up to about 0.5 second are acceptable. The encryption scheme involves the use of both a dynamic and a private key, without any additional overhead that would degrade performance. The dynamic key is embedded in the equipment- or process-monitor data packets sent out by the controlled computer: in other words, the dynamic key is a subset of the data in each such data packet. The controlled computer maintains a history of the last 3 to 5 data packets for use in decrypting incoming control commands. In addition, the controlled computer records a private key (password) that is given to the remote computer. The encrypted incoming command is permuted by both the dynamic and private key. A person who records the command data in a given packet for hostile purposes cannot use that packet after the dynamic key expires (typically within 3 seconds). Even a person in possession of an unauthorized copy of the command/remote-display software cannot use that software in the absence of the password. The use of a dynamic key embedded in the outgoing data makes the central-processing-unit overhead very small. The use of a National Instruments DataSocket(TradeMark) (or equivalent) protocol or the User Datagram Protocol makes it possible to obtain reasonably short response times: Typical response times in event-driven control, using packets sized about 300 bytes, are <0.2 second for commands issued from locations anywhere on Earth. The protocol requires that control commands represent absolute values of controlled parameters (e.g., a specified temperature), as distinguished from changes in values of controlled parameters (e.g., a specified increment of temperature). Each command is issued three or more times to ensure delivery in crowded networks. The use of absolute-value commands prevents additional (redundant) commands from causing trouble. Because a remote controlling computer receives "talkback" in the form of data packets from the controlled computer, typically within a time interval ≤1 s, the controlling computer can re-issue a command if network failure has occurred. The controlled computer, the process or equipment that it controls, and any human operator(s) at the site of the controlled equipment or process should be equipped with safety measures to prevent damage to equipment or injury to humans. These measures could be a combination of software, external hardware, and intervention by the human operator(s). The protocol is not fail-safe, but by adopting these safety measures as part of the protocol, one makes the protocol a robust means of controlling remote processes and equipment by use of typical office computers via intranets and/or the Internet.
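The mechanics described, a rolling dynamic key broadcast inside telemetry packets combined with a static private key to permute absolute-value commands, can be sketched compactly. Everything below, from the hash-based keystream to the names, is a hypothetical illustration, not the actual NASA implementation.

```python
# Hypothetical sketch of a dynamic-plus-private-key command scheme in the
# spirit of the protocol described above.
import hashlib

PRIVATE_KEY = b"shared-password"  # provisioned out of band, never transmitted

def keystream(dynamic_key: bytes, length: int) -> bytes:
    """Derive a keystream by hashing the private and dynamic keys together."""
    stream, counter = b"", 0
    while len(stream) < length:
        stream += hashlib.sha256(PRIVATE_KEY + dynamic_key + bytes([counter])).digest()
        counter += 1
    return stream[:length]

def encrypt_command(command: bytes, dynamic_key: bytes) -> bytes:
    """XOR-permute an absolute-value command (e.g. b'SET TEMP 72.0')."""
    ks = keystream(dynamic_key, len(command))
    return bytes(c ^ k for c, k in zip(command, ks))

decrypt_command = encrypt_command  # XOR is its own inverse

# The controlled computer would accept a command only if it decrypts sensibly
# under one of the last 3-5 dynamic keys it has broadcast, so a recorded
# packet becomes useless once its key expires.
```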
SYNCHROTRON ORIGIN OF THE TYPICAL GRB BAND FUNCTION—A CASE STUDY OF GRB 130606B
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Bin-Bin; Briggs, Michael S.; Uhm, Z. Lucas
2016-01-10
We perform a time-resolved spectral analysis of GRB 130606B within the framework of a fast-cooling synchrotron radiation model with magnetic field strength in the emission region decaying with time, as proposed by Uhm and Zhang. The data from all time intervals can be successfully fit by the model. The same data can be equally well fit by the empirical Band function with typical parameter values. Our results, which involve only minimal physical assumptions, offer one natural solution to the origin of the observed GRB spectra and imply that at least some, if not all, Band-like GRB spectra with typical Band parameter values can indeed be explained by synchrotron radiation.
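For reference, the empirical Band function mentioned above has a standard closed form; the sketch below uses generic "typical" parameter values rather than fits from this paper.

```python
import numpy as np

def band(E, A, alpha, beta, E0):
    """Empirical Band photon spectrum N(E) in photons/(s cm^2 keV); E in keV."""
    Eb = (alpha - beta) * E0  # break energy where the two branches join smoothly
    E = np.asarray(E, dtype=float)
    low = A * (E / 100.0) ** alpha * np.exp(-E / E0)
    high = (A * ((alpha - beta) * E0 / 100.0) ** (alpha - beta)
            * np.exp(beta - alpha) * (E / 100.0) ** beta)
    return np.where(E < Eb, low, high)

E = np.logspace(0, 4, 200)                                    # 1 keV - 10 MeV
spectrum = band(E, A=0.01, alpha=-1.0, beta=-2.3, E0=300.0)   # "typical" values
```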
Kralisch, Dana; Streckmann, Ina; Ott, Denise; Krtschil, Ulich; Santacesaria, Elio; Di Serio, Martino; Russo, Vincenzo; De Carlo, Lucrezia; Linhart, Walter; Christian, Engelbert; Cortese, Bruno; de Croon, Mart H J M; Hessel, Volker
2012-02-13
The simple transfer of established chemical production processes from batch to flow chemistry does not automatically result in more sustainable ones. Detailed process understanding and the motivation to scrutinize known process conditions are necessary factors for success. Although the focus is usually "only" on intensifying transport phenomena to operate under intrinsic kinetics, there is also a large intensification potential in chemistry under harsh conditions and in the specific design of flow processes. Such an understanding and proposed processes are required at an early stage of process design because decisions on the best-suited tools and parameters required to convert green engineering concepts into practice (typically with little chance of substantial changes later) are made during this period. Herein, we present a holistic and interdisciplinary process design approach that combines the concept of novel process windows with process modeling, simulation, and simplified cost and lifecycle assessment for the deliberate development of a cost-competitive and environmentally sustainable alternative to an existing production process for epoxidized soybean oil. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Xiao, X; Bai, B; Xu, N; Wu, K
2015-04-01
Oversegmentation is a major drawback of the morphological watershed algorithm. Here, we study and reveal that the oversegmentation arises not only from the irregular shapes of the particle images, with which people are familiar, but also because some particles, such as ellipses, have more than one centre. A new parameter, the striping level, is introduced, and a criterion for the striping parameter is built to help find the right markers prior to segmentation. An adaptive striping watershed algorithm is established by applying a procedure, called the marker searching algorithm, to find the markers, which can effectively suppress the oversegmentation. The effectiveness of the proposed method is validated by analysing some typical particle images, including images of gold nanorod ensembles. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
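The marker-before-flooding idea can be illustrated with a standard marker-controlled watershed; the striping-level criterion itself is specific to the paper, so ordinary distance-transform peaks stand in for it here.

```python
# Marker-controlled watershed: choosing markers before flooding suppresses
# oversegmentation of particles that have more than one intensity "centre".
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_particles(binary_image: np.ndarray) -> np.ndarray:
    """Label particles in a binary (foreground=True) image."""
    distance = ndi.distance_transform_edt(binary_image)
    # One marker per sufficiently separated peak, instead of one per local maximum.
    peaks = peak_local_max(distance, min_distance=10, labels=binary_image)
    markers = np.zeros(binary_image.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary_image)
```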
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2016-12-01
Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
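A toy conjugate-model calculation, not the paper's tide-gauge field model, captures the central point: fixing a variance parameter at a point estimate (empirical Bayes) yields systematically narrower intervals than integrating over its uncertainty (full Bayes).

```python
# Empirical vs full Bayes for a normal mean with unknown variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=0.0, scale=1.0, size=15)
n, ybar, s = len(y), y.mean(), y.std(ddof=1)

# Empirical Bayes: variance treated as known and fixed at its estimate.
eb = stats.norm.interval(0.95, loc=ybar, scale=s / np.sqrt(n))

# Full Bayes (Jeffreys prior): posterior for the mean is Student-t.
fb = stats.t.interval(0.95, df=n - 1, loc=ybar, scale=s / np.sqrt(n))

print("EB interval width  :", eb[1] - eb[0])
print("Full interval width:", fb[1] - fb[0])  # always wider for small n
```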
Preserved reward outcome processing in ASD as revealed by event-related potentials.
McPartland, James C; Crowley, Michael J; Perszyk, Danielle R; Mukerji, Cora E; Naples, Adam J; Wu, Jia; Mayes, Linda C
2012-05-31
Problems with reward system function have been posited as a primary difficulty in autism spectrum disorders. The current study examined an electrophysiological marker of feedback monitoring, the feedback-related negativity (FRN), during a monetary reward task. The study advanced prior understanding by focusing exclusively on a developmental sample, applying rigorous diagnostic characterization and introducing an experimental paradigm providing more subtly different feedback valence (reward versus non-reward instead of reward versus loss). Twenty-six children with autism spectrum disorder and 28 typically developing peers matched on age and full-scale IQ played a guessing game resulting in monetary gain ("win") or neutral outcome ("draw"). ERP components marking early visual processing (N1, P2) and feedback appraisal (FRN) were contrasted between groups in each condition, and their relationships to behavioral measures of social function and dysfunction, social anxiety, and autism symptomatology were explored. FRN was observed on draw trials relative to win trials. Consistent with prior research, children with ASD exhibited a FRN to suboptimal outcomes that was comparable to typical peers. ERP parameters were unrelated to behavioral measures. Results of the current study indicate typical patterns of feedback monitoring in the context of monetary reward in ASD. The study extends prior findings of normative feedback monitoring to a sample composed exclusively of children and demonstrates that, as in typical development, individuals with autism exhibit a FRN to suboptimal outcomes, irrespective of neutral or negative valence. Results do not support a pervasive problem with reward system function in ASD, instead suggesting any dysfunction lies in more specific domains, such as social perception, or in response to particular feedback-monitoring contexts, such as self-evaluation of one's errors.
Three-Dimensional Imaging of the Mouse Organ of Corti Cytoarchitecture for Mechanical Modeling
NASA Astrophysics Data System (ADS)
Puria, Sunil; Hartman, Byron; Kim, Jichul; Oghalai, John S.; Ricci, Anthony J.; Liberman, M. Charles
2011-11-01
Cochlear models typically use continuous anatomical descriptions and homogenized parameters based on two-dimensional images for describing the organ of Corti. To produce refined models based more closely on the actual cochlear cytoarchitecture, three-dimensional morphometric parameters of key mechanical structures are required. Towards this goal, we developed and compared three different imaging methods: (1) A fixed cochlear whole-mount preparation using the fluorescent dye Cellmask®, which is a molecule taken up by cell membranes and clearly delineates Deiters' cells, outer hair cells, and the phalangeal process, imaged using confocal microscopy; (2) An in situ fixed preparation with hair cells labeled using anti-prestin and supporting structures labeled using phalloidin, imaged using two-photon microscopy; and (3) A membrane-tomato (mT) mouse with fluorescent proteins expressed in all cell membranes, which enables two-photon imaging of an in situ live preparation with excellent visualization of the organ of Corti. Morphometric parameters including lengths, diameters, and angles, were extracted from 3D cellular surface reconstructions of the resulting images. Preliminary results indicate that the length of the phalangeal processes decreases from the first (inner most) to third (outer most) row of outer hair cells, and that their length also likely varies from base to apex and across species.
NASA Astrophysics Data System (ADS)
Schulze, Martin H.; Heuer, Henning
2012-04-01
Carbon fiber based materials are used in many lightweight applications in aeronautical, automotive, machine and civil engineering. With the increasing automation in the production process of CFRP laminates, a manual optical inspection of each resin transfer molding (RTM) layer is not practicable. Because they are limited to surface inspection, optical systems cannot observe the quality parameters of multilayer three-dimensional materials. Imaging Eddy-Current (EC) NDT is the only suitable inspection method for non-resin materials in the textile state that allows an inspection of surface and hidden layers in parallel. The HF-ECI method has the capability to measure layer displacements (misaligned angle orientations) and gap sizes in a multilayer carbon fiber structure. The EC technique uses the variation of the electrical conductivity of carbon based materials to obtain material properties. Besides the determination of textural parameters like layer orientation and gap sizes between rovings, the method can detect foreign polymer particles and fuzzy balls, and can visualize undulations. For all of these typical parameters, an imaging classification process chain based on a high-resolution directional EC-imaging device named EddyCus® MPECS and a 2D-FFT with adapted preprocessing algorithms is developed.
Measurement accuracy of a stressed contact lens during its relaxation period
NASA Astrophysics Data System (ADS)
Compertore, David C.; Ignatovich, Filipp V.
2018-02-01
We examine the dioptric power and transmitted wavefront of a contact lens as it releases its handling stresses. Handling stresses are introduced as part of the contact lens loading process and are common across all contact lens measurement procedures and systems. The latest advances in vision correction require tighter quality control during the manufacturing of contact lenses. The optical power of contact lenses is one of the critical characteristics for users. Power measurements are conducted in the hydrated state, where the lens rests inside a solution-filled glass cuvette. In a typical approach, the contact lens must be subject to long settling times prior to any measurements. Alternatively, multiple measurements must be averaged. Apart from the potential operator dependency of such an approach, it is extremely time-consuming and therefore precludes higher rates of testing. Comprehensive knowledge about the settling process can be obtained by monitoring multiple parameters of the lens simultaneously. We have developed a system that combines a co-aligned Shack-Hartmann transmitted-wavefront sensor and a time-domain low-coherence interferometer to measure several optical and physical parameters (power, cylinder power, aberrations, center thickness, sagittal depth, and diameter) simultaneously. We monitor these parameters during the stress relaxation period and show correlations that can be used by manufacturers to devise methods for improved quality control procedures.
Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physico-chemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only correctly fits the experimental spectra, but whose elements have a mechanistic physical meaning. In order to obtain the aforementioned electric equivalent circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A two-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
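The fitting step can be illustrated with a single generic Randles-type circuit, a series resistance in front of a parallel RC pair; the paper compares 12 physically motivated circuits, so this one is only a stand-in.

```python
# Fit a simple equivalent circuit Z = R_s + R_ct/(1 + j*w*R_ct*C_dl) to an
# impedance spectrum by stacking real and imaginary residuals.
import numpy as np
from scipy.optimize import least_squares

def z_model(params, omega):
    r_s, r_ct, c_dl = params
    return r_s + r_ct / (1.0 + 1j * omega * r_ct * c_dl)

def residuals(params, omega, z_meas):
    dz = z_model(params, omega) - z_meas
    return np.concatenate([dz.real, dz.imag])

omega = 2 * np.pi * np.logspace(-1, 4, 60)        # rad/s
z_true = z_model([0.02, 0.15, 0.5], omega)        # synthetic "measurement"
fit = least_squares(residuals, x0=[0.01, 0.1, 1.0],
                    args=(omega, z_true), bounds=(0, np.inf))
print(fit.x)  # recovered R_s, R_ct, C_dl
```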
A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth
Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai
2017-01-01
State observer is an essential component in computerized control loops for greenhouse-crop systems. However, the current accomplishments of observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically made based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them together with 8 environmental parameters to build growth state observers. Support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracies of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA. The current study can enable state observers to reflect crop requirements and make them feasible for applications in simplified forms, which is significant for developing intelligent greenhouse control systems for modern greenhouse production. PMID:28848565
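A compact sketch of the strategy, CCA to pick dominant variates from the two views followed by an SVM regressor as the observer, might look as follows; the data shapes and values are placeholders, not the study's measurements.

```python
# CCA feature selection + SVM regression as a simplified growth-state observer.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X_env = rng.normal(size=(120, 8))    # 8 environmental parameters per sample
X_phys = rng.normal(size=(120, 10))  # 10 physiological parameters per sample
growth = rng.normal(size=120)        # observed growth state (placeholder)

# Project both views onto their two most correlated directions.
cca = CCA(n_components=2)
env_c, phys_c = cca.fit_transform(X_env, X_phys)
features = np.hstack([env_c, phys_c])  # simplified observer inputs

observer = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
observer.fit(features, growth)
print(observer.predict(features[:3]))
```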
Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model
Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.
2013-01-01
One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874
Bari, Quazi H; Koenig, Albert
2012-11-01
The aeration rate is a key process control parameter in the forced-aeration composting process because it greatly affects different physico-chemical parameters such as temperature and moisture content, and indirectly influences the biological degradation rate. In this study, the effect of a constant airflow rate on vertical temperature distribution and organic waste degradation in the composting mass is analyzed using a previously developed mathematical model of the composting process. The model was applied to analyze the effect of two different ambient conditions, namely hot and cold, and four different airflow rates (1.5, 3.0, 4.5, and 6.0 m(3) m(-2) h(-1)) on the temperature distribution and organic waste degradation in a given waste mixture. The typical waste mixture had 59% moisture content and 96% volatile solids; however, the proportions could be varied as required. The results suggested that the model could be efficiently used to analyze composting under variable ambient and operating conditions. A lower airflow rate of around 1.5-3.0 m(3) m(-2) h(-1) was found to be suitable for cold ambient conditions, while a higher airflow rate of around 4.5-6.0 m(3) m(-2) h(-1) was preferable for hot ambient conditions. The application of this model is flexible, allowing changes in any input parameter within a realistic range. It can be widely used for conceptual process design, studies on the effect of ambient conditions, optimization studies in existing composting plants, and process control. Copyright © 2012 Elsevier Ltd. All rights reserved.
Ecophysiological parameters for Pacific Northwest trees.
Amy E. Hessl; Cristina Milesi; Michael A. White; David L. Peterson; Robert E. Keane
2004-01-01
We developed a species- and location-specific database of published ecophysiological variables typically used as input parameters for biogeochemical models of coniferous and deciduous forested ecosystems in the Western United States. Parameters are based on the requirements of Biome-BGC, a widely used biogeochemical model that was originally parameterized for the...
KEWPIE2: A cascade code for the study of dynamical decay of excited nuclei
NASA Astrophysics Data System (ADS)
Lü, Hongliang; Marchix, Anthony; Abe, Yasuhisa; Boilley, David
2016-03-01
KEWPIE, a cascade code devoted to investigating the dynamical decay of excited nuclei, specially designed for treating the very low probability events related to the synthesis of super-heavy nuclei formed in fusion-evaporation reactions, has been improved and rewritten in the C++ programming language to become KEWPIE2. The current version of the code comprises various nuclear models concerning light-particle emission, the fission process and the statistical properties of excited nuclei. General features of the code, such as the numerical scheme and the main physical ingredients, are described in detail. Typical calculations performed in the present paper clearly show that theoretical predictions are generally in accordance with experimental data. Furthermore, since the values of some input parameters can be determined neither theoretically nor experimentally, a sensitivity analysis is presented. To this end, we systematically investigate the effects of using different parameter values and reaction models on the final results. As expected, in the case of heavy nuclei, the fission process plays the most crucial role in theoretical predictions. This work should prove essential for the numerical modeling of fusion-evaporation reactions.
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
This paper outlines methods for modeling, identification and estimation for static shape determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum likelihood, which finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data is processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating performance of the shape determination methods.
NASA Astrophysics Data System (ADS)
McNamara, J. P.; Semenova, O.; Restrepo, P. J.
2011-12-01
Highly instrumented research watersheds provide excellent opportunities for investigating hydrologic processes. A danger, however, is that the processes observed at a particular research watershed are too specific to that watershed and are not representative even of the larger watershed that contains it. Thus, models developed from those partial observations may not be suitable for general hydrologic use. Demonstrating the upscaling of hydrologic processes from research watersheds to larger watersheds is therefore essential to validate concepts and test model structure. The Hydrograph model has been developed as a general-purpose, process-based, distributed hydrologic system. In its applications and further development we evaluate the scaling of model concepts and parameters across a wide range of hydrologic landscapes. All models, whether lumped or distributed, are based on a discretization concept. It is common practice to discretize watersheds into so-called hydrologic units or hydrologic landscapes assumed to have homogeneous hydrologic functioning. If a model structure is fixed, differences in hydrologic functioning (differences in hydrologic landscapes) should be reflected by specific sets of model parameters. Research watersheds make it possible to combine processes in reasonable detail into typical hydrologic concepts such as the hydrologic units, hydrologic forms, and runoff formation complexes of the Hydrograph model. Here, by upscaling we mean not the upscaling of a single process but the upscaling of such unified hydrologic functioning. The simulation of runoff processes for the Dry Creek research watershed, Idaho, USA (27 km2) was undertaken using the Hydrograph model. The information on the watershed was provided by Boise State University and included a GIS database of watershed characteristics and a detailed hydrometeorological observational dataset. The model provided good simulation results in terms of runoff and the variable states of soil and snow over the simulation period 2000-2009. The parameters of the model were hand-adjusted based on rational judgment, observational data and the available understanding of the underlying processes. For the first run, some processes, such as the impact of riparian vegetation on runoff and streamflow/groundwater interaction, were handled in a conceptual way. It was shown that the use of the Hydrograph model, which requires a modest amount of parameter calibration, may also serve as a quality control for observations. Based on the obtained parameter values and the process understanding gained at the research watershed, the model was applied to larger watersheds located in a similar environment: the Boise River at South Fork (1660 km2) and at Twin Springs (2155 km2). The evaluation of the results of this upscaling will be presented.
NASA Technical Reports Server (NTRS)
Schwan, Karsten
1994-01-01
Atmospheric modeling is a grand challenge problem for several reasons, including its inordinate computational requirements and its generation of large amounts of data concurrent with its use of very large data sets derived from measurement instruments like satellites. In addition, atmospheric models are typically run several times, on new data sets or to reprocess existing data sets, to investigate or reinvestigate specific chemical or physical processes occurring in the earth's atmosphere, to understand model fidelity with respect to observational data, or simply to experiment with specific model parameters or components.
Dynamics of a neuron model in different two-dimensional parameter-spaces
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2011-03-01
We report some two-dimensional parameter-space diagrams obtained numerically for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that regions close to this chaotic region, separated by the comb teeth, organize themselves in period-adding bifurcation cascades.
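Diagrams of this kind are assembled by scanning a parameter plane; the sketch below integrates the Hindmarsh-Rose equations over a small (b, I) grid and uses interspike-interval spread as a crude periodic-versus-chaotic indicator, whereas the paper uses proper dynamical diagnostics. The grid, thresholds and initial condition are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, u, b, I, a=1.0, c=1.0, d=5.0, r=0.006, s=4.0, xr=-1.6):
    x, y, z = u
    return [y - a * x**3 + b * x**2 - z + I,
            c - d * x**2 - y,
            r * (s * (x - xr) - z)]

def isi_spread(b, I):
    sol = solve_ivp(hindmarsh_rose, (0.0, 1500.0), [-1.0, 0.0, 2.0],
                    args=(b, I), max_step=0.1)
    x, t = sol.y[0], sol.t
    up = t[1:][(x[:-1] < 1.0) & (x[1:] >= 1.0)]  # upward spike threshold crossings
    isi = np.diff(up[up > 300.0])                # discard the transient
    return float(np.std(isi)) if isi.size > 2 else 0.0

# Large ISI spread suggests chaos; near-zero spread suggests periodic spiking.
diagram = {(b, I): isi_spread(b, I)
           for b in np.linspace(2.6, 3.2, 5) for I in np.linspace(2.8, 3.4, 5)}
```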
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A
A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double-layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. This formulation, based on a volume-averaging process, is a widely used concept for the multiphase flow equations ([28] [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22] [39] [12]. This formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for explicit interfacial boundary conditions as previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing any upscaled parameters from lower length-scale simulations and experiments. Because of the irregular geometric configurations, including the porous electrode, the charge transport and subsequent performance characteristics of the supercapacitor can be captured more readily in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and into where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrated how this new formulation can be applied to non-planar supercapacitors.
Melo, Roberta Michelon; Mota, Helena Bolli; Berti, Larissa Cristina
2017-06-08
This study used acoustic and articulatory analyses to characterize the contrast between alveolar and velar stops in typical speech data, comparing the acoustic and articulatory parameters of adults and of children with typical speech development. The sample consisted of 20 adults and 15 children with typical speech development. The analyzed corpus was organized through five repetitions of each target word (/'kapə/, /'tapə/, /'galo/ and /'daɾə/). These words were inserted into a carrier phrase and the participant was asked to name them spontaneously. Simultaneous audio and video data were recorded (tongue ultrasound images). The data were submitted to acoustic analyses (voice onset time; spectral peak and burst spectral moments; vowel/consonant transition and relative duration measures) and articulatory analyses (proportion of significant axes of the anterior and posterior tongue regions and description of tongue curves). Acoustic and articulatory parameters were effective in indicating the contrast between alveolar and velar stops, mainly in the adult group. Both speech analyses showed statistically significant differences between the two groups. The acoustic and articulatory parameters provided signals that characterize the phonic contrast of speech. One of the main findings in the comparison between adult and child speech was evidence of articulatory refinement/maturation even after the period of segment acquisition.
VESGEN Software for Mapping and Quantification of Vascular Regulators
NASA Technical Reports Server (NTRS)
Parsons-Wingerter, Patricia A.; Vickerman, Mary B.; Keith, Patricia A.
2012-01-01
VESsel GENeration (VESGEN) Analysis is automated software that maps and quantifies effects of vascular regulators on vascular morphology by analyzing important vessel parameters. Quantification parameters include vessel diameter, length, branch points, density, and fractal dimension. For vascular trees, measurements are reported as dependent functions of vessel branching generation. VESGEN maps and quantifies vascular morphological events according to fractal-based vascular branching generation. It also relies on careful imaging of branching and networked vascular form. It was developed as a plug-in for ImageJ (National Institutes of Health, USA). VESGEN uses image-processing concepts of 8-neighbor pixel connectivity, skeleton, and distance map to analyze 2D, black-and-white (binary) images of vascular trees, networks, and tree-network composites. VESGEN maps typically 5 to 12 (or more) generations of vascular branching, starting from a single parent vessel. These generations are tracked and measured for critical vascular parameters that include vessel diameter, length, density and number, and tortuosity per branching generation. The effects of vascular therapeutics and regulators on vascular morphology and branching tested in human clinical or laboratory animal experimental studies are quantified by comparing vascular parameters with control groups. VESGEN provides a user interface to both guide and allow control over the user's vascular analysis process. An option is provided to select a morphological tissue type of vascular trees, network or tree-network composites, which determines the general collections of algorithms, intermediate images, and output images and measurements that will be produced.
Decadal water quality variations at three typical basins of Mekong, Murray and Yukon
NASA Astrophysics Data System (ADS)
Khan, Afed U.; Jiang, Jiping; Wang, Peng
2018-02-01
The decadal distribution of water quality parameters is essential for surface water management. A decadal distribution analysis was conducted to assess decadal variations in water quality parameters at three typical watersheds: the Murray, Mekong and Yukon. Rightward distribution shifts were observed for phosphorus and nitrogen parameters at the Mekong watershed monitoring sites, while leftward shifts were noted at the Murray and Yukon monitoring sites. Nutrient pollution increases with time at the Mekong watershed and decreases at the Murray and Yukon watershed monitoring stations. The results imply that a watershed located in a densely populated developing area has a higher risk of water quality deterioration than one in a thinly populated developed area. The present study suggests best management practices at the watershed scale to modulate water pollution.
Does the cognitive reflection test measure cognitive reflection? A mathematical modeling approach.
Campitelli, Guillermo; Gerrans, Paul
2014-04-01
We used a mathematical modeling approach, based on a sample of 2,019 participants, to better understand what the cognitive reflection test (CRT; Frederick, Journal of Economic Perspectives, 19, 25-42, 2005) measures. This test, which is typically completed in less than 10 min, contains three problems and aims to measure the ability or disposition to resist reporting the response that first comes to mind. However, since the test contains three mathematically based problems, it is possible that the test only measures mathematical abilities, and not cognitive reflection. We found that the models that included an inhibition parameter (i.e., the probability of inhibiting an intuitive response), as well as a mathematical parameter (i.e., the probability of using an adequate mathematical procedure), fitted the data better than a model that only included a mathematical parameter. We also found that the inhibition parameter in males is best explained by both rational thinking ability and the disposition toward actively open-minded thinking, whereas in females this parameter was better explained by rational thinking only. With these findings, this study contributes to the understanding of the processes involved in solving the CRT, and will be particularly useful for researchers who are considering using this test in their research.
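The modeling logic lends itself to a compact multinomial-processing-tree illustration. In the toy sketch below, an item is answered correctly only when the intuitive response is inhibited (probability i) and an adequate mathematical procedure is then applied (probability m); the response counts are invented and the paper's actual models are richer than this.

```python
# Toy two-parameter tree: the three observable response categories are
#   correct answer          -> i * m
#   intuitive wrong answer  -> 1 - i
#   other wrong answer      -> i * (1 - m)
import numpy as np
from scipy.optimize import minimize

counts = np.array([900, 800, 319])  # [correct, intuitive error, other error]

def neg_loglik(params):
    i, m = params
    p = np.array([i * m, 1 - i, i * (1 - m)])
    return -(counts * np.log(np.clip(p, 1e-12, 1.0))).sum()

fit = minimize(neg_loglik, x0=[0.5, 0.5], bounds=[(1e-6, 1 - 1e-6)] * 2)
i_hat, m_hat = fit.x  # separate inhibition and mathematical-ability estimates
```

Because the intuitive error is distinguishable from other errors, the two parameters are separately identifiable, which is what lets such a model be compared against a mathematics-only rival.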
On robust parameter estimation in brain-computer interfacing
NASA Astrophysics Data System (ADS)
Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert
2017-12-01
Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
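One common minimum-divergence construction down-weights samples by their Mahalanobis distance; the sketch below is in that spirit but is not the structured estimator derived in the paper, and a consistency correction for the covariance is omitted for brevity.

```python
# Robust mean/covariance by soft down-weighting of outlier trials;
# beta -> 0 recovers the ordinary sample estimates.
import numpy as np

def robust_mean_cov(X: np.ndarray, beta: float = 0.2, n_iter: int = 20):
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(n_iter):
        diff = X - mu
        maha = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
        w = np.exp(-0.5 * beta * maha)  # outlying trials get small weights
        w /= w.sum()
        mu = w @ X
        diff = X - mu
        cov = (w[:, None] * diff).T @ diff  # consistency correction omitted
    return mu, cov

# Example: a few gross-outlier trials barely move the robust estimate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[:10] += 20.0
mu_hat, cov_hat = robust_mean_cov(X)
```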
Precision ephemerides for gravitational-wave searches - III. Revised system parameters of Sco X-1
NASA Astrophysics Data System (ADS)
Wang, L.; Steeghs, D.; Galloway, D. K.; Marsh, T.; Casares, J.
2018-06-01
Neutron stars in low-mass X-ray binaries are considered promising candidate sources of continuous gravitational waves. These neutron stars are typically rotating many hundreds of times a second. The process of accretion can potentially generate and support non-axisymmetric distortions to the compact object, resulting in persistent emission of gravitational waves. We present a study of existing optical spectroscopic data for Sco X-1, a prime target for continuous gravitational-wave searches, with the aim of providing revised constraints on key orbital parameters required for a directed search with advanced-LIGO data. From a circular orbit fit to an improved radial velocity curve of the Bowen emission components, we derived an updated orbital period and ephemeris. Centre of symmetry measurements from the Bowen Doppler tomogram yield a centre of the disc component of 90 km s-1, which we interpret as a revised upper limit to the projected orbital velocity of the neutron star, K1. By implementing Monte Carlo binary parameter calculations, and imposing new limits on K1 and the rotational broadening, we obtained a complete set of dynamical system parameter constraints including a new range for K1 of 40-90 km s-1. Finally, we discussed the implications of the updated orbital parameters for future continuous-wave searches.
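The circular-orbit fit at the heart of the K1 constraint reduces to a three-parameter sinusoid. The sketch below uses synthetic velocities and assumed parameter values for illustration; none of the numbers are the paper's measurements.

```python
# Fit v(t) = gamma + K1 * sin(2*pi*(t - t0)/P) to radial-velocity data.
import numpy as np
from scipy.optimize import curve_fit

P = 0.787  # assumed orbital period in days (illustrative value)

def rv(t, gamma, K1, t0):
    return gamma + K1 * np.sin(2 * np.pi * (t - t0) / P)

rng = np.random.default_rng(3)
t_obs = np.sort(rng.uniform(0.0, 30.0, 80))
v_obs = rv(t_obs, -114.0, 75.0, 0.3) + rng.normal(0.0, 8.0, 80)  # km/s

popt, pcov = curve_fit(rv, t_obs, v_obs, p0=[-100.0, 60.0, 0.0])
gamma_fit, K1_fit, t0_fit = popt
K1_err = np.sqrt(pcov[1, 1])  # K1 and its error feed the search constraints
```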
NASA Astrophysics Data System (ADS)
Preuss, R.
2014-12-01
This article discusses the current capabilities of automated processing of image data, using Agisoft's PhotoScan software as an example. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or, more often, on UAVs is used to create photogrammetric products. Multiple registrations of an object or land area (capturing large groups of photos) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image-matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, can provide image georeferencing in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image-matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software that divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps with predetermined control parameters. The paper presents the practical results of fully automatic generation of orthomosaics for both images obtained by a metric Vexell camera and a block of images acquired by a non-metric UAV system.
Thermal homogeneity of plastication processes in single-screw extruders
NASA Astrophysics Data System (ADS)
Bu, L. X.; Agbessi, Y.; Béreaux, Y.; Charmeau, J.-Y.
2018-05-01
Single-screw plastication, used in extrusion and in injection moulding, is a major way of processing commodity thermoplastics. During the plastication phase, the polymeric material is melted by the combined effects of shear-induced self-heating (viscous dissipation) and heat conduction coming from the barrel. In injection moulding, a high level of reliability is usually achieved, which makes this process ideally suited to mass-market production. Nonetheless, process fluctuations still appear that make moulded-part quality control an everyday issue. In this work, we used a combined modelling of plastication, throughput calculation and laminar dispersion to investigate if, and how, thermal fluctuations could propagate along the screw length and affect the melt homogeneity at the end of the metering section. To do this, we used plastication models to relate changes in processing parameters to changes in the plastication length. Moreover, a simple model of throughput calculation is used to relate the screw geometry, the polymer rheology and the processing parameters to get a good estimate of the mass flow rate. Hence, we found that the typical residence time in a single screw is around one tenth of the thermal diffusion time scale. This residence time is too short for the dispersion coefficient to reach a steady state, but too long to be able to neglect radial thermal diffusion and resort to a purely convective solution. Therefore, a full diffusion/convection problem has to be solved with a base flow described by the classic pressure and drag velocity field. Preliminary results already show the major importance of the processing parameters in the breakthrough curve of an arbitrary temperature fluctuation at the end of the metering section of the injection-moulding screw. When the flow back-pressure is high, the temperature fluctuation is spread more evenly in time, whereas a pressure drop in the flow results in a breakthrough curve that presents a larger peak of fluctuation.
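The time-scale comparison can be reproduced with back-of-the-envelope numbers; the channel depth, melt velocity and diffusivity below are typical assumed values, not figures taken from this work.

```python
# Residence time vs radial thermal diffusion time in a metering channel.
alpha = 1e-7            # thermal diffusivity of a polymer melt, m^2/s (typical)
h = 3e-3                # metering channel depth, m (assumed)
t_diff = h**2 / alpha   # radial diffusion time ~ 90 s

L = 1.0                 # unwound metering-channel length, m (assumed)
v = 0.1                 # mean down-channel melt velocity, m/s (assumed)
t_res = L / v           # residence time ~ 10 s

print(t_res / t_diff)   # ~0.1: too short to homogenize, too long to ignore diffusion
```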
Digital transceiver implementation for wavelet packet modulation
NASA Astrophysics Data System (ADS)
Lindsey, Alan R.; Dill, Jeffrey C.
1998-03-01
Current transceiver designs for wavelet-based communication systems are typically reliant on analog waveform synthesis; however, digital processing is an important part of the eventual success of these techniques. In this paper, a transceiver implementation is introduced for the recently proposed wavelet packet modulation scheme which moves the analog processing as far as possible toward the antenna. The transceiver is based on the discrete wavelet packet transform, which incorporates level and node parameters for generalized computation of wavelet packets. In this transform no particular structure is imposed on the filter bank save dyadic branching and a maximum level, which is specified a priori and depends mainly on speed and/or cost considerations. The transmitter/receiver structure takes a binary sequence as input and, based on the desired time-frequency partitioning, processes the signal through demultiplexing, synthesis, analysis, multiplexing and data determination completely in the digital domain, with the exception of conversion in and out of the analog domain for transmission.
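A fully digital round trip of this kind can be sketched with PyWavelets; the Haar filter, periodization mode and antipodal bit mapping below are illustrative choices, not the implementation described in the paper.

```python
# Bits -> leaf coefficients of a depth-3 wavelet packet tree -> waveform
# (synthesis at the transmitter), then analysis + thresholding at the receiver.
from itertools import product

import numpy as np
import pywt

LEVEL, WAVELET, MODE = 3, "haar", "periodization"
leaves = ["".join(p) for p in product("ad", repeat=LEVEL)]  # 8 leaf paths

bits = np.random.default_rng(5).integers(0, 2, size=64)
frame = (2.0 * bits - 1.0).reshape(len(leaves), -1)  # antipodal symbols, 8 per leaf

# Transmitter: load one symbol block per leaf node, then synthesize.
tx = pywt.WaveletPacket(data=None, wavelet=WAVELET, mode=MODE, maxlevel=LEVEL)
for path, row in zip(leaves, frame):
    tx[path] = row
waveform = tx.reconstruct(update=False)

# Receiver: analyze the waveform and threshold the leaf coefficients.
rx = pywt.WaveletPacket(data=waveform, wavelet=WAVELET, mode=MODE, maxlevel=LEVEL)
recovered = (np.concatenate([rx[p].data for p in leaves]) > 0).astype(int)
assert np.array_equal(recovered, bits)  # exact for this orthogonal transform
```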
Rapid Thermal Processing to Enhance Steel Toughness.
Judge, V K; Speer, J G; Clarke, K D; Findley, K O; Clarke, A J
2018-01-11
Quenching and Tempering (Q&T) has been utilized for decades to alter steel mechanical properties, particularly strength and toughness. While tempering typically increases toughness, a well-established phenomenon called tempered martensite embrittlement (TME) is known to occur during conventional Q&T. Here we show that short-time, rapid tempering can overcome TME to produce unprecedented property combinations that cannot be attained by conventional Q&T. Toughness is enhanced over 43% at a strength level of 1.7 GPa and strength is improved over 0.5 GPa at an impact toughness of 30 J. We also show that hardness and the tempering parameter (TP), developed by Hollomon and Jaffe in 1945 and ubiquitous within the field, are insufficient for characterizing measured strengths, toughnesses, and microstructural conditions after rapid processing. Rapid tempering by energy-saving manufacturing processes like induction heating creates the opportunity for new Q&T steels for energy, defense, and transportation applications.
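The tempering parameter in question has the classic Hollomon-Jaffe form TP = T(C + log10 t). The quick comparison below (temperatures, times and C are generic textbook-style values, not the paper's data) shows how a short, hotter cycle can match the TP of a conventional temper even though, as the abstract notes, the resulting properties need not match.

```python
import math

def tempering_parameter(T_kelvin: float, t_hours: float, C: float = 20.0) -> float:
    """Hollomon-Jaffe tempering parameter TP = T * (C + log10(t))."""
    return T_kelvin * (C + math.log10(t_hours))

conventional = tempering_parameter(723.0, 1.0)         # 450 C for 1 h
rapid = tempering_parameter(823.0, 10.0 / 3600.0)      # 550 C for 10 s
print(conventional, rapid)  # nearly equal TP, yet very different microstructures
```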
Bounds on the Coupling of the Majoron to Light Neutrinos from Supernova Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farzan, Yasaman
2002-12-02
We explore the role of Majoron (J) emission in the supernova cooling process as a source of an upper bound on the neutrino-Majoron coupling. We show that the strongest upper bound on the coupling to ν3 comes from the νeνe → J process in the core of a supernova. We also find bounds on diagonal couplings of the Majoron to νμ(τ)νμ(τ) and on off-diagonal νeνμ(τ) couplings in various regions of the parameter space. We discuss the evaluation of the cross-section for four-particle interactions (νν → JJ and νJ → νJ). We show that these are typically dominated by three-particle sub-processes and do not give new independent constraints.
Göhler, Daniel; Stintz, Michael; Hillemann, Lars; Vorbau, Manuel
2010-01-01
Nanoparticles are used in industrial and domestic applications to control customized product properties. But there are several uncertainties concerning possible hazards to health, safety and the environment. Hence, it is necessary to search for methods to analyze the particle release from typical application processes. Based on a survey of commercial sanding machines, the relevant sanding process parameters were employed for the design of a miniature sanding test setup in a particle-free environment for the quantification of the nanoparticle release into air from surface coatings. The released particles were moved by a defined airflow to a fast mobility particle sizer and other aerosol measurement equipment to enable the determination of released particle numbers in addition to the particle size distribution. First results revealed a strong impact of the coating material on the swarf mass and the number of released particles. PMID:20696941
NASA Technical Reports Server (NTRS)
Domack, Marcia S.; Taminger, Karen M. B.; Begley, Matthew
2006-01-01
The electron beam freeform fabrication (EBF3) layer-additive manufacturing process has been developed to directly fabricate complex geometry components. EBF3 introduces metal wire into a molten pool created on the surface of a substrate by a focused electron beam. Part geometry is achieved by translating the substrate with respect to the beam to build the part one layer at a time. Tensile properties have been demonstrated for electron beam deposited aluminum and titanium alloys that are comparable to wrought products, although the microstructures of the deposits exhibit features more typical of cast material. Understanding the metallurgical mechanisms controlling mechanical properties is essential to maximizing application of the EBF3 process. In the current study, mechanical properties and resulting microstructures were examined for aluminum alloy 2219 fabricated over a range of EBF3 process variables. Material performance was evaluated based on tensile properties and results were compared with properties of Al 2219 wrought products. Unique microstructures were observed within the deposited layers and at interlayer boundaries, which varied within the deposit height due to microstructural evolution associated with the complex thermal history experienced during subsequent layer deposition. Microstructures exhibited irregularly shaped grains, typically with interior dendritic structures, which were described based on overall grain size, morphology, distribution, and dendrite spacing, and were correlated with deposition parameters. Fracture features were compared with microstructural elements to define fracture paths and aid in definition of basic processing-microstructure-property correlations.
Jeong, Chanyoung; Choi, Chang-Hwan
2012-02-01
Conventional electrochemical anodizing processes of metals such as aluminum typically produce planar and homogeneous nanopore structures. If hydrophobically treated, such 2D planar and interconnected pore structures typically result in lower contact angle and larger contact angle hysteresis than 3D disconnected pillar structures and, hence, exhibit inferior superhydrophobic efficiency. In this study, we demonstrate for the first time that the anodizing parameters can be engineered to design novel pillar-on-pore (POP) hybrid nanostructures directly in a simple one-step fabrication process so that superior surface superhydrophobicity can also be realized effectively from the electrochemical anodization process. On the basis of the characteristic of forming a self-ordered porous morphology in a hexagonal array, the modulation of anodizing voltage and duration enabled the formulation of the hybrid-type nanostructures having controlled pillar morphology on top of a porous layer in both mild and hard anodization modes. The hybrid nanostructures of the anodized metal oxide layer initially enhanced the surface hydrophilicity significantly (i.e., superhydrophilic). However, after a hydrophobic monolayer coating, such hybrid nanostructures then showed superior superhydrophobic nonwetting properties not attainable by the plain nanoporous surfaces produced by conventional anodization conditions. The well-regulated anodization process suggests that electrochemical anodizing can expand its usefulness and efficacy to render various metallic substrates with great superhydrophilicity or -hydrophobicity by directly realizing pillar-like structures on top of a self-ordered nanoporous array through a simple one-step fabrication procedure.
On the Utility of the Molecular Oxygen Dayglow Emissions as Proxies for Middle Atmospheric Ozone
NASA Technical Reports Server (NTRS)
Mlynczak, Martin G.; Olander, Daphne S.
1995-01-01
Molecular oxygen dayglow emissions arise in part from processes related to the Hartley band photolysis of ozone. It is therefore possible to derive daytime ozone concentrations from measurements of the volume emission rate of either dayglow. The accuracy to which the ozone concentration can be inferred depends on the accuracy to which numerous kinetic and spectroscopic rate constants are known, including rates which describe the excitation of molecular oxygen by processes that are not related to the ozone concentration. We find that several key rate constants must be known to better than 7 percent accuracy in order to achieve an inferred ozone concentration accurate to 15 percent from measurements of either dayglow. Currently, accuracies for various parameters typically range from 5 to 100 percent.
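The accuracy budget described above can be illustrated with a simple propagation-of-uncertainty sketch: if the retrieved ozone depends roughly multiplicatively on each rate constant, independent fractional errors combine in quadrature. The parameter names and uncertainty values below are illustrative placeholders, not the specific rates assessed in the paper.

```python
import numpy as np

# Hypothetical fractional (1-sigma) uncertainties for the kinetic and
# spectroscopic parameters entering a dayglow-based ozone retrieval;
# the names and numbers are placeholders, not the paper's values.
param_uncertainty = {
    "photolysis_quantum_yield": 0.05,
    "o2_excitation_rate": 0.07,
    "quenching_rate_constant": 0.10,
    "einstein_a_coefficient": 0.05,
}

# Assuming near-linear sensitivity and independent errors, the fractional
# ozone error is the quadrature sum of the parameter errors.
total = np.sqrt(sum(u**2 for u in param_uncertainty.values()))
print(f"combined fractional ozone uncertainty: {total:.1%}")
```

On this assumption, keeping each key rate constant below roughly 7 percent error is what keeps the quadrature sum near the 15 percent target quoted above.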
Plasma-gun-assisted field-reversed configuration formation in a conical θ-pinch
Weber, T. E.; Intrator, T. P.; Smith, R. J.
2015-04-29
We show that injection of plasma via an annular array of coaxial plasma guns during the pre-ionization phase of field-reversed configuration (FRC) formation catalyzes the bulk ionization of a neutral gas prefill in the presence of a strong axial magnetic field and changes the character of outward flux flow during field reversal from a convective process to a much slower resistive diffusion process. Our approach has been found to significantly improve FRC formation in a conical θ-pinch, resulting in a ~350% increase in trapped flux at typical operating conditions, an expansion of accessible formation parameter space to lower densities and higher temperatures, and a reduction or elimination of several deleterious effects associated with the pre-ionization phase.
Data acquisition and analysis in the DOE/NASA Wind Energy Program
NASA Technical Reports Server (NTRS)
Neustadter, H. E.
1980-01-01
Four categories of data systems, each responding to a distinct information need, are presented. The categories are: control, technology, engineering and performance. The focus is on the technology data system, which consists of the following elements: sensors which measure critical parameters such as wind speed and direction, output power, blade loads and strains, and tower vibrations; remote multiplexing units (RMU) mounted on each wind turbine which frequency modulate, multiplex and transmit sensor outputs; the instrumentation available to record, process and display these signals; and centralized computer analysis of data. The RMU characteristics and multiplexing techniques are presented. Data processing is illustrated by following a typical signal through instruments such as the analog tape recorder, analog to digital converter, data compressor, digital tape recorder, video (CRT) display, and strip chart recorder.
Diagnostic and prognostic role of semantic processing in preclinical Alzheimer's disease.
Venneri, Annalena; Jahn-Carta, Caroline; Marco, Matteo De; Quaranta, Davide; Marra, Camillo
2018-06-13
Relatively spared during most of the timeline of normal aging, semantic memory shows a subtle yet measurable decline even during the pre-clinical stage of Alzheimer's disease. This decline is thought to reflect early neurofibrillary changes and impairment is detectable using tests of language relying on lexical-semantic abilities. A promising approach is the characterization of semantic parameters such as typicality and age of acquisition of words, and propositional density from verbal output. Seminal research like the Nun Study or the analysis of the linguistic decline of famous writers and politicians later diagnosed with Alzheimer's disease supports the early diagnostic value of semantic processing and semantic memory. Moreover, measures of these skills may play an important role for the prognosis of patients with mild cognitive impairment.
Numerical quasi-linear study of the critical ionization velocity phenomenon
NASA Technical Reports Server (NTRS)
Moghaddam-Taaheri, E.; Goertz, C. K.
1993-01-01
The critical ionization velocity (CIV) for a neutral barium (Ba) gas cloud moving across the static magnetic field is studied numerically using quasi-linear equations and a parameter range which is typical for the shaped-charge Ba gas release experiments in space. For consistency the charge exchange between the background oxygen ions and neutral atoms and its reverse process, as well as the excitation of the neutral Ba atoms, are included. The numerical results indicate that when the ionization rate due to CIV becomes comparable to the charge exchange rate the energy lost to the ionization and excitation collisions by the superthermal electrons exceeds the energy gain from the waves that are excited by the ion beam. This results in a CIV yield less than the yield by the charge exchange process.
Handling Input and Output for COAMPS
NASA Technical Reports Server (NTRS)
Fitzpatrick, Patrick; Tran, Nam; Li, Yongzuo; Anantharaj, Valentine
2007-01-01
Two suites of software have been developed to handle the input and output of the Coupled Ocean Atmosphere Prediction System (COAMPS), which is a regional atmospheric model developed by the Navy for simulating and predicting weather. Typically, the initial and boundary conditions for COAMPS are provided by a flat-file representation of the Navy's global model. Additional algorithms are needed for running the COAMPS software using global models. One of the present suites satisfies this need for running COAMPS using the Global Forecast System (GFS) model of the National Oceanic and Atmospheric Administration. The first step in running COAMPS, downloading GFS data from an Internet file-transfer-protocol (FTP) server computer of the National Centers for Environmental Prediction (NCEP), is performed by one of the programs (SSC-00273) in this suite. The GFS data, which are in gridded binary (GRIB) format, are then changed to a COAMPS-compatible format by another program in the suite (SSC-00278). Once a forecast is complete, still another program in the suite (SSC-00274) sends the output data to a different server computer. The second suite of software (SSC-00275) addresses the need to ingest up-to-date land-use-and-land-cover (LULC) data into COAMPS for use in specifying typical climatological values of such surface parameters as albedo, aerodynamic roughness, and ground wetness. This suite includes (1) a program to process LULC data derived from observations by the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Terra and Aqua satellites, (2) programs to derive new climatological parameters for the 17-land-use-category MODIS data, and (3) a modified version of a FORTRAN subroutine to be used by COAMPS. The MODIS data files are processed to reformat them into a compressed American Standard Code for Information Interchange (ASCII) format used by COAMPS for efficient processing.
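As a rough illustration of the first step performed by SSC-00273, the sketch below downloads one GFS cycle's GRIB files over FTP. The host name, directory layout, and file names are hypothetical; the abstract does not specify them.

```python
from ftplib import FTP
from pathlib import Path

# Hypothetical host and path; the actual NCEP server layout and GRIB file
# names used by SSC-00273 are not given in the abstract.
HOST = "ftp.example.ncep.noaa.gov"
REMOTE_DIR = "/pub/data/nccf/com/gfs/prod/gfs.20070101"
FILES = ["gfs.t00z.pgrbf00", "gfs.t00z.pgrbf06"]

def fetch_gfs(dest: Path) -> None:
    """Download a cycle's GRIB files, mirroring the suite's first step."""
    dest.mkdir(parents=True, exist_ok=True)
    with FTP(HOST) as ftp:
        ftp.login()                    # anonymous login
        ftp.cwd(REMOTE_DIR)
        for name in FILES:
            with open(dest / name, "wb") as fh:
                ftp.retrbinary(f"RETR {name}", fh.write)

fetch_gfs(Path("./gfs_input"))
```

The subsequent GRIB-to-COAMPS conversion (SSC-00278) would then operate on the downloaded files.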
Experimental discrimination of ion stopping models near the Bragg peak in highly ionized matter.
Cayzac, W; Frank, A; Ortner, A; Bagnoud, V; Basko, M M; Bedacht, S; Bläser, C; Blažević, A; Busold, S; Deppert, O; Ding, J; Ehret, M; Fiala, P; Frydrych, S; Gericke, D O; Hallo, L; Helfrich, J; Jahn, D; Kjartansson, E; Knetsch, A; Kraus, D; Malka, G; Neumann, N W; Pépitone, K; Pepler, D; Sander, S; Schaumann, G; Schlegel, T; Schroeter, N; Schumacher, D; Seibert, M; Tauschwitz, An; Vorberger, J; Wagner, F; Weih, S; Zobus, Y; Roth, M
2017-06-01
The energy deposition of ions in dense plasmas is a key process in inertial confinement fusion that determines the α-particle heating expected to trigger a burn wave in the hydrogen pellet and result in high thermonuclear gain. However, measurements of ion stopping in plasmas are scarce and mostly restricted to high ion velocities where theory agrees with the data. Here, we report experimental data at low projectile velocities near the Bragg peak, where the stopping force reaches its maximum. This parameter range features the largest theoretical uncertainties, and conclusive data have been missing until now. The precision of our measurements, combined with a reliable knowledge of the plasma parameters, allows us to disprove several standard models for the stopping power for beam velocities typically encountered in inertial fusion. On the other hand, our data support theories that include a detailed treatment of strong ion-electron collisions.
Micromotion-enabled improvement of quantum logic gates with trapped ions
NASA Astrophysics Data System (ADS)
Bermudez, Alejandro; Schindler, Philipp; Monz, Thomas; Blatt, Rainer; Müller, Markus
2017-11-01
The micromotion of ion crystals confined in Paul traps is usually considered an inconvenient nuisance, and is thus typically minimized in high-precision experiments such as high-fidelity quantum gates for quantum information processing (QIP). In this work, we introduce a particular scheme where this behavior can be reversed, making micromotion beneficial for QIP. We show that using laser-driven micromotion sidebands, it is possible to engineer state-dependent dipole forces with a reduced effect of off-resonant couplings to the carrier transition. This allows one, in a certain parameter regime, to devise entangling gate schemes based on geometric phase gates with both a higher speed and a lower error, which is attractive in light of current efforts towards fault-tolerant QIP. We discuss the prospects of reaching the parameters required to observe this micromotion-enabled improvement in experiments with current and future trap designs.
Denis-Alpizar, Otoniel; Bemish, Raymond J; Meuwly, Markus
2017-03-21
Vibrational energy relaxation (VER) of diatomics following collisions with the surrounding medium is an important elementary process for modeling high-temperature gas flow. VER is characterized by two parameters: the vibrational relaxation time τ_vib and the state relaxation rates. Here the vibrational relaxation of CO(ν=0←ν=1) in Ar is considered for validating a computational approach to determine the vibrational relaxation time parameter (pτ_vib) using an accurate, fully dimensional potential energy surface. For lower temperatures, comparison with experimental data shows very good agreement, whereas at higher temperatures (up to 25 000 K), comparisons with an empirically modified model due to Park confirm its validity for CO in Ar. Additionally, the calculations provide insight into the importance of Δν>1 transitions that are ignored in typical applications of the Landau-Teller framework.
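For context, pτ_vib follows from the state-resolved rates via the standard Landau-Teller relation 1/τ_vib = n k_{1→0} (1 − exp(−θ_vib/T)), which makes the pressure-time product independent of pressure for an ideal gas. A minimal sketch, with an illustrative (not computed) rate coefficient:

```python
import numpy as np

KB = 1.380649e-23      # J/K, Boltzmann constant
THETA_CO = 3122.0      # K, characteristic vibrational temperature of CO

def p_tau_vib(T, k10):
    """p*tau_vib (Pa*s) from the 1->0 rate coefficient k10 (m^3/s),
    using the Landau-Teller relation
    1/tau_vib = n * k10 * (1 - exp(-theta/T)) with n = p/(kB*T)."""
    return KB * T / (k10 * (1.0 - np.exp(-THETA_CO / T)))

# Illustrative: a made-up rate coefficient, not the computed CO-Ar value.
print(p_tau_vib(2000.0, k10=1e-20))
```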
Auto-tuning for NMR probe using LabVIEW
NASA Astrophysics Data System (ADS)
Quen, Carmen; Pham, Stephanie; Bernal, Oscar
2014-03-01
The typical manual NMR-tuning method is not suitable for broadband spectra spanning several megahertz linewidths. Among the main problems encountered during manual tuning are pulse-power reproducibility, baselines, and transmission line reflections, to name a few. We present a design of an auto-tuning system using the graphical programming language LabVIEW to minimize these problems. The program uses a simplified model of the NMR probe conditions near perfect tuning to mimic the tuning process and predict the position of the capacitor shafts needed to achieve the desired impedance. The tuning capacitors of the probe are controlled by stepper motors through a LabVIEW/computer interface. Our program calculates the effective capacitance needed to tune the probe and provides controlling parameters to advance the motors in the right direction. The impedance reading of a network analyzer can be used to correct the model parameters in real time for feedback control.
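A minimal sketch of the feedback loop described above, in Python rather than LabVIEW. The instrument-facing functions read_impedance and step_motor are hypothetical stand-ins for the network-analyzer and stepper-motor interfaces, and the proportional correction is a placeholder for the simplified near-match probe model of the abstract.

```python
TARGET_Z = 50.0      # ohm, matched input impedance
TOL = 0.5            # ohm, acceptance window

def autotune(read_impedance, step_motor, gain=1.0, max_iter=50):
    """Iteratively drive the tuning capacitors until the probe is matched.

    `read_impedance()` returns a complex impedance from the analyzer;
    `step_motor(channel, amount)` advances a capacitor shaft. Both are
    hypothetical hardware hooks standing in for the LabVIEW layer."""
    for _ in range(max_iter):
        z = read_impedance()
        if abs(z.real - TARGET_Z) < TOL and abs(z.imag) < TOL:
            return True                               # tuned and matched
        # Simplified near-match model: capacitance corrections taken as
        # proportional to the impedance error (placeholder for the paper's
        # probe model, whose parameters would be updated in real time).
        step_motor("tune", -gain * z.imag)            # cancel reactance
        step_motor("match", -gain * (z.real - TARGET_Z))
    return False
```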
Cosmology and accelerator tests of strongly interacting dark matter
Berlin, Asher; Blinov, Nikita; Gori, Stefania; ...
2018-03-23
A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. We also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.
NASA Technical Reports Server (NTRS)
Tolson, Robert H.; Lugo, Rafael A.; Baird, Darren T.; Cianciolo, Alicia D.; Bougher, Stephen W.; Zurek, Richard M.
2017-01-01
The Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft is a NASA orbiter designed to explore the Mars upper atmosphere, typically from 140 to 160 km altitude. In addition to the nominal science mission, MAVEN has performed several Deep Dip campaigns in which the orbit's closest point of approach, also called periapsis, was lowered to an altitude range of 115 to 135 km. MAVEN accelerometer data were used during mission operations to estimate atmospheric parameters such as density, scale height, along-track gradients, and wave structures. Density and scale height estimates were compared against those obtained from the Mars Global Reference Atmospheric Model and used to aid the MAVEN navigation team in planning maneuvers to raise and lower periapsis during Deep Dip operations. This paper describes the processes used to reconstruct atmosphere parameters from accelerometer data and presents the results of their comparison to model and navigation-derived values.
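A common way to reconstruct density from accelerometer data is through the drag equation, with the scale height then obtained from the slope of log-density versus altitude. The spacecraft constants below are round illustrative numbers, not MAVEN's actual mass or drag properties.

```python
import numpy as np

def density_from_drag(a_drag, v, m=2500.0, cd=2.2, area=20.0):
    """Atmospheric density from the drag relation a = 0.5*rho*v^2*Cd*A/m.
    Mass, Cd, and reference area are illustrative placeholders."""
    return 2.0 * m * a_drag / (cd * area * v**2)

def scale_height(alt_km, rho):
    """Fit ln(rho) vs altitude; H = -1/slope for a near-isothermal layer."""
    slope, _ = np.polyfit(alt_km, np.log(rho), 1)
    return -1.0 / slope

# Synthetic periapsis pass: exponential atmosphere with H = 10 km
alt = np.linspace(140, 160, 50)
rho = 1e-9 * np.exp(-(alt - 140) / 10.0)
print(scale_height(alt, rho))   # recovers ~10 km
```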
Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.
Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S
2012-11-01
One of the most popular and simple models for the calculation of pKa values from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKa values. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKa values; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKa values. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa values in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Javed, Hassan; Armstrong, Peter
2015-08-01
The efficiency bar for a Minimum Equipment Performance Standard (MEPS) generally aims to minimize energy consumption and life cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant impact on chiller performance. Performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. Isentropic efficiency for a screw compressor is strongly affected by under- and over-compression (UOC) processes. The theoretical simple physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model and a bi-cubic, used to account for flow, friction, and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak), and the bi-cubic coefficients are identified by curve fitting to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is computed from the hourly loads (from a typical building in the climate of interest) and the chiller's specific power at the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish proposed UAE MEPS. Annual electricity use and cost, determined from SEER and annual cooling load, and chiller component cost data are used to find optimal chiller designs and perform a life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.
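Assuming the usual ratio definition of a seasonal efficiency (total cooling delivered divided by total electric input over the season), the SEER computation reduces to a few lines once the performance map supplies the specific power at each hour's conditions. The hourly profile and map values below are made up for illustration.

```python
import numpy as np

def seer(hourly_load_kw, specific_power_kw_per_kw):
    """Seasonal efficiency as total cooling over total electric input.
    `specific_power` is chiller input power per unit of cooling at each
    hour's conditions, as a performance map would provide."""
    load = np.asarray(hourly_load_kw, dtype=float)
    power = load * np.asarray(specific_power_kw_per_kw, dtype=float)
    return load.sum() / power.sum()

# Illustrative four-hour profile (kW cooling) and map output (kW/kW):
load = [100.0, 120.0, 150.0, 90.0]
spec = [0.22, 0.25, 0.28, 0.20]
print(seer(load, spec))
```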
Nonlinear extension of a hemodynamic linear model for coherent hemodynamics spectroscopy.
Sassaroli, Angelo; Kainerstorfer, Jana M; Fantini, Sergio
2016-01-21
In this work, we propose an extension of a recent hemodynamic model (Fantini, 2014a), which was developed within the framework of a novel approach to the study of tissue hemodynamics, named coherent hemodynamics spectroscopy (CHS). The previous hemodynamic model, from a signal processing viewpoint, treats the tissue microvasculature as a linear time-invariant system, and considers changes of blood volume, capillary blood flow velocity and the rate of oxygen diffusion as inputs, and the changes of oxy-, deoxy-, and total hemoglobin concentrations (measured in near infrared spectroscopy) as outputs. The model has been used also as a forward solver in an inversion procedure to retrieve quantitative parameters that assess physiological and biological processes such as microcirculation, cerebral autoregulation, tissue metabolic rate of oxygen, and oxygen extraction fraction. Within the assumption of "small" capillary blood flow velocity oscillations the model showed that the capillary and venous compartments "respond" to this input as low pass filters, characterized by two distinct impulse response functions. In this work, we do not make the assumption of "small" perturbations of capillary blood flow velocity by solving without approximations the partial differential equation that governs the spatio-temporal behavior of hemoglobin saturation in capillary and venous blood. Preliminary comparisons between the linear time-invariant model and the extended model (here identified as the nonlinear model) are shown for the relevant parameters measured in CHS as a function of the oscillation frequency (CHS spectra). We have found that for capillary blood flow velocity oscillations with amplitudes up to 10% of the baseline value (which reflect typical scenarios in CHS), the discrepancies between CHS spectra obtained with the linear and nonlinear models are negligible. For larger oscillations (~50%) the linear and nonlinear models yield CHS spectra with differences within typical experimental errors, but further investigation is needed to assess the effect of these differences. Flow oscillations larger than 10-20% are not typically induced in CHS; therefore, the results presented in this work indicate that a linear hemodynamic model, combined with a method to elicit controlled hemodynamic oscillations (as done for CHS), is appropriate for the quantitative assessment of cerebral microcirculation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ultrasonic friction power during Al wire wedge-wedge bonding
NASA Astrophysics Data System (ADS)
Shah, A.; Gaul, H.; Schneider-Ramelow, M.; Reichl, H.; Mayer, M.; Zhou, Y.
2009-07-01
Al wire bonding, also called ultrasonic wedge-wedge bonding, is a microwelding process used extensively in the microelectronics industry for interconnections to integrated circuits. The bonding wire used is a 25 μm diameter AlSi1 wire. A friction power model is used to derive the ultrasonic friction power during Al wire bonding. Auxiliary measurements include the current delivered to the ultrasonic transducer, the vibration amplitude of the bonding tool tip in free air, and the ultrasonic force acting on the bonding pad during the bond process. The ultrasonic force measurement is like a signature of the bond as it allows for a detailed insight into mechanisms during various phases of the process. It is measured using piezoresistive force microsensors integrated close to the Al bonding pad (Al-Al process) on a custom-made test chip. A clear break-off in the force signal is observed, which is followed by a relatively constant force for a short duration. A large second harmonic content is observed, describing a nonsymmetric deviation of the signal waveform from the sinusoidal shape. This deviation might be due to the reduced geometrical symmetry of the wedge tool. For bonds made with typical process parameters, several characteristic values used in the friction power model are determined. The ultrasonic compliance of the bonding system is 2.66 μm/N. A typical maximum value of the relative interfacial amplitude of ultrasonic friction is at least 222 nm. The maximum interfacial friction power is at least 11.5 mW, which is only about 4.8% of the total electrical power delivered to the ultrasonic generator.
Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira
2015-09-17
In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of contractile properties of MUs, and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals, using digital image processing techniques. The proposed approach is demonstrated and evaluated, using both simulated and experimentally-acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image-processing-based approaches may be useful in S-EMG analysis to extract different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV for individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies.
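For orientation, the quantity being estimated is a delay between spatially separated channels divided into an inter-electrode distance. The sketch below uses a plain cross-correlation delay estimator on synthetic signals; it is a simple baseline, not the MLE method or the image-processing algorithm of the paper.

```python
import numpy as np

def conduction_velocity(ch1, ch2, fs, electrode_dist_m):
    """Estimate CV from the delay between two S-EMG channels using the
    peak of their cross-correlation."""
    ch1 = ch1 - ch1.mean()
    ch2 = ch2 - ch2.mean()
    xcorr = np.correlate(ch2, ch1, mode="full")
    lag = np.argmax(xcorr) - (len(ch1) - 1)   # samples; positive: ch2 lags
    return electrode_dist_m / (lag / fs)

# Synthetic test: a MUAP-like pulse propagating at 4 m/s
fs, dist = 2048, 0.01                  # Hz, 10 mm inter-electrode distance
t = np.arange(0, 0.1, 1 / fs)
pulse = lambda t0: np.exp(-((t - t0) / 0.002) ** 2)
print(conduction_velocity(pulse(0.03), pulse(0.03 + dist / 4.0), fs, dist))
```

On this synthetic signal the estimator returns approximately 4.1 m/s; the small bias comes from rounding the delay to whole samples, which sub-sample interpolation would remove.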
Some Memories are Odder than Others: Judgments of Episodic Oddity Violate Known Decision Rules
O’Connor, Akira R.; Guhl, Emily N.; Cox, Justin C.; Dobbins, Ian G.
2011-01-01
Current decision models of recognition memory are based almost entirely on one paradigm, single item old/new judgments accompanied by confidence ratings. This task results in receiver operating characteristics (ROCs) that are well fit by both signal-detection and dual-process models. Here we examine an entirely new recognition task, the judgment of episodic oddity, whereby participants select the mnemonically odd members of triplets (e.g., a new item hidden among two studied items). Using the only two known signal-detection rules of oddity judgment derived from the sensory perception literature, the unequal variance signal-detection model predicted that an old item among two new items would be easier to discover than a new item among two old items. In contrast, four separate empirical studies demonstrated the reverse pattern: triplets with two old items were the easiest to resolve. This finding was anticipated by the dual-process approach as the presence of two old items affords the greatest opportunity for recollection. Furthermore, a bootstrap-fed Monte Carlo procedure using two independent datasets demonstrated that the dual-process parameters typically observed during single item recognition correctly predict the current oddity findings, whereas unequal variance signal-detection parameters do not. Episodic oddity judgments represent a case where dual- and single-process predictions qualitatively diverge and the findings demonstrate that novelty is “odder” than familiarity. PMID:22833695
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrena, R.; Canovas, C.; Sanchez, A.
2006-07-01
A macroscopic non-steady state energy balance was developed and solved for a composting pile of source-selected organic fraction of municipal solid waste during the maturation stage (13,500 kg of compost). Simulated temperature profiles correlated well with temperature experimental data (ranging from 50 to 70 deg. C) obtained during the maturation process for more than 50 days at full scale. Thermal inertia effect usually found in composting plants and associated to the stockpiling of large composting masses could be predicted by means of this simplified energy balance, which takes into account terms of convective, conductive and radiation heat dissipation. Heat losses in a large composting mass are not significant due to the similar temperatures found at the surroundings and at the surface of the pile (ranging from 15 to 40 deg. C). In contrast, thermophilic temperature in the core of the pile was maintained during the whole maturation process. Heat generation was estimated with the static respiration index, a parameter that is typically used to monitor the biological activity and stability of composting processes. In this study, the static respiration index is presented as a parameter to estimate the metabolic heat that can be generated according to the biodegradable organic matter content of a compost sample, which can be useful in predicting the temperature of the composting process.
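A lumped single-node version of such an energy balance already reproduces the thermal-inertia behaviour described above. The coefficients below (heat capacity, loss coefficient, respiration-derived heat rate) are illustrative placeholders, and the model collapses the convective, conductive, and radiative terms of the paper into a single loss term.

```python
import numpy as np

def simulate_pile(days=50, m=13500.0, cp=2500.0, ua=350.0,
                  t_amb=25.0, q_per_kg=1.0, t0=55.0, dt_h=1.0):
    """Single-node energy balance m*cp*dT/dt = Q_bio - UA*(T - T_amb).
    Q_bio is metabolic heat estimated from a respiration-type index
    (here a flat, illustrative 1 W per kg of compost)."""
    q_bio = q_per_kg * m                 # W
    n = int(days * 24 / dt_h)
    T = np.empty(n)
    T[0] = t0
    for i in range(1, n):
        dTdt = (q_bio - ua * (T[i - 1] - t_amb)) / (m * cp)
        T[i] = T[i - 1] + dTdt * dt_h * 3600.0
    return T

T = simulate_pile()
print(f"core temperature after 50 days: {T[-1]:.1f} deg C")
```

With these numbers the pile relaxes only very slowly toward its balance point near 64 deg C, staying thermophilic throughout the run, which is the thermal-inertia effect the full model captures with spatially resolved loss terms.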
Laser-Assisted Cold-Sprayed Corrosion- and Wear-Resistant Coatings: A Review
NASA Astrophysics Data System (ADS)
Olakanmi, E. O.; Doyoyo, M.
2014-06-01
The laser-assisted cold spray (LACS) process will be increasingly employed for depositing coatings because of its unique advantages: solid-state deposition of dense, homogeneous, and pore-free coatings onto a range of substrates, and a high build rate at reduced operating costs without the use of expensive heating and process inert gases. Depositing coatings with excellent performance indicators via LACS demands accurate knowledge and control of processing and material variables. By varying the LACS process parameters and their interactions, the functional properties of coatings can be manipulated. Moreover, thermal effects due to laser irradiation and microstructural evolution complicate the interpretation of the LACS mechanical deformation mechanism, which is essential for elucidating its physical phenomena. In order to provide a basis for follow-on research that leads to the development of high-productivity LACS processing of coatings, this review focuses on the latest developments in depositing corrosion- and wear-resistant coatings with the emphasis on the composition, structure, and mechanical and functional properties. Historical developments and fundamentals of LACS are addressed in an attempt to describe the physics behind the process. Typical technological applications of LACS coatings are also identified. The investigations of all process sequences, from laser irradiation of the powder-laden gas stream and the substrate, to the impingement of thermally softened particles on the deposition site, and subsequent further processes, are described. Existing gaps in the literature relating to LACS-dependent microstructural evolution, mechanical deformation mechanisms, correlation between functional properties and process parameters, processing challenges, and industrial applications have been identified in order to provide insights for further investigations and innovation in LACS deposition of wear- and corrosion-resistant coatings.
Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal
2017-12-01
Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. Nevertheless, the proposed algorithm is only expected to work well in small scale systems. In addition, the results of this study can be used to estimate kinetic parameter values in the stage of model selection for different experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
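To make the optimization setting concrete, here is a minimal plain PSO fitting the rate constants of a toy two-parameter model to synthetic data. This is the standard baseline algorithm only; the IPSOGSA hybrid of the paper adds gravitational-search moves that are not sketched here.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(cost, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO for kinetic-parameter fitting."""
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)          # keep particles in bounds
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()]
    return g, pcost.min()

# Toy 'pathway': fit rate constants k so the model output matches data.
true_k = np.array([0.8, 0.05])
t = np.linspace(0, 10, 40)
data = np.exp(-true_k[0] * t) + true_k[1] * t
cost = lambda k: np.sum((np.exp(-k[0] * t) + k[1] * t - data) ** 2)
print(pso(cost, lo=np.array([0.0, 0.0]), hi=np.array([2.0, 1.0])))
```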
Mathematical model of solid food pasteurization by ohmic heating: influence of process parameters.
Marra, Francesco
2014-01-01
Pasteurization of a solid food undergoing ohmic heating has been analysed by means of a mathematical model involving the simultaneous solution of Laplace's equation, which describes the distribution of electrical potential within a food; the heat transfer equation, using a source term involving the displacement of electrical potential; and the kinetics of inactivation of microorganisms likely to be contaminating the product. In the model, thermophysical and electrical properties as functions of temperature are used. Previous works have shown the occurrence of heat loss from food products to the external environment during ohmic heating. The current model predicts that, when temperature gradients are established in the proximity of the outer ohmic cell surface, more cold areas are present at junctions of the electrodes with the lateral sample surface. For these reasons, the colder external shells are the critical areas to be monitored, instead of internal points (typically the geometrical center) as in classical pure conductive heat transfer. The analysis is carried out in order to understand the influence of pasteurisation process parameters on this temperature distribution. A successful model helps to improve understanding of these processing phenomena, which in turn will help to reduce the magnitude of the temperature differential within the product and ultimately provide a more uniformly pasteurized product.
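A one-dimensional finite-difference sketch of the coupled electro-thermal problem helps fix ideas: with uniform conductivity, Laplace's equation gives a linear potential, the volumetric source is σ|∇V|², and colder zones persist where surface losses act. Property values are generic placeholders, not the paper's data, and the convective boundary condition is replaced here by a simple fixed cooler temperature at the electrode junctions.

```python
import numpy as np

L, n = 0.05, 101                        # m sample length, grid points
dx = L / (n - 1)
sigma, k, rho, cp = 2.0, 0.55, 1050.0, 3600.0   # generic food properties
V0 = 50.0                               # applied potential, V

V = np.linspace(0.0, V0, n)             # uniform sigma -> linear potential
q = sigma * np.gradient(V, dx) ** 2     # W/m^3, ohmic volumetric source

T = np.full(n, 20.0)                    # initial temperature, deg C
dt = 0.2 * rho * cp * dx**2 / k         # explicit-Euler stable time step
t = 0.0
while t < 60.0:                         # one minute of heating
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (k * lap + q[1:-1]) / (rho * cp)
    T[0] = T[-1] = 25.0                 # cooler electrode junctions (losses)
    t += dt
print(f"min {T.min():.1f} C at the surface, max {T.max():.1f} C inside")
```

Even this crude version shows the qualitative prediction above: the coldest, critical regions sit at the outer boundaries rather than at the geometrical center.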
Fast estimation of space-robots inertia parameters: A modular mathematical formulation
NASA Astrophysics Data System (ADS)
Nabavi Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher
2016-10-01
This work aims to propose a new technique that considerably helps enhance the time and precision needed to identify the "Inertia Parameters" (IPs) of a typical Autonomous Space-Robot (ASR). Operations might include capturing an unknown Target Space-Object (TSO), "active space-debris removal", or "automated in-orbit assemblies". In these operations, generating precise successive commands is essential to the success of the mission. We show how a generalized, repeatable estimation process could play an effective role in managing the operation. With the help of the well-known force-based approach, a new "modular formulation" has been developed to simultaneously identify the IPs of an ASR while it captures a TSO. The idea is to reorganize the equations with the associated IPs into a "modular set" of matrices instead of a single matrix representing the overall system dynamics. The devised modular matrix set then facilitates the estimation process. It provides a conjugate linear model in mass and inertia terms. The new formulation is, therefore, well suited for "simultaneous estimation processes" using recursive algorithms like RLS. Further enhancements would be needed for cases where the effect of the center of mass location becomes important. Extensive case studies reveal that the estimation time is drastically reduced, which in turn paves the way to acquiring better results.
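Because the formulation is linear in the mass and inertia terms, a recursive least-squares (RLS) estimator applies directly. The sketch below shows the generic RLS update on a toy linear system; the regressor that the paper's modular matrices would supply is replaced by random test vectors.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One RLS step for a model linear in the unknown parameters:
    y = phi @ theta + noise. `theta` stacks mass and inertia terms;
    `phi` is the regressor row built from measured motions/forces."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + (k * (y - phi.T @ theta)).ravel()
    P = (P - k @ phi.T @ P) / lam                  # covariance update
    return theta, P

# Toy system: 3 unknown parameters, noisy scalar measurements
rng = np.random.default_rng(0)
true = np.array([120.0, 4.5, 0.8])     # e.g. mass and two inertia terms
theta = np.zeros(3)
P = 1e6 * np.eye(3)
for _ in range(200):
    phi = rng.normal(size=3)
    y = phi @ true + rng.normal(scale=0.01)
    theta, P = rls_update(theta, P, phi, y)
print(theta)                           # converges to `true`
```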
How much expert knowledge is it worth to put in conceptual hydrological models?
NASA Astrophysics Data System (ADS)
Antonetti, Manuel; Zappa, Massimiliano
2017-04-01
Both modellers and experimentalists agree on using expert knowledge to improve conceptual hydrological simulations on ungauged basins. However, they use expert knowledge differently, both for hydrologically mapping the landscape and for parameterising a given hydrological model. Modellers generally use very simplified (e.g. topography-based) mapping approaches and invest most of their knowledge in constraining the model by defining parameter and process relational rules. In contrast, experimentalists tend to invest all their detailed and qualitative knowledge about processes to obtain a spatial distribution of areas with different dominant runoff generation processes (DRPs) that is as realistic as possible, and to define plausible narrow value ranges for each model parameter. Since, most of the time, the modelling goal is exclusively to simulate runoff at a specific site, even strongly simplified hydrological classifications can lead to satisfying results due to the equifinality of hydrological models, overfitting problems, and the numerous uncertainty sources affecting runoff simulations. Therefore, to test to what extent expert knowledge can improve simulation results under uncertainty, we applied a typical modellers' modelling framework, relying on parameter and process constraints defined based on expert knowledge, to several catchments on the Swiss Plateau. To map the spatial distribution of the DRPs, mapping approaches with increasing involvement of expert knowledge were used. Simulation results highlighted the potential added value of using all the expert knowledge available on a catchment. Combinations of event types and landscapes where even a simplified mapping approach can lead to satisfying results were also identified. Finally, the uncertainty originating from the different mapping approaches was compared with that linked to meteorological input data and catchment initial conditions.
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. (1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. (2) Based on the results from a didactical example of predicting rainfall runoff, we demonstrate that the copula captures the predictive uncertainty of the model. (3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions, and they could help us to better capture the statistical properties of errors and make more reliable predictions.
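A minimal sketch of the semiparametric construction: past model errors supply an empirical marginal, their normal scores supply the Gaussian dependence structure, and the copula log-density is evaluated with an AR(1)-style correlation matrix estimated from those scores. The lag-1 correlation structure is an assumption made here for brevity, not a detail taken from the abstract.

```python
import numpy as np
from scipy import stats

def copula_loglik(errors, past_errors):
    """Copula part of the log-likelihood of a residual vector, with the
    marginal learned empirically from past errors (semiparametric)."""
    srt = np.sort(past_errors)
    # Empirical-CDF probabilities of the new errors -> normal scores
    u = (np.searchsorted(srt, errors, side="right") + 0.5) / (len(srt) + 1)
    z = stats.norm.ppf(u)
    # Dependence: lag-1 autocorrelation of the past errors' normal scores
    ranks = np.argsort(np.argsort(past_errors))
    zp = stats.norm.ppf((ranks + 0.5) / len(past_errors))
    rho = np.corrcoef(zp[:-1], zp[1:])[0, 1]
    n = len(z)
    cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    # Gaussian copula density = MVN density minus independent-normal part
    return (stats.multivariate_normal.logpdf(z, np.zeros(n), cov)
            - stats.norm.logpdf(z).sum())

rng = np.random.default_rng(0)
x = np.zeros(520)
for i in range(1, len(x)):             # autocorrelated synthetic 'errors'
    x[i] = 0.7 * x[i - 1] + rng.normal()
print(copula_loglik(x[-20:], x[:-20]))
```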
Evaluation of an artificial intelligence guided inverse planning system: clinical case study.
Yan, Hui; Yin, Fang-Fang; Willett, Christopher
2007-04-01
An artificial intelligence (AI) guided method for parameter adjustment of inverse planning was implemented on a commercial inverse treatment planning system. For evaluation purposes, four typical clinical cases were tested and the results from plans achieved by the automated and manual methods were compared. The procedure of parameter adjustment mainly consists of three major loops. Each loop is in charge of modifying parameters of one category, which is carried out by a specially customized fuzzy inference system. Physician-prescribed multiple constraints for a selected volume were adopted to account for the tradeoff between the prescription dose to the PTV and dose-volume constraints for critical organs. The search for an optimal parameter combination began with the first constraint and proceeded to the next until a plan with an acceptable dose was achieved. The initial setup of the plan parameters was the same for each case and was adjusted independently by both the manual and automated methods. After the parameters of one category were updated, the intensity maps of all fields were re-optimized and the plan dose was subsequently re-calculated. When the final plan was reached, dose statistics were calculated from both plans and compared. For the planned target volume (PTV), the dose to 95% of the volume was up to 10% higher in plans using the automated method than in those using the manual method. For critical organs, an average decrease of the plan dose was achieved. However, the automated method cannot improve the plan dose for some critical organs due to limitations of the inference rules currently employed. For normal tissue, there was no significant difference between plan doses achieved by either the automated or the manual method. With the application of the AI-guided method, the basic parameter adjustment task can be accomplished automatically and a comparable plan dose was achieved in comparison with that achieved by the manual method. Future improvements to incorporate case-specific inference rules are essential to fully automate the inverse planning process.
Rotor Wake Vortex Definition: Initial Evaluation of 3-C PIV Results of the Hart-II Study
NASA Technical Reports Server (NTRS)
Burley, Casey L.; Brooks, Thomas F.; vanderWall, Berend; Richard, Hughes; Raffel, Markus; Beaumier, Philippe; Delrieux, Yves; Lim, Joon W.; Yu, Yung H.; Tung, Chee
2002-01-01
An initial evaluation is made of extensive three-component (3C) particle image velocimetry (PIV) measurements within the wake across a rotor disk plane. The model is a 40 percent scale BO-105 helicopter main rotor in forward flight simulation. This study is part of the HART II test program conducted in the German-Dutch Wind Tunnel (DNW). Included are wake vortex field measurements over the advancing and retreating sides of the rotor operating at a typical descent landing condition important for impulsive blade-vortex interaction (BVI) noise. Also included are advancing side results for rotor angle variations from climb to steep descent. Using detailed PIV vector maps of the vortex fields, methods of extracting key vortex parameters are examined and a new method was developed and evaluated. An objective processing method, involving a center-of-vorticity criterion and a vorticity 'disk' integration, was used to determine vortex core size, strength, core velocity distribution characteristics, and unsteadiness. These parameters are mapped over the rotor disk and offer unique physical insight for these parameters of importance for rotor noise and vibration prediction.
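The center-of-vorticity and disk-integration ideas translate directly into a few array operations on a PIV vorticity map. The sketch below is a simplified reading of that processing (the actual HART-II criteria, e.g. for unsteadiness, are not reproduced), demonstrated on a synthetic Gaussian vortex.

```python
import numpy as np

def vortex_parameters(x, y, omega, core_frac=0.5):
    """Center-of-vorticity, circulation, and a crude core radius from a
    2D vorticity map, in the spirit of the disk-integration processing."""
    w = np.abs(omega)
    xc = (x * w).sum() / w.sum()               # vorticity-weighted centroid
    yc = (y * w).sum() / w.sum()
    dA = (x[0, 1] - x[0, 0]) * (y[1, 0] - y[0, 0])
    gamma = omega.sum() * dA                   # total circulation
    # Core radius: disk around the center capturing core_frac of gamma
    r = np.hypot(x - xc, y - yc)
    order = np.argsort(r.ravel())
    cum = np.cumsum(omega.ravel()[order]) * dA
    rc = r.ravel()[order][np.searchsorted(cum, core_frac * gamma)]
    return xc, yc, gamma, rc

# Synthetic Gaussian vortex for a quick check
x, y = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
omega = np.exp(-((x - 0.1) ** 2 + (y + 0.2) ** 2) / 0.02)
print(vortex_parameters(x, y, omega))          # center near (0.1, -0.2)
```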
Investigating the Metallicity–Mixing-length Relation
NASA Astrophysics Data System (ADS)
Viani, Lucas S.; Basu, Sarbani; Joel Ong J., M.; Bonaca, Ana; Chaplin, William J.
2018-05-01
Stellar models typically use the mixing-length approximation as a way to implement convection in a simplified manner. While conventionally the value of the mixing-length parameter, α, used is the solar-calibrated value, many studies have shown that other values of α are needed to properly model stars. This uncertainty in the value of the mixing-length parameter is a major source of error in stellar models and isochrones. Using asteroseismic data, we determine the value of the mixing-length parameter required to properly model a set of about 450 stars ranging in log g, T_eff, and [Fe/H]. The relationship between the required value of α and the properties of the star is then investigated. For Eddington-atmosphere, non-diffusion models, we find that the value of α can be approximated by a linear model of the form α/α_⊙ = 5.426 − 0.101 log(g) − 1.071 log(T_eff) + 0.437 [Fe/H]. This process is repeated using a variety of model physics, as well as compared with previous studies and results from 3D convective simulations.
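Reading the fitted relation as a plain function (with g in cgs units inside log g and T_eff in kelvin, as is conventional), a quick sanity check at solar parameters should return a value near unity:

```python
from math import log10

def alpha_ratio(logg, teff, feh):
    """alpha/alpha_sun from the linear fit quoted above for
    Eddington-atmosphere, non-diffusion models."""
    return 5.426 - 0.101 * logg - 1.071 * log10(teff) + 0.437 * feh

# Solar-ish inputs: log g = 4.44 (cgs), Teff = 5772 K, [Fe/H] = 0
print(alpha_ratio(4.44, 5772.0, 0.0))   # ~0.95, close to 1 as expected
```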
Digital simulation of a communication link for Pioneer Saturn Uranus atmospheric entry probe, part 1
NASA Technical Reports Server (NTRS)
Hinrichs, C. A.
1975-01-01
A digital simulation study is presented for a candidate modulator/demodulator design in an atmospheric scintillation environment with Doppler, Doppler rate, and signal attenuation typical of the conditions of an outer planet atmospheric probe. The simulation results indicate that the mean channel error rate with and without scintillation are similar to theoretical characterizations of the link. The simulation gives information for calculating other channel statistics and generates a quantized symbol stream on magnetic tape from which error correction decoding is analyzed. Results from the magnetic tape data analyses are also included. The receiver and bit synchronizer are modeled in the simulation at the level of hardware component parameters rather than at the loop equation level, and individual hardware parameters are identified. The atmospheric scintillation amplitude and phase are modeled independently. Normal and log normal amplitude processes are studied. In each case the scintillations are low pass filtered. The receiver performance is given for a range of signal to noise ratios with and without the effects of scintillation. The performance is reviewed for critical receiver parameter variations.
Müller, Dirk K; Pampel, André; Möller, Harald E
2013-05-01
Quantification of magnetization-transfer (MT) experiments is typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimations by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results that were consistent with published data. Copyright © 2013 Elsevier Inc. All rights reserved.
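The matrix-algebra idea can be illustrated on the longitudinal two-pool Bloch-McConnell equations under continuous-wave saturation: write the system as dM/dt = A M on an augmented state and propagate it exactly with a matrix exponential. Pool sizes, rates, and the saturation rate W below are generic agar-like placeholders, and the full machinery of the paper (pulsed saturation, imaging pulses, polynomial interpolation) is not reproduced.

```python
import numpy as np
from scipy.linalg import expm

# Two-pool parameters (illustrative): free pool a, semisolid pool b
R1a, R1b = 1.0, 1.0          # longitudinal relaxation rates, 1/s
f = 0.1                      # relative bound-pool size M0b/M0a
kba = 50.0                   # exchange rate b -> a, 1/s
kab = f * kba                # a -> b, from detailed balance
W = 30.0                     # CW saturation rate of pool b, 1/s

# Augmented linear system: d/dt [Ma, Mb, 1] = A @ [Ma, Mb, 1]
A = np.array([
    [-(R1a + kab),  kba,               R1a    ],
    [  kab,        -(R1b + kba + W),   R1b * f],
    [  0.0,         0.0,               0.0    ],
])

M0 = np.array([1.0, f, 1.0])         # start from thermal equilibrium
M = expm(A * 5.0) @ M0               # propagate 5 s of saturation exactly
print("Ma, Mb after saturation:", M[0], M[1])
```

The same exact-propagation step, applied piecewise over each pulse and delay, is what replaces the steady-state shortcuts mentioned above.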
Trame, MN; Lesko, LJ
2015-01-01
A systems pharmacology model typically integrates pharmacokinetic, biochemical network, and systems biology concepts into a unifying approach. It typically consists of a large number of parameters and reaction species that are interlinked based upon the underlying (patho)physiology and the mechanism of drug action. The more complex these models are, the greater the challenge of reliably identifying and estimating respective model parameters. Global sensitivity analysis provides an innovative tool that can meet this challenge. CPT Pharmacometrics Syst. Pharmacol. (2015) 4, 69–79; doi:10.1002/psp4.6; published online 25 February 2015 PMID:27548289
Cometary splitting - a source for the Jupiter family?
NASA Astrophysics Data System (ADS)
Pittich, E. M.; Rickman, H.
1994-01-01
The quest for the origin of the Jupiter family of comets includes investigating the possibility that a large fraction of this population originates from past splitting events. In particular, one suggested scenario, albeit less attractive on physical grounds, maintains that a giant comet breakup is a major source of short-period comets. By simulating such events and integrating the motions of the fictitious fragments in an accurate solar system model for the typical lifetime of Jupiter family comets, it is possible to check whether the outcome may or may not be compatible with the observed orbital distribution. In this paper we present such integrations for a few typical progenitor orbits and analyze the ensuing thermalization process with particular attention to the Tisserand parameters. It is found that the sets of fragments lose their memory of a common origin very rapidly so that, in general terms, it is difficult to use the random appearance of the observed orbital distribution as evidence against the giant comet splitting hypothesis.
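The Tisserand parameter used to track thermalization of the fragments is a simple function of the orbital elements; a small helper makes the quantity concrete. The element values in the example are an arbitrary Jupiter-family-like orbit, not one of the paper's progenitors.

```python
import numpy as np

def tisserand(a, e, i_deg, a_jupiter=5.2044):
    """Tisserand parameter with respect to Jupiter:
    T = a_J/a + 2*cos(i)*sqrt((a/a_J)*(1 - e^2))."""
    i = np.radians(i_deg)
    return a_jupiter / a + 2.0 * np.cos(i) * np.sqrt(
        (a / a_jupiter) * (1.0 - e**2))

# Typical Jupiter-family orbit: T should fall between 2 and 3
print(tisserand(a=3.5, e=0.4, i_deg=10.0))   # ~2.97
```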
Grigos, Maria I.; Kolenda, Nicole
2010-01-01
Jaw movement patterns were examined longitudinally in a 3-year-old male with childhood apraxia of speech (CAS) and compared with a typically developing control group. The child with CAS was followed for 8 months, until he began accurately and consistently producing the bilabial phonemes /p/, /b/, and /m/. A movement tracking system was used to study jaw duration, displacement, velocity, and stability. A transcription analysis determined the percentage of phoneme errors and consistency. Results showed phoneme-specific changes which included increases in jaw velocity and stability over time, as well as decreases in duration. Kinematic parameters became more similar to patterns seen in the controls during final sessions where tokens were produced most accurately and consistently. Closing velocity and stability, however, were the only measures to fall within a 95% confidence interval established for the controls across all three target phonemes. These findings suggest that motor processes may differ between children with CAS and their typically developing peers. PMID:20030551
Blood-Derived RNA- and microRNA-Hydrolyzing IgG Antibodies in Schizophrenia Patients.
Ermakov, E A; Ivanova, S A; Buneva, V N; Nevinsky, G A
2018-05-01
Abzymes with various catalytic activities are the earliest statistically significant markers of existing and developing autoimmune diseases (AIDs). Currently, schizophrenia (SCZD) is not considered to be a typical AID. It was demonstrated recently that antibodies from SCZD patients efficiently hydrolyze DNA and myelin basic protein. Here, we showed for the first time that autoantibodies from 35 SCZD patients efficiently hydrolyze RNA (cCMP > poly(C) > poly(A) > yeast RNA) and analyzed site-specific hydrolysis of microRNAs involved in the regulation of several genes in SCZD (miR-137, miR-9-5p, miR-219-2-3p, and miR-219a-5p). All four microRNAs were cleaved by IgG preparations (n = 21) from SCZD patients in a site-specific manner. The RNase activity of the abzymes correlated with SCZD clinical parameters. The data obtained showed that SCZD patients might display signs of typical autoimmune processes associated with impaired functioning of microRNAs resulting from their hydrolysis by the abzymes.
Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng
2015-01-01
Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and can be formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic exponentially increases the amount of computation performed in each generation. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm. PMID:25603158
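For orientation, the sketch below shows the classical (non-quantum) PSO kernel that QPPSO builds on, applied to a toy two-parameter estimation cost; the objective is a synthetic stand-in, since the paper's objective integrates the fractional-order system for each candidate parameter set.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(theta):
    # Synthetic cost standing in for the simulation-vs-data error of the
    # fractional-order system; minimum at the "true" parameters (1.5, 0.3).
    return (theta[0] - 1.5) ** 2 + (theta[1] - 0.3) ** 2

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n, dim))          # particle positions
v = np.zeros((n, dim))                    # particle velocities
pbest = x.copy()                          # personal bests
pcost = np.apply_along_axis(objective, 1, x)
g = pbest[pcost.argmin()].copy()          # global best

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    cost = np.apply_along_axis(objective, 1, x)
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    g = pbest[pcost.argmin()].copy()

print(g)  # should approach (1.5, 0.3)
```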
Correlated receptor transport processes buffer single-cell heterogeneity
Kallenberger, Stefan M.; Unger, Anne L.; Legewie, Stefan; Lymperopoulos, Konstantinos; Eils, Roland
2017-01-01
Cells typically vary in their response to extracellular ligands. Receptor transport processes modulate ligand-receptor induced signal transduction and impact the variability in cellular responses. Here, we quantitatively characterized cellular variability in erythropoietin receptor (EpoR) trafficking at the single-cell level based on live-cell imaging and mathematical modeling. Using ensembles of single-cell mathematical models reduced parameter uncertainties and showed that rapid EpoR turnover, transport of internalized EpoR back to the plasma membrane, and degradation of Epo-EpoR complexes were essential for receptor trafficking. EpoR trafficking dynamics in adherent H838 lung cancer cells closely resembled the dynamics previously characterized by mathematical modeling in suspension cells, indicating that dynamic properties of the EpoR system are widely conserved. Receptor transport processes differed by one order of magnitude between individual cells. However, the concentration of activated Epo-EpoR complexes was less variable due to the correlated kinetics of opposing transport processes acting as a buffering system. PMID:28945754
The fairytale of the GSSG/GSH redox potential.
Flohé, Leopold
2013-05-01
The term GSSG/GSH redox potential is frequently used to explain redox regulation and other biological processes. The relevance of the GSSG/GSH redox potential as driving force of biological processes is critically discussed. It is recalled that the concentration ratio of GSSG and GSH reflects little else than a steady state, which overwhelmingly results from fast enzymatic processes utilizing, degrading or regenerating GSH. A biological GSSG/GSH redox potential, as calculated by the Nernst equation, is a deduced electrochemical parameter based on direct measurements of GSH and GSSG that are often complicated by poorly substantiated assumptions. It is considered irrelevant to the steering of any biological process. GSH-utilizing enzymes depend on the concentration of GSH, not on [GSH]², as is predicted by the Nernst equation, and are typically not affected by GSSG. Regulatory processes involving oxidants and GSH are considered to make use of mechanistic principles known for thiol peroxidases which catalyze the oxidation of hydroperoxides by GSH by means of an enzyme substitution mechanism involving only bimolecular reaction steps. The negligibly small rate constants of related spontaneous reactions as compared with enzyme-catalyzed ones underscore the superiority of kinetic parameters over electrochemical or thermodynamic ones for an in-depth understanding of GSH-dependent biological phenomena. At best, the GSSG/GSH potential might be useful as an analytical tool to disclose disturbances in redox metabolism. This article is part of a Special Issue entitled Cellular Functions of Glutathione. Copyright © 2012 Elsevier B.V. All rights reserved.
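For reference, the potential under discussion is E = E0' - (RT/2F) ln([GSH]²/[GSSG]); the sketch below evaluates it, assuming the commonly cited E0' of about -240 mV for the GSSG/2GSH couple at pH 7, and makes the squared GSH dependence criticized in the text explicit.

```python
import numpy as np

R, T, F = 8.314, 310.0, 96485.0   # J/(mol K), K (37 degrees C), C/mol
E0 = -0.240                        # V; literature value for GSSG/2GSH at pH 7

def gssg_gsh_potential(gsh_molar, gssg_molar):
    # Nernst equation for GSSG + 2H+ + 2e- -> 2GSH; note the [GSH]**2 term
    # whose biological relevance the article disputes.
    return E0 - (R * T / (2 * F)) * np.log(gsh_molar**2 / gssg_molar)

print(gssg_gsh_potential(5e-3, 50e-6))  # e.g. 5 mM GSH, 50 uM GSSG
```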
Shao, Dongyan; Atungulu, Griffiths G; Pan, Zhongli; Yue, Tianli; Zhang, Ang; Li, Xuan
2012-08-01
The value of tomato seed has not been fully recognized. The objectives of this research were to establish suitable processing conditions for extracting oil from tomato seed by using solvent, determine the impact of processing conditions on yield and antioxidant activity of extracted oil, and elucidate kinetics of the oil extraction process. Four processing parameters, including time, temperature, solvent-to-solid ratio and particle size, were studied. A second-order model was established to describe the oil extraction process. Based on the results, increasing temperature, solvent-to-solid ratio, and extraction time increased oil yield. In contrast, larger particle size reduced the oil yield. The recommended oil extraction conditions were 8 min of extraction time at a temperature of 25 °C, a solvent-to-solids ratio of 5/1 (v/w) and a particle size of 0.38 mm, which gave an oil yield of 20.32% with a recovery rate of 78.56%. The DPPH scavenging activity of extracted oil was not significantly affected by the extraction parameters. The inhibitory concentration (IC50) of tomato seed oil was 8.67 mg/mL, which was notably low compared to most vegetable oils. The second-order model successfully described the kinetics of the tomato oil extraction process, and the kinetic parameters, including the initial extraction rate (h), the equilibrium concentration of oil (C_s), and the extraction rate constant (k), could be precisely predicted with R² of at least 0.957. The study revealed that tomato seed, which is typically treated as a low-value byproduct of tomato processing, has great potential for producing oil with high antioxidant capability. The impact of processing conditions including time, temperature, solvent-to-solid ratio and particle size on yield and antioxidant activity of extracted tomato seed oil is reported. Optimal conditions and models which describe the extraction process are recommended. The information is vital for determining the extraction processing conditions for industrial production of high-quality tomato seed oil. Journal of Food Science © 2012 Institute of Food Technologists® No claim to original US government works.
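The second-order kinetic model referred to has the closed form C(t) = C_s² k t / (1 + C_s k t), with initial extraction rate h = k C_s²; a sketch with illustrative (not fitted) parameter values:

```python
import numpy as np

def second_order_extraction(t, k, Cs):
    """Oil concentration C(t) for dC/dt = k*(Cs - C)**2 with C(0) = 0.

    Closed form: C = Cs**2 * k * t / (1 + Cs * k * t);
    the initial extraction rate is h = k * Cs**2.
    """
    return Cs**2 * k * t / (1.0 + Cs * k * t)

# Illustrative (not fitted) parameters:
k, Cs = 0.8, 0.20            # rate constant; equilibrium oil concentration
t = np.linspace(0.0, 8.0, 50)  # extraction time [min]
C = second_order_extraction(t, k, Cs)
print("initial rate h =", k * Cs**2, "; C(8 min) =", C[-1])
```

In practice k and C_s would be obtained by fitting this form to measured yield curves, e.g. with scipy.optimize.curve_fit.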
Laboratory R-value vs. in-situ NDT methods.
DOT National Transportation Integrated Search
2006-05-01
The New Mexico Department of Transportation (NMDOT) uses the Resistance R-Value as a quantifying parameter in subgrade and base course design. The parameter represents soil strength and stiffness and ranges from 1 to 80, 80 being typical of the highe...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewsuk, K.G.; Cochran, R.J.; Blackwell, B.F.
The properties and performance of a ceramic component are determined by a combination of the materials from which it was fabricated and how it was processed. Most ceramic components are manufactured by dry pressing a powder/binder system in which the organic binder provides formability and green compact strength. A key step in this manufacturing process is the removal of the binder from the powder compact after pressing. The organic binder is typically removed by a thermal decomposition process in which heating rate, temperature, and time are the key process parameters. Empirical approaches are generally used to design the burnout time-temperature cycle, often resulting in excessive processing times and energy usage, and higher overall manufacturing costs. Ideally, binder burnout should be completed as quickly as possible without damaging the compact, while using a minimum of energy. Process and computational modeling offer one means to achieve this end. The objective of this study is to develop an experimentally validated computer model that can be used to better understand, control, and optimize binder burnout from green ceramic compacts.
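The study's validated model couples heat transfer and decomposition chemistry; as a much simpler sketch of the kinetic core, a first-order Arrhenius burnout under a constant heating ramp can be integrated directly (all kinetic constants below are placeholders, not values from the work):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative first-order Arrhenius burnout of binder fraction alpha
# (0 = intact, 1 = removed) under a constant heating rate.
A_pre, Ea, Rgas = 1e12, 150e3, 8.314   # 1/s, J/mol, J/(mol K); placeholders
beta = 2.0 / 60.0                      # heating rate: 2 K/min, in K/s
T0 = 300.0                             # start temperature [K]

def rhs(t, y):
    T = T0 + beta * t                  # linear time-temperature cycle
    k = A_pre * np.exp(-Ea / (Rgas * T))
    return k * (1.0 - y[0])

sol = solve_ivp(rhs, [0.0, 6 * 3600.0], [0.0], max_step=30.0)
print("binder fraction removed at end of cycle:", sol.y[0, -1])
```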
Spectroscopic analysis technique for arc-welding process control
NASA Astrophysics Data System (ADS)
Mirapeix, Jesús; Cobo, Adolfo; Conde, Olga; Quintela, María Ángeles; López-Higuera, José-Miguel
2005-09-01
The spectroscopic analysis of the light emitted by thermal plasmas has found many applications, from chemical analysis to monitoring and control of industrial processes. Particularly, it has been demonstrated that the analysis of the thermal plasma generated during arc or laser welding can supply information about the process and, thus, about the quality of the weld. In some critical applications (e.g. the aerospace sector), an early, real-time detection of defects in the weld seam (oxidation, porosity, lack of penetration, ...) is highly desirable as it can reduce expensive non-destructive testing (NDT). Among other techniques, full spectroscopic analysis of the plasma emission is known to offer rich information about the process itself, but it is also very demanding in terms of real-time implementations. In this paper, we propose a technique for the analysis of the plasma emission spectrum that is able to detect, in real time, changes in the process parameters that could lead to the formation of defects in the weld seam. It is based on the estimation of the electronic temperature of the plasma through the analysis of the emission peaks from multiple atomic species. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, we employ the LPO (Linear Phase Operator) sub-pixel algorithm to accurately estimate the central wavelength of the peaks (allowing an automatic identification of each atomic species) and cubic-spline interpolation of the noisy data to obtain the intensity and width of the peaks. Experimental tests on TIG welding using fiber-optic capture of light and a low-cost CCD-based spectrometer show that some typical defects can be easily detected and identified with this technique, whose typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
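The electronic-temperature estimate rests on Boltzmann statistics of the emitting upper levels; the two-line ratio below is the simplest version of the multi-peak relation used in the paper (LTE and optically thin lines assumed; the atomic constants are illustrative placeholders, not vetted NIST data):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant [eV/K]

def two_line_temperature(I1, I2, line1, line2):
    """Boltzmann two-line estimate of the plasma electronic temperature.

    Each line is (wavelength_nm, A_ul, g_u, E_u_eV) for the upper level of
    the transition. From I ~ (A*g/lambda) * exp(-E_u / (k*T)):
        T = -(E1 - E2) / (k * ln[(I1/I2) * (A2*g2*lam1)/(A1*g1*lam2)]).
    """
    lam1, A1, g1, E1 = line1
    lam2, A2, g2, E2 = line2
    ratio = (I1 / I2) * (A2 * g2 * lam1) / (A1 * g1 * lam2)
    return -(E1 - E2) / (K_B * np.log(ratio))

# Illustrative line pair (placeholder atomic data); expect ~11600 K (kT ~ 1 eV):
print(two_line_temperature(0.17, 1.00,
                           (400.0, 1e7, 3, 15.0),
                           (500.0, 1e7, 3, 13.0)))
```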
NASA Astrophysics Data System (ADS)
Granieri, D.; Avino, R.; Chiodini, G.
2010-01-01
Carbon dioxide flux from the soil is regularly monitored in selected areas of Vesuvio and Solfatara (Campi Flegrei, Pozzuoli) with the twofold aim of i) monitoring spatial and temporal variations of the degassing process and ii) investigating whether the surface phenomena could provide information about the processes occurring at depth. At present, the surveyed areas include 15 fixed points around the rim of Vesuvio and 71 fixed points in the floor of Solfatara crater. Soil CO2 flux has been measured since 1998, at least once a month, in both areas. In addition, two automatic permanent stations, located at Vesuvio and Solfatara, measure the CO2 flux and some environmental parameters that can potentially influence the CO2 diffuse degassing. Series acquired by the continuous stations are characterized by an annual periodicity that is related to the typical periodicities of some meteorological parameters. Conversely, the series of CO2 flux data arising from periodic measurements over the arrays of Vesuvio and Solfatara are less dependent on external factors such as meteorological parameters, local soil properties (porosity, hydraulic conductivity) and topographic effects (high or low ground). Therefore we argue that the long-term trend of this signal contains the “best” possible representation of the endogenous signal related to the upflow of deep hydrothermal fluids.
NASA Astrophysics Data System (ADS)
Borie, B.; Kehlberger, A.; Wahrhusen, J.; Grimm, H.; Kläui, M.
2017-08-01
We study the key domain-wall properties in segmented nanowire loop-based structures used in domain-wall-based sensors. The two reasons for device failure, namely, the distribution of the domain-wall propagation field (depinning) and the nucleation field, are determined with magneto-optical Kerr effect and giant-magnetoresistance (GMR) measurements for thousands of elements to obtain significant statistics. Single layers of Ni81Fe19, a complete GMR stack with Co90Fe10/Ni81Fe19 as a free layer, and a single layer of Co90Fe10 are deposited and industrially patterned to determine the influence of the shape anisotropy, the magnetocrystalline anisotropy, and the fabrication processes. We show that the propagation field is influenced only slightly by the geometry but significantly by material parameters. Simulations for a realistic wire shape yield a curling-mode type of magnetization configuration close to the nucleation field. Nonetheless, we find that the domain-wall nucleation fields can be described by a typical Stoner-Wohlfarth model related to the measured geometrical parameters of the wires and fitted by considering the process parameters. The GMR effect is subsequently measured in a substantial number of devices (3000) in order to accurately gauge the variation between devices. This measurement scheme reveals a corrected upper limit to the nucleation fields of the sensors that can be exploited for fast characterization of the working elements.
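The Stoner-Wohlfarth description used for the nucleation fields has a closed-form switching field (the astroid); a sketch, with the field angle measured from the easy axis:

```python
import numpy as np

def sw_switching_field(Hk, psi_deg):
    """Astroid switching field of a Stoner-Wohlfarth particle.

    Hk: anisotropy field; psi_deg: angle between applied field and easy axis.
    H_sw = Hk / (cos(psi)**(2/3) + sin(psi)**(2/3))**1.5, giving Hk along
    the easy/hard axes and a minimum of Hk/2 at 45 degrees.
    """
    p = np.radians(psi_deg)
    return Hk / (np.cos(p) ** (2 / 3) + np.sin(p) ** (2 / 3)) ** 1.5

print(sw_switching_field(1.0, 45.0))  # -> 0.5 (in units of Hk)
```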
Investigation of Advanced Processed Single-Crystal Turbine Blade Alloys
NASA Technical Reports Server (NTRS)
Peters, B. J.; Biondo, C. M.; DeLuca, D. P.
1995-01-01
This investigation studied the influence of thermal processing and microstructure on the mechanical properties of the single-crystal, nickel-based superalloys PWA 1482 and PWA 1484. The objective of the program was to develop an improved single-crystal turbine blade alloy that is specifically tailored for use in hydrogen fueled rocket engine turbopumps. High-gradient casting, hot isostatic pressing (HIP), and alternate heat treatment (HT) processing parameters were developed to produce pore-free, eutectic-free microstructures with different (gamma)' precipitate morphologies. Test materials were cast in high thermal gradient solidification (greater than 30 C/cm (137 F/in.)) casting furnaces for reduced dendrite arm spacing, improved chemical homogeneity, and reduced interdendritic pore size. The HIP processing was conducted in 40 cm (15.7 in.) diameter production furnaces using a set of parameters selected from a trial matrix study. Metallography was conducted on test samples taken from each respective trial run to characterize the as-HIP microstructure. Post-HIP alternate HT processes were developed for each of the two alloys. The goal of the alternate HT processing was to fully solution the eutectic gamma/(gamma)' phase islands and to develop a series of modified (gamma)' morphologies for subsequent characterization testing. This was accomplished by slow cooling through the (gamma)' solvus at controlled rates to precipitate volume fractions of large (gamma)'. Post-solution alternate HT parameters were established for each alloy providing additional volume fractions of finer precipitates. Screening tests included tensile, high-cycle fatigue (HCF), smooth and notched low-cycle fatigue (LCF), creep, and fatigue crack growth evaluations performed in air and high pressure (34.5 MPa (5 ksi)) hydrogen at room and elevated temperature. Under the most severe embrittling conditions (HCF and smooth and notched LCF in 34.5 MPa (5 ksi) hydrogen at 20 C (68 F)), screening test results showed increases in fatigue life typically on the order of 10X when compared to the current Space Shuttle Main Engine (SSME) Alternate Turbopump (AT) blade alloy (PWA 1480).
Results of scatterometer systems analysis for NASA/MSC Earth Observation Sensor Evaluation Program.
NASA Technical Reports Server (NTRS)
Krishen, K.; Vlahos, N.; Brandt, O.; Graybeal, G.
1971-01-01
Radar scatterometers have applications in the NASA/MSC Earth Observation Aircraft Program. Over a period of several years, several missions have been flown over both land and ocean. In this paper a system evaluation of the NASA/MSC 13.3-GHz Scatterometer System is presented. The effects of phase error between the Scatterometer channels, antenna pattern deviations, aircraft attitude deviations, environmental changes, and other related factors such as processing errors, system repeatability, and propeller modulation, were established. Furthermore, the reduction in system errors and calibration improvement was investigated by taking into account these parameter deviations. Typical scatterometer data samples are presented.
Adaptive hybrid optimal quantum control for imprecisely characterized systems.
Egger, D J; Wilhelm, F K
2014-06-20
Optimal quantum control theory holds great promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.
Förster, Arno; Stock, Jürgen; Montanari, Simone; Lepsa, Mihail Ion; Lüth, Hans
2006-01-01
GaAs-based Gunn diodes with graded AlGaAs hot-electron injector heterostructures have been developed to meet the special needs of automotive applications. The fabrication of the Gunn diode chips was based on total substrate removal and the processing of integrated Au heat sinks. In particular, the thermal and RF behavior of the diodes has been analyzed by DC, impedance, and S-parameter measurements. The electrical investigations have revealed the functionality of the hot-electron injector. An optimized layer structure could fulfill the requirements of adaptive cruise control (ACC) systems at 77 GHz, with typical output power between 50 and 90 mW.
An improved car-following model accounting for the preceding car's taillight
NASA Astrophysics Data System (ADS)
Zhang, Jian; Tang, Tie-Qiao; Yu, Shao-Wei
2018-02-01
During deceleration, the preceding car's taillight may influence the following car's driving behavior. In this paper, we propose an extended car-following model that accounts for the preceding car's taillight. Two typical situations are used to simulate each car's movement and study the effects of the preceding car's taillight on driving behavior. Meanwhile, a sensitivity analysis of the model parameters is discussed in detail. The numerical results show that the proposed model can improve the stability of traffic flow, and that traffic safety can be enhanced without a loss of efficiency, especially when cars pass through a signalized intersection.
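As a rough sketch of the idea (not the authors' exact formulation), a full-velocity-difference-type model can be given an extra braking-response term that activates when the leader's taillight is on; all constants below are illustrative:

```python
import numpy as np

def ov(dx):
    # Optimal-velocity function (Bando-type form; constants are illustrative)
    return 16.8 * (np.tanh(0.086 * (dx - 25.0)) + 0.913)

def accel(dx, v, dv, brake_light, kappa=0.41, lam=0.5, gamma=0.3):
    """Full-velocity-difference-style acceleration with a taillight term:
    when the leader's brake light is on, the follower reacts more strongly
    to closing speed. dv = v_lead - v_follow (< 0 when closing)."""
    a = kappa * (ov(dx) - v) + lam * dv
    if brake_light:
        a += gamma * min(dv, 0.0)   # extra deceleration while closing
    return a

# Leader brakes from 15 m/s to 5 m/s; follower starts 30 m behind at 15 m/s.
dt, x_l, v_l, x_f, v_f = 0.1, 30.0, 15.0, 0.0, 15.0
for _ in range(300):
    a_l = -2.0 if v_l > 5.0 else 0.0             # leader's braking phase
    v_l = max(v_l + a_l * dt, 0.0); x_l += v_l * dt
    a_f = accel(x_l - x_f, v_f, v_l - v_f, brake_light=a_l < 0.0)
    v_f = max(v_f + a_f * dt, 0.0); x_f += v_f * dt
print("final gap:", x_l - x_f)
```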
A Survey on Wireless Body Area Networks for eHealthcare Systems in Residential Environments
Ghamari, Mohammad; Janko, Balazs; Sherratt, R. Simon; Harwin, William; Piechockic, Robert; Soltanpur, Cinna
2016-01-01
Current progress in wearable and implanted health monitoring technologies has strong potential to alter the future of healthcare services by enabling ubiquitous monitoring of patients. A typical health monitoring system consists of a network of wearable or implanted sensors that constantly monitor physiological parameters. Collected data are relayed using existing wireless communication protocols to a base station for additional processing. This article provides researchers with information to compare the existing low-power communication technologies that can potentially support the rapid development and deployment of WBAN systems, and mainly focuses on remote monitoring of elderly or chronically ill patients in residential environments. PMID:27338377
A microspectrometer based on subwavelength metal nanohole array
NASA Astrophysics Data System (ADS)
Cui, Jun; Xia, Liangping; Yang, Zheng; Yin, Lu; Zheng, Guoxing; Yin, Shaoyun; Du, Chunlei
2014-11-01
Catering to the demand for miniaturized spectrometers, a simple microspectrometer with small size and light weight is presented in this paper. The presented microspectrometer is a typical filter-based spectrometer using the extraordinary optical transmission property of subwavelength metal hole arrays. Different subwavelength metal nanohole arrays are designed to work as different filter units, obtained by changing the lattice parameters. By processing the filter spectra with a unique algorithm based on sparse representation, the proposed spectrometer is demonstrated to have the capability of high spectral resolution and accuracy. Benefiting from its thin-film structure, the microspectrometer is expected to find application in integrated optical systems.
Elliptical orbit performance computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1981-01-01
A FORTRAN coded computer program which generates and plots elliptical orbit performance capability of space boosters for presentation purposes is described. Orbital performance capability of space boosters is typically presented as payload weight as a function of perigee and apogee altitudes. The parameters are derived from a parametric computer simulation of the booster flight which yields the payload weight as a function of velocity and altitude at insertion. The process of converting from velocity and altitude to apogee and perigee altitude and plotting the results as a function of payload weight is mechanized with the ELOPE program. The program theory, user instructions, input/output definitions, subroutine descriptions and detailed FORTRAN coding information are included.
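The velocity/altitude-to-apogee/perigee conversion mechanized by ELOPE follows from the vis-viva and angular-momentum relations; a sketch for the simplest case of a horizontal (zero flight-path-angle) injection:

```python
import numpy as np

MU = 398600.4418  # Earth gravitational parameter [km^3/s^2]
RE = 6378.137     # Earth equatorial radius [km]

def apogee_perigee(alt_km, v_kms):
    """Apogee/perigee altitudes [km] from speed and altitude at insertion.

    Assumes a horizontal (zero flight-path-angle) injection, the simplest
    case of the velocity/altitude-to-orbit conversion described above.
    """
    r = RE + alt_km
    a = 1.0 / (2.0 / r - v_kms**2 / MU)   # vis-viva: semimajor axis
    p = (r * v_kms) ** 2 / MU             # semi-latus rectum from h = r*v
    e = np.sqrt(max(1.0 - p / a, 0.0))    # eccentricity
    return a * (1 + e) - RE, a * (1 - e) - RE

print(apogee_perigee(200.0, 7.9))  # (apogee, perigee) altitudes in km
```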
NASA Astrophysics Data System (ADS)
Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.
2016-06-01
Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue.
Opel, Cary F; Li, Jincai; Amanullah, Ashraf
2010-01-01
Dielectric spectroscopy was used to analyze typical batch and fed-batch CHO cell culture processes. Three methods of analysis (linear modeling, Cole-Cole modeling, and partial least squares regression) were used to correlate the spectroscopic data with routine biomass measurements [viable packed cell volume, viable cell concentration (VCC), cell size, and oxygen uptake rate (OUR)]. All three models predicted offline biomass measurements accurately during the growth phase of the cultures. However, during the stationary and decline phases of the cultures, the models decreased in accuracy to varying degrees. Offline cell radius measurements were unsuccessfully used to correct for the deviations from the linear model, indicating that physiological changes affecting permittivity were occurring. The β-dispersion was analyzed using the Cole-Cole distribution parameters Δε (magnitude of the permittivity drop), f_c (critical frequency), and α (Cole-Cole parameter). Furthermore, the dielectric parameters static internal conductivity (σ_i) and membrane capacitance per area (C_m) were calculated for the cultures. Finally, the relationship between permittivity, OUR, and VCC was examined, demonstrating how the definition of viability is critical when analyzing biomass online. The results indicate that the common assumptions of constant size and dielectric properties used in dielectric analysis are not always valid during later phases of cell culture processes. The findings also demonstrate that dielectric spectroscopy, while not a substitute for VCC, is a complementary measurement of viable biomass, providing useful auxiliary information about the physiological state of a culture. (c) 2010 American Institute of Chemical Engineers
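For reference, the Cole-Cole description of the β-dispersion used above is ε*(f) = ε_∞ + Δε / (1 + (jf/f_c)^(1-α)); a sketch with illustrative parameter values:

```python
import numpy as np

def cole_cole_permittivity(f, d_eps, fc, alpha, eps_inf=0.0):
    """Complex permittivity of the beta-dispersion (Cole-Cole model).

    d_eps: magnitude of the permittivity drop; fc: critical frequency [Hz];
    alpha: Cole-Cole broadening parameter (alpha = 0 recovers a pure Debye
    relaxation). Values below are illustrative, not fitted culture data.
    """
    return eps_inf + d_eps / (1.0 + (1j * f / fc) ** (1.0 - alpha))

f = np.logspace(4, 8, 9)                               # 10 kHz .. 100 MHz
eps = cole_cole_permittivity(f, d_eps=3000.0, fc=1.0e6, alpha=0.1)
print(eps.real)                                        # permittivity vs frequency
```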
Kurilić, Sanja Mrazovac; Ulniković, Vladanka Presburger; Marić, Nenad; Vasiljević, Milenko
2015-11-01
This paper provides insight into the quality of groundwater used for public water supply in the territory of Temerin municipality (Vojvodina, Serbia). The following parameters were measured: color, turbidity, pH, KMnO4 consumption, total dissolved solids (TDS), EC, NH4+, Cl-, NO2-, NO3-, Fe, Mn, As, Ca2+, Mg2+, SO4^2-, HCO3-, K+, and Na+. The correlations and ratios among parameters that define the chemical composition were determined with the aim of identifying the main processes that control the formation of the chemical composition of the analyzed waters. Groundwater from the three analyzed sources is of the Na-HCO3 type. Elevated organic matter, ammonium ion, and arsenic contents are characteristic of these waters. The importance of organic matter decay is suggested by the positive correlation between organic matter content and TDS and HCO3- content. There is no evidence that groundwater chemistry is determined by the depth of the captured aquifer interval. The main natural processes that control the chemistry of all analyzed waters are cation exchange and feldspar weathering. The dominant cause of As concentration in groundwater is the use of mineral fertilizers and of KMnO4 in the urban area. The concentration of As and KMnO4 in the observed sources is inversely proportional to the distance from agricultural land and the urban area. A 2D model of the distribution of As and KMnO4 was developed, and it is applicable to detecting sources of pollution. By using this model, we can quantify the impact of certain pollutants on the unfavorable content of some parameters in groundwater.
Comparison of different filter methods for data assimilation in the unsaturated zone
NASA Astrophysics Data System (ADS)
Lange, Natascha; Berkhahn, Simon; Erdal, Daniel; Neuweiler, Insa
2016-04-01
The unsaturated zone is an important compartment, which plays a role for the division of terrestrial water fluxes into surface runoff, groundwater recharge and evapotranspiration. For data assimilation in coupled systems it is therefore important to have a good representation of the unsaturated zone in the model. Flow processes in the unsaturated zone have all the typical features of flow in porous media: Processes can have long memory and as observations are scarce, hydraulic model parameters cannot be determined easily. However, they are important for the quality of model predictions. On top of that, the established flow models are highly non-linear. For these reasons, the use of the popular Ensemble Kalman filter as a data assimilation method to estimate state and parameters in unsaturated zone models could be questioned. With respect to the long process memory in the subsurface, it has been suggested that iterative filters and smoothers may be more suitable for parameter estimation in unsaturated media. We test the performance of different iterative filters and smoothers for data assimilation with a focus on parameter updates in the unsaturated zone. In particular we compare the Iterative Ensemble Kalman Filter and Smoother as introduced by Bocquet and Sakov (2013) as well as the Confirming Ensemble Kalman Filter and the modified Restart Ensemble Kalman Filter proposed by Song et al. (2014) to the original Ensemble Kalman Filter (Evensen, 2009). This is done with simple test cases generated numerically. We consider also test examples with layering structure, as a layering structure is often found in natural soils. We assume that observations are water content, obtained from TDR probes or other observation methods sampling relatively small volumes. Particularly in larger data assimilation frameworks, a reasonable balance between computational effort and quality of results has to be found. Therefore, we compare computational costs of the different methods as well as the quality of open loop model predictions and the estimated parameters. Bocquet, M. and P. Sakov, 2013: Joint state and parameter estimation with an iterative ensemble Kalman smoother, Nonlinear Processes in Geophysics 20(5): 803-818. Evensen, G., 2009: Data assimilation: The ensemble Kalman filter. Springer Science & Business Media. Song, X.H., L.S. Shi, M. Ye, J.Z. Yang and I.M. Navon, 2014: Numerical comparison of iterative ensemble Kalman filters for unsaturated flow inverse modeling. Vadose Zone Journal 13(2), 10.2136/vzj2013.05.0083.
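For orientation, the analysis step shared by all the ensemble methods compared above is the stochastic EnKF update (Evensen, 2009); a minimal sketch on an augmented state (water contents stacked with the hydraulic parameters being estimated), with toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(X, y, H, R):
    """One stochastic-EnKF analysis step.

    X: ensemble matrix (n_state x n_ens) of the augmented state;
    y: observation vector; H: linear observation operator; R: obs covariance.
    """
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)       # state anomalies
    S = H @ A                                   # observation-space anomalies
    C = S @ S.T / (n - 1) + R                   # innovation covariance
    K = (A @ S.T / (n - 1)) @ np.linalg.inv(C)  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n).T
    return X + K @ (Y - H @ X)                  # perturbed-observation update

# Toy example: 3 water contents + 1 parameter, 50 members, 2 TDR observations.
X = rng.normal(0.3, 0.05, (4, 50))
H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])  # observe the first two states
Xa = enkf_update(X, y=np.array([0.35, 0.28]), H=H, R=0.001 * np.eye(2))
print(Xa.mean(axis=1))   # analysis mean, last entry is the updated parameter
```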
Micro-mechanics of hydro-mechanical coupled processes during hydraulic fracturing in sandstone
NASA Astrophysics Data System (ADS)
Caulk, R.; Tomac, I.
2017-12-01
This contribution presents a micro-mechanical study of hydraulic fracture initiation and propagation in sandstone. The Discrete Element Method (DEM) Yade software is used as a tool to model the fully coupled hydro-mechanical behavior of saturated sandstone under pressures typical of deep geo-reservoirs. Heterogeneity of the sandstone tensile and shear strength parameters is introduced using a statistical representation of cathodoluminescence (CL) images of the rock. A Weibull distribution of parameter values was found to best match the CL scans of sandstone grains and the cement between grains. Results of hydraulic fracturing stimulation from the well bore indicate a significant difference between models with bond strengths informed by the CL scans and a uniform homogeneous representation of the sandstone parameters. Micro-mechanical insight reveals that the hydraulic fracture formed is typical of mode I, or tensile, cracking in both cases. However, shear micro-cracks are abundant in the CL-informed model while they are absent in the standard model with uniform strength distribution. Most of the mode II cracks, or shear micro-cracks, are not part of the main hydraulic fracture and occur in the near-tip and near-fracture areas. The position and occurrence of the shear micro-cracks are characterized as a secondary effect that dissipates the hydraulic fracturing energy. Additionally, the shear micro-crack locations qualitatively resemble the acoustic emission clouds of shear cracks frequently observed in hydraulic fracturing, sometimes interpreted as re-activation of existing fractures; notably, our model contains no pre-existing cracks and is continuous prior to fracturing. This novel observation is quantified in the paper. The shear particle contact force field reveals significant relaxation compared to the model with uniform strength distribution.
A simple strategy for varying the restart parameter in GMRES(m)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, A H; Jessup, E R; Kolev, T V
2007-10-02
When solving a system of linear equations with the restarted GMRES method, a fixed restart parameter is typically chosen. We present numerical experiments that demonstrate the beneficial effects of changing the value of the restart parameter in each restart cycle on the total time to solution. We propose a simple strategy for varying the restart parameter and provide some heuristic explanations for its effectiveness based on analysis of the symmetric case.
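A minimal sketch of the idea with SciPy's GMRES, driving each outer cycle explicitly so the restart parameter m can change between cycles (the test matrix and the alternating schedule are illustrative, not from the paper):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Toy nonsymmetric tridiagonal system.
n = 500
A = diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

x = np.zeros(n)
for m in [30, 10, 30, 10, 30]:                        # varied restart parameter
    x, info = gmres(A, b, x0=x, restart=m, maxiter=1)  # one outer GMRES(m) cycle
    if np.linalg.norm(b - A @ x) < 1e-8:
        break
print("final residual norm:", np.linalg.norm(b - A @ x))
```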
Collection and analysis of specific ELINT Signal Parameters
NASA Astrophysics Data System (ADS)
Wilson, Lonnie A.
1985-12-01
This report was a follow-up to Collection and Analysis of Specific ELINT Signal Parameters, DTIC A166507, 23 June 1985. The programs and hardware assembled for the above-mentioned report were used to analyze two types of radar, the PPS-6 and the HOOD radars. The typical ELINT parameters of frequency, pulse width, and pulse repetition rate were collected and analyzed.
Adam Wolf; Kanat Akshalov; Nicanor Saliendra; Douglas A. Johnson; Emilio A. Laca
2006-01-01
Canopy fluxes of CO2 and energy can be modeled with high fidelity using a small number of environmental variables and ecosystem parameters. Although these ecosystem parameters are critically important for modeling canopy fluxes, they typically are not measured with the same intensity as ecosystem fluxes. We developed an algorithm to estimate leaf...
Properties of M components from currents measured at triggered lightning channel base
NASA Astrophysics Data System (ADS)
Thottappillil, Rajeev; Goldberg, Jon D.; Rakov, Vladimir A.; Uman, Martin A.; Fisher, Richard J.; Schnetzer, George H.
1995-12-01
Channel base currents from triggered lightning were measured at the NASA Kennedy Space Center, Florida, during summer 1990 and at Fort McClellan, Alabama, during summer 1991. An analysis of the return stroke data and overall continuing current data has been published by Fisher et al. [1993]. Here an analysis is given of the impulsive processes, called M components, that occur during the continuing current following return strokes. The 14 flashes analyzed contain 37 leader-return stroke sequences and 158 M components, both processes lowering negative charge from cloud to ground. Statistics are presented for the following M current pulse parameters: magnitude, rise time, duration, half-peak width, preceding continuing current level, M interval, elapsed time since the return stroke, and charge transferred by the M current pulse. A typical M component in triggered lightning is characterized by a more or less symmetrical current pulse having an amplitude of 100-200 A (2 orders of magnitude lower than that for a typical return stroke [Fisher et al., 1993]), a 10-90% rise time of 300-500 μs (3 orders of magnitude larger than that for a typical return stroke [Fisher et al., 1993]), and a charge transfer to ground of the order of 0.1 to 0.2 C (1 order of magnitude smaller than that for a typical subsequent return stroke pulse [Berger et al., 1975]). About one third of M components transferred charge greater than the minimum charge reported by Berger et al. [1975] for subsequent leader-return stroke sequences. No correlation was found between either the M charge or the magnitude of the M component current (the two are moderately correlated) and any other parameter considered. M current pulses occurring soon after the return stroke tend to have shorter rise times, shorter durations, and shorter M intervals than those which occur later. M current pulses were observed to be superimposed on continuing currents greater than 30 A or so, with one exception out of 140 cases, wherein the continuing current level was measured to be about 20 A. The first M component virtually always (one exception out of 34 cases) occurred within 4 ms of the return stroke. This relatively short separation time between return stroke and the first M component, coupled with the observation of Fisher et al. [1993] that continuing currents lasting longer than 10 ms never occur without M current pulses, implies that the M component is a necessary feature of the continuing current mode of charge transfer to ground.
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005
Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences.
Rivolo, Simone; Asrress, Kaleab N; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø; Grøndal, Anne K; Hønge, Jesper L; Kim, Won Y; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P; Lee, Jack
2014-09-01
Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. The cWIA clinical application has, however, been limited by technical challenges including a lack of standardization across different studies and the derived indices' sensitivity to the processing parameters. Specifically, a critical step in WIA is the noise removal for evaluation of derivatives of the acquired signals, typically performed by applying a Savitzky-Golay filter, to reduce the high frequency acquisition noise. The impact of the filter parameter selection on cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed either by using the filter as a differentiator to calculate the signals' time derivative or by applying the filter to smooth the ensemble-averaged waveforms. Furthermore, the power-spectrum of the ensemble-averaged waveforms contains little high-frequency components, which motivated us to propose an alternative approach to compute the time derivatives of the acquired waveforms using a central finite difference scheme. The cWIA output and consequently the derived clinical metrics are significantly affected by the filter parameters, irrespective of its use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances the cWIA robustness by significantly reducing the outcome variability (by 60%).
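The contrast at the heart of the paper can be reproduced in a few lines: a Savitzky-Golay differentiator whose output depends on window length and polynomial order, against a parameter-free high-order central finite difference (here a fourth-order stencil, on a synthetic waveform standing in for an ensemble-averaged trace):

```python
import numpy as np
from scipy.signal import savgol_filter

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
p = 80 + 20 * np.sin(2 * np.pi * t)          # smooth synthetic waveform

# Savitzky-Golay as a differentiator: result depends on window and order.
dp_sg = savgol_filter(p, window_length=31, polyorder=3, deriv=1, delta=dt)

# Parameter-free 4th-order central finite difference (interior points):
# f'(x) ~ (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)
kernel = np.array([-1.0, 8.0, 0.0, -8.0, 1.0]) / (12.0 * dt)
dp_cd = np.convolve(p, kernel, mode="same")  # edges are inaccurate

truth = 40 * np.pi * np.cos(2 * np.pi * t)
print(np.abs(dp_sg - truth)[50:-50].max(),   # compare away from the edges
      np.abs(dp_cd - truth)[50:-50].max())
```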
Lehmann, Sara; Gajek, Grzegorz; Chmiel, Stanisław; Polkowska, Żaneta
2016-12-01
The chemistry of glaciers is strongly determined by long-distance transport of chemical substances and their wet and dry deposition on the glacier surface. This paper concerns the spatial distribution of metals, ions, and dissolved organic carbon, as well as the differentiation of physicochemical parameters (pH, electrical conductivity) determined in ice surface samples collected from four Arctic glaciers during the summer season in 2012. The studied glaciers represent three different morphological types: ground-based (Blomlibreen and Scottbreen), tidewater which evolved to ground-based (Renardbreen), and typical tidewater (Recherchebreen). All of the glaciers function as glacial systems and hence are subject to the same physical processes (melting, freezing) and to ice flow resulting from gravity and topographic conditions. In line with this hypothesis, the article discusses the correlation between morphometric parameters, changes in mass balance, geological characteristics of the glaciers, and the spatial distribution of analytes on the ice surface. A strong correlation (r = 0.63) is recorded between the aspect of the glaciers and values of pH and ions, whereas dissolved organic carbon (DOC) depends on the minimum elevation of the glaciers (r = 0.55) and most probably also on the development of the accumulation area. The obtained results suggest that although certain morphometric parameters largely determine the spatial distribution of analytes, the geology of the glacier bed also strongly affects the chemistry of the surface ice of glaciers in a phase of strong recession.
ERIC Educational Resources Information Center
Jaspers, Ellen; Desloovere, Kaat; Bruyninckx, Herman; Klingels, Katrijn; Molenaers, Guy; Aertbelien, Erwin; Van Gestel, Leen; Feys, Hilde
2011-01-01
The aim of this study was to measure which three-dimensional spatiotemporal and kinematic parameters differentiate upper limb movement characteristics in children with hemiplegic cerebral palsy (HCP) from those in typically developing children (TDC), during various clinically relevant tasks. We used a standardized protocol containing three reach…
ERIC Educational Resources Information Center
Morse, Anthony F.; Cangelosi, Angelo
2017-01-01
Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between…
Dynamics in the Parameter Space of a Neuron Model
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2012-06-01
Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional, thirteen-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and it is shown that, depending on the combination of parameters, a typical scenario can be preserved: for some choices of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.
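The same kind of parameter-plane diagram can be sketched on a cheaper system; below, a two-parameter plane of the Hénon map is coloured by the largest Lyapunov exponent computed from tangent-vector growth, standing in for the four-dimensional Hindmarsh-Rose flow of the paper:

```python
import numpy as np

def henon_lyapunov(a, b, n=2000, burn=200):
    """Largest Lyapunov exponent of the Henon map via tangent-vector growth."""
    x, y = 0.1, 0.1
    v = np.array([1.0, 0.0])
    s = 0.0
    for i in range(n + burn):
        J = np.array([[-2.0 * a * x, b], [1.0, 0.0]])  # Jacobian at (x_n, y_n)
        v = J @ v
        norm = np.linalg.norm(v)
        if norm == 0.0:
            return np.nan
        v /= norm                                      # renormalize tangent vector
        if i >= burn:
            s += np.log(norm)
        x, y = 1.0 - a * x * x + b * y, x              # advance the orbit
        if not np.isfinite(x) or abs(x) > 1e6:
            return np.nan                              # diverged orbit
    return s / n

# Colour a coarse (a, b) plane; exponent > 0 marks the chaotic region.
aa, bb = np.meshgrid(np.linspace(1.0, 1.4, 30), np.linspace(0.2, 0.3, 15))
lam = np.vectorize(henon_lyapunov)(aa, bb)
print("fraction of scanned plane that is chaotic:", (lam > 0).mean())
```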
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, the magnitude and timing of the stream flow peak). An automated calibration process that allows real-time updating of data and models is needed, freeing scientists to focus their effort on improving the models. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null-cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed-computing, cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null-cycle computing similar to SETI@home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
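Because each calibration run needs only a parameter vector in and a fitness statistic out, the workload is embarrassingly parallel; a minimal sketch with a process pool (the model function is a synthetic stand-in for a hydrologic simulation):

```python
import numpy as np
from multiprocessing import Pool

def run_model(theta):
    """Stand-in for one model run: takes a parameter vector, returns a
    goodness-of-fit statistic (here a synthetic quadratic cost)."""
    return float(np.sum((np.asarray(theta) - 0.5) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidates = rng.uniform(0.0, 1.0, (10000, 6))   # 10,000 parameter sets
    with Pool() as pool:                             # no inter-process communication
        costs = pool.map(run_model, candidates.tolist())
    best = candidates[int(np.argmin(costs))]
    print("best parameter set:", best)
```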
Peter, Beate; Matsushita, Mark; Raskind, Wendy H.
2013-01-01
Purpose: To investigate processing speed as a latent dimension in children with dyslexia and children and adults with typical reading skills. Method: Exploratory factor analysis (FA) was based on a sample of multigenerational families, each ascertained through a child with dyslexia. Eleven measures, 6 of them timed, represented verbal and nonverbal processes, alphabet writing, and motor sequencing in the hand and oral motor system. FA was conducted in 4 cohorts (all children, a subset of children with low reading scores, a subset of children with typical reading scores, and adults with typical reading scores; total N = 829). Results: Processing speed formed the first factor in all cohorts. Both measures of motor sequencing speed loaded on the speed factor with the other timed variables. Children with poor reading scores showed lower speed factor scores than did typical peers. The speed factor was negatively correlated with age in the adults. Conclusions: The speed dimension was observed independently of participant cohort, gender, and reading ability. Results are consistent with a unified theory of processing speed as a quadratic function of age in typical development and with slowed processing in poor readers. PMID:21081672
Commercialization of New Beam Applications
NASA Astrophysics Data System (ADS)
McKeown, Joseph
1996-05-01
The commercialization of electron processing applications is driven by demonstrated technical advantages over current practice. Mature and reliable accelerator technology has permitted more consistent product quality and the development of new processes. However, the barriers to commercial adoption are often not amenable to solution within the laboratory alone. Aspects of the base accelerator technology, plant engineering, production, project management, financing, regulatory control, product throughput and plant operational efficiency all contribute to the business risk. Experiences in building three 10 MeV, 50 kW IMPELA electron accelerators at approximately $8M each, and in achieving cumulative operational availability greater than 98% in commercial environments, have identified key parameters defining those aspects. The allowed ranges of these parameters for generating the $1.5M annual revenue that is typically necessary to support outlays of this scale are presented. Such data have been used in proposals to displace expensive chemicals in the viscose industry, sterilize sewage sludge, detoxify chemically contaminated soils and build radiation service centers for a diversity of applications. The proposals face stiff competition from traditional chemical methods. Quantitative technical and business details of these activities are provided and an attempt is made to establish realistic expectations for the exploitation of electron beam technologies in emerging applications.
Solid state light engines for bioanalytical instruments and biomedical devices
NASA Astrophysics Data System (ADS)
Jaffe, Claudia B.; Jaffe, Steven M.
2010-02-01
Lighting subsystems to drive 21st century bioanalysis and biomedical diagnostics face stringent requirements. Industry-wide demands for speed, accuracy and portability mean illumination must be intense as well as spectrally pure, switchable, stable, durable and inexpensive. Ideally, a common lighting solution could service these needs for numerous research and clinical applications. While this is a noble objective, the current technologies of arc lamps, lasers, LEDs and, most recently, light pipes have intrinsic spectral and angular traits that make a common solution untenable. Clearly, a hybrid solution is required to service the varied needs of the life sciences. Any solution begins with a critical understanding of the instrument architecture and specifications for illumination regarding power, illumination area, illumination and emission wavelengths and numerical aperture. Optimizing signal to noise requires careful optimization of these parameters within the additional constraints of instrument footprint and cost. Often the illumination design process is confined to maximizing signal to noise without the ability to adjust any of the above parameters. A hybrid solution leverages the best of the existing lighting technologies. This paper will review the design process for this highly constrained but typical optical optimization scenario for numerous bioanalytical instruments and biomedical devices.
Polyferric sulphate: preparation, characterisation and application in coagulation experiments.
Zouboulis, A I; Moussas, P A; Vasilakou, F
2008-07-15
The process of coagulation is a core environmental protection technology, mainly used in water and wastewater treatment facilities. Research is now focused on the development of inorganic pre-polymerised coagulants. A characteristic example is PFS (polyferric sulphate), a relatively new pre-polymerised inorganic coagulant with high cationic charge. In this paper, the roles of major parameters in the preparation stages of PFS, including temperature, type of chemical reagents, the ratio r = [OH]/[Fe] and the rate of base addition, were investigated. Furthermore, the prepared PFS was characterised based on typical properties, such as the percentage of polymerised iron present in the compound, z-potential, pH, etc. Moreover, the dynamics of the coagulation process were examined by means of a Photometric Dispersion Analyzer (PDA). Finally, the coagulation efficiency of PFS in treating a kaolin suspension and biologically pre-treated wastewater was evaluated in comparison with the respective conventional coagulant. The results indicate that certain parameters, such as the r value, the rate of base addition and the duration and temperature of the polymerisation stage, significantly affected the properties of the PFS. Additionally, the prepared polymerised PFS coagulants exhibit significantly better coagulation performance than the respective non-polymerised one, i.e. ferric sulphate.
Magnetization Dynamics of Amorphous Ribbons and Wires Studied by Inductance Spectroscopy
Betancourt, Israel
2010-01-01
Inductance spectroscopy is a particular formulation variant of the well known complex impedance formalism typically used for the electric characterization of dielectric, ferroelectric, and piezoelectric materials. It has been successfully exploited as a versatile tool for characterization of the magnetization dynamics in amorphous ribbons and wires by means of simple experiments involving coils for sample holding and impedance analyzer equipment. This technique affords the resolution of the magnetization processes in soft magnetic materials, in terms of reversible deformation of pinned domain walls, domain wall displacements and spin rotation, for which characteristic parameters such as the alloy initial permeability and the relaxation frequencies, indicating the dispersion of each process, can be defined. Additionally, these parameters can be correlated with chemical composition variation, size effects and induced anisotropies, leading to a more physical insight for the understanding of the frequency dependent magnetic response of amorphous alloys, which is of prime interest for the development of novel applications in the field of telecommunication and sensing technologies. In this work, a brief overview, together with recent progress on the magnetization dynamics of amorphous ribbons, wires, microwires and biphase wires, is presented and discussed for the intermediate frequency interval between 10 Hz and 13 MHz. PMID:28879975
Palavecino Prpich, Noelia Z; Castro, Marcela P; Cayré, María E; Garro, Oscar A; Vignolo, Graciela M
2015-01-01
Lactic acid bacteria (LAB) and coagulase negative cocci (CNC) were isolated from artisanal dry sausages sampled from the northeastern region of Chaco, Argentina. In order to evaluate their performance in situ and considering technological features of the isolated strains, two mixed selected autochthonous starter cultures (SAS) were designed: (i) SAS-1 (Lactobacillus sakei 487 + Staphylococcus vitulinus C2) and (ii) SAS-2 (L. sakei 442 + S. xylosus C8). Cultures were introduced into dry sausage manufacturing process at a local small-scale facility. Microbiological and physicochemical parameters were monitored throughout fermentation and ripening periods, while sensory attributes of the final products were evaluated by a trained panel. Lactic acid bacteria revealed their ability to colonize and adapt properly to the meat matrix, inhibiting the growth of spontaneous microflora and enhancing safety and hygienic profile of the products. Both SAS showed a beneficial effect on lipid oxidation and texture of the final products. Staphylococcus vitulinus C2, from SAS-1, promoted a better redness of the final product. Sensory profile revealed that SAS addition preserved typical sensory attributes. Introduction of these cultures could provide an additional tool to standardize manufacturing processes aiming to enhance safety and quality while keeping typical sensory attributes of regional dry fermented sausages.
Ship Detection in SAR Image Based on the Alpha-stable Distribution
Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng
2008-01-01
This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm in spaceborne synthetic aperture radar (SAR) image based on Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe statistical characteristics of a SAR image background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to the pixel identified as possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm. Meanwhile, known ship location data during the time of RADARSAT-1 SAR image acquisition is used to validate ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
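A minimal sketch of the modeling contrast the abstract describes, with made-up stable parameters rather than values fitted to RADARSAT-1 data; real CFAR detectors use sliding guard and background windows, while a single global background sample is used here for brevity:

```python
# Threshold heavy-tailed clutter at a fixed false-alarm rate under a
# Gaussian model versus an alpha-stable model. With spiky clutter the
# Gaussian threshold misjudges the tail; the stable quantile does not.
import numpy as np
from scipy.stats import levy_stable, norm

rng = np.random.default_rng(0)
alpha, beta = 1.6, 1.0                    # assumed stable parameters
clutter = np.abs(levy_stable.rvs(alpha, beta, size=2000, random_state=rng))

pfa = 1e-3                                # design probability of false alarm

# Gaussian CFAR threshold: mean + k * std, k from the normal quantile.
# Note: the sample std is itself unstable for heavy-tailed clutter.
t_gauss = clutter.mean() + norm.ppf(1 - pfa) * clutter.std()

# Alpha-stable threshold: tail quantile of the (here, assumed known)
# stable model; in practice the parameters are estimated from the data.
t_stable = levy_stable.ppf(1 - pfa, alpha, beta, loc=0, scale=1)

for name, t in [("Gaussian", t_gauss), ("alpha-stable", t_stable)]:
    far = (clutter > t).mean()
    print(f"{name:12s} threshold {t:8.2f}, empirical FAR {far:.4f}")
```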
Technology CAD for integrated circuit fabrication technology development and technology transfer
NASA Astrophysics Data System (ADS)
Saha, Samar
2003-07-01
In this paper, systematic simulation-based methodologies for integrated circuit (IC) manufacturing technology development and technology transfer are presented. In technology development, technology computer-aided design (TCAD) tools are used to optimize the device and process parameters to develop a new generation of IC manufacturing technology by reverse engineering from the target product specifications. In technology transfer to a manufacturing co-location, TCAD is used for process centering with respect to the high-volume manufacturing equipment of the target manufacturing facility. A quantitative model is developed to demonstrate the potential benefits of the simulation-based methodology in reducing the cycle time and cost of typical technology development and technology transfer projects over the traditional practices. The strategy for predictive simulation to improve the effectiveness of a TCAD-based project is also discussed.
Extending simulation modeling to activity-based costing for clinical procedures.
Glick, N D; Blackmore, C C; Zelman, W N
2000-04-01
A simulation model was developed to measure costs in an Emergency Department setting for patients presenting with possible cervical-spine injury who needed radiological imaging. Simulation, a tool widely used to account for process variability but typically focused on utilization and throughput analysis, is being introduced here as a realistic means to perform an activity-based-costing (ABC) analysis, because traditional ABC methods have difficulty coping with process variation in healthcare. Though the study model has a very specific application, it can be generalized to other settings simply by changing the input parameters. In essence, simulation was found to be an accurate and viable means to conduct an ABC analysis; in fact, the output provides more complete information than could be achieved through other conventional analyses, which gives management more leverage with which to negotiate contractual reimbursements.
Numerical models of cell death in RF ablation with monopolar and bipolar probes
NASA Astrophysics Data System (ADS)
Bright, Benjamin M.; Pearce, John A.
2013-02-01
Radio frequency (RF) ablation is used clinically to treat unresectable tumors. Finite element modeling has proven useful in treatment planning and applicator design. Typically, isotherms in the mid-50s °C have been used as the parameter of assessment in these models. We compare and contrast isotherms for multiple known Arrhenius thermal damage predictors, including collagen denaturation, vascular disruption, liver coagulation and cell death. Models for RITA probe geometries are included in the study. Comparison to isotherms is sensible when the activation time is held constant, but varies considerably when heating times vary. The purpose of this paper is to demonstrate the importance of looking at specific processes and keeping track of the methods used to derive the Arrhenius coefficients in order to study the extremely complex cell death processes due to thermal therapies.
Photon merging and splitting in electromagnetic field inhomogeneities
NASA Astrophysics Data System (ADS)
Gies, Holger; Karbstein, Felix; Seegert, Nico
2016-04-01
We investigate photon merging and splitting processes in inhomogeneous, slowly varying electromagnetic fields. Our study is based on the three-photon polarization tensor following from the Heisenberg-Euler effective action. We put special emphasis on deviations from the well-known constant field results, also revisiting the selection rules for these processes. In the context of high-intensity laser facilities, we analytically determine compact expressions for the number of merged/split photons as obtained in the focal spots of intense laser beams. For the parameter range of typical petawatt class laser systems as pump and probe, we provide estimates for the numbers of signal photons attainable in an actual experiment. The combination of frequency upshifting, polarization dependence and scattering off the inhomogeneities renders photon merging an ideal signature for the experimental exploration of nonlinear quantum vacuum properties.
Enhanced Characterization of Niobium Surface Topography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Xu, Hui Tian, Charles Reece, Michael Kelley
2011-12-01
Surface topography characterization is a continuing issue for the Superconducting Radio Frequency (SRF) particle accelerator community. Efforts are underway both to improve surface topography and to improve its characterization and analysis using various techniques. In measurement of topography, Power Spectral Density (PSD) is a promising method to quantify typical surface parameters and develop scale-specific interpretations. PSD can also be used to indicate how chemical processes modify the topography at different scales. However, generating an accurate and meaningful topographic PSD of an SRF surface requires careful analysis and optimization. In this report, polycrystalline surfaces with different process histories are sampled with AFM and stylus/white light interferometer profilometers and analyzed to trace topography evolution at different scales during etching or polishing. Moreover, an optimized PSD analysis protocol to serve SRF surface characterization needs is presented.
Li, Zhaofu; Liu, Hongyu; Luo, Chuan; Li, Yan; Li, Hengpeng; Pan, Jianjun; Jiang, Xiaosan; Zhou, Quansuo; Xiong, Zhengqin
2015-05-01
The Hydrological Simulation Program-Fortran (HSPF), which is a hydrological and water-quality computer model that was developed by the United States Environmental Protection Agency, was employed to simulate runoff and nutrient export from a typical small watershed in a hilly eastern monsoon region of China. First, a parameter sensitivity analysis was performed to assess how changes in the model parameters affect runoff and nutrient export. Next, the model was calibrated and validated using measured runoff and nutrient concentration data. The Nash-Sutcliffe efficiency (ENS) values of the yearly runoff were 0.87 and 0.69 for the calibration and validation periods, respectively. For storm runoff events, the ENS values were 0.93 for the calibration period and 0.47 for the validation period. Antecedent precipitation and soil moisture conditions can affect the simulation accuracy of storm event flow. The ENS values for the total nitrogen (TN) export were 0.58 for the calibration period and 0.51 for the validation period. In addition, the correlation coefficients between the observed and simulated TN concentrations were 0.84 for the calibration period and 0.74 for the validation period. For phosphorus export, the ENS values were 0.89 for the calibration period and 0.88 for the validation period. In addition, the correlation coefficients between the observed and simulated orthophosphate concentrations were 0.96 and 0.94 for the calibration and validation periods, respectively. The nutrient simulation results are generally satisfactory even though the parameter-lumped HSPF model cannot represent the effects of the spatial pattern of land cover on nutrient export. The model parameters obtained in this study could serve as reference values for applying the model to similar regions. In addition, HSPF can properly describe the characteristics of water quantity and quality processes in this area. After adjustment, calibration, and validation of the parameters, the HSPF model is suitable for hydrological and water-quality simulations in watershed planning and management and for designing best management practices.
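For reference, the Nash-Sutcliffe efficiency used to score these simulations is one line of algebra; a small self-contained sketch with illustrative numbers (not data from the study):

```python
# Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the simulation
# is no better than predicting the observed mean.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - np.mean(observed)) ** 2
    )

# Toy check with made-up flows
obs = [1.2, 3.4, 2.8, 5.1, 4.0]
sim = [1.0, 3.6, 2.5, 5.4, 3.8]
print(f"ENS = {nash_sutcliffe(obs, sim):.2f}")
```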
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise compared to least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters.
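The filtering claim can be illustrated compactly: expanding a smooth signal in Legendre polynomials and truncating the series removes broadband noise with no phase shift. A minimal sketch with assumed amplitude and decay values, not the authors' implementation:

```python
# Expand a noisy exponential in Legendre polynomials and keep only the
# low-order terms. Smooth signals concentrate in a few coefficients, so
# truncation removes noise without the phase shift of a causal lowpass.
import numpy as np
from numpy.polynomial import legendre

t = np.linspace(-1.0, 1.0, 500)            # time axis rescaled to [-1, 1]
clean = 2.0 * np.exp(-3.0 * (t + 1.0))     # assumed amplitude and decay
noisy = clean + np.random.default_rng(1).normal(0.0, 0.2, t.size)

coeffs = legendre.legfit(t, noisy, deg=10)  # project onto Legendre basis
filtered = legendre.legval(t, coeffs)       # low-dimensional reconstruction

print(f"noise RMS before: {np.std(noisy - clean):.3f}")
print(f"noise RMS after:  {np.std(filtered - clean):.3f}")
```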
Drescher, Philipp; Sarhan, Mohamed; Seitz, Hermann
2016-12-01
Selective electron beam melting (SEBM) is a relatively new additive manufacturing technology for metallic materials. Specific to this technology is the sintering of the metal powder prior to the melting process. The sintering process has disadvantages for post-processing. The post-processing of parts produced by SEBM typically involves the removal of semi-sintered powder through the use of a powder blasting system. Furthermore, the sintering of large areas before melting decreases productivity. Current investigations are aimed at improving the sintering process in order to achieve better productivity, geometric accuracy, and resolution. In this study, the focus lies on the modification of the sintering process. In order to investigate and improve the sintering process, highly porous titanium test specimens with various scan speeds were built. The aim of this study was to decrease build time with comparable mechanical properties of the components and to remove the residual powder more easily after a build. By only sintering the area in which the melt pool for the components is created, an average productivity improvement of approx. 20% was achieved. Tensile tests were carried out, and the measured mechanical properties show comparatively or slightly improved values compared with the reference.
NASA Astrophysics Data System (ADS)
Jupé, M.; Mende, M.; Kolleck, C.; Ristau, D.; Gallais, L.; Mangote, B.
2011-12-01
Femtosecond laser technology is gaining importance in industrial applications. In this context, a new generation of compact and low-cost laser sources has to be provided on a commercial basis. Typical pulse durations of these sources are specified in the range from a few hundred femtoseconds up to a few picoseconds, and typical wavelengths are centered around 1030-1080 nm. As a consequence, the demands imposed on high-power optical components for these laser sources are also rapidly increasing, especially with respect to their power-handling capability in the ultra-short pulse range. The present contribution is dedicated to some aspects of improving this quality parameter of optical coatings. The study is based on a set of hafnia and silica mixtures with different compositions and optical band gaps. Under ultra-short pulse laser irradiation, this material combination displays effects that are typical of thermal processes; for instance, melting has been observed in the morphology of damaged sites. In this context, models for a prediction of the laser damage thresholds and scaling laws are scrutinized and have been modified by calculating the energy of the electron ensemble. Furthermore, a simple first-order approach for the calculation of the temperature was included.
An in-mold packaging process for plastic fluidic devices.
Yoo, Y E; Lee, K H; Je, T J; Choi, D S; Kim, S K
2011-01-01
Micro- or nanofluidic devices have many channel shapes to deliver chemical solutions, body fluids or other fluids. The channels in these devices must be covered to prevent the fluids from overflowing or leaking. A typical method to fabricate an enclosed channel is to bond or weld a cover plate to a channel plate. This solid-to-solid bonding process, however, takes a considerable amount of time for mass production. In this study, a new process for molding a cover layer that can enclose open micro- or nanochannels without solid-to-solid bonding is proposed and its feasibility is evaluated. First, based on the design of a model microchannel, a brass microchannel master core was machined and a plastic microchannel platform was injection-molded. Using this molded platform, a series of experiments was performed for four process or mold design parameters. Feasible conditions were successfully found for enclosing the channels, without filling the microchannels, during the injection molding of a cover layer over the plastic microchannel platform. In addition, the bond strength and seal performance were estimated in comparison with those achieved by conventional bonding or welding processes.
Advanced Control Synthesis for Reverse Osmosis Water Desalination Processes.
Phuc, Bui Duc Hong; You, Sam-Sang; Choi, Hyeung-Six; Jeong, Seok-Kwon
2017-11-01
In this study, robust control synthesis has been applied to a reverse osmosis desalination plant whose product water flow and salinity are chosen as the two controlled variables. The reverse osmosis process was selected for study since it typically uses less energy than thermal distillation. The aim of the robust design is to overcome the limitations of classical controllers in dealing with large parametric uncertainties, external disturbances, sensor noise, and unmodeled process dynamics. The analyzed desalination process is modeled as a multi-input multi-output (MIMO) system with varying parameters. The control system is decoupled using a feed-forward decoupling method to reduce the interactions between control channels. Both nominal and perturbed reverse osmosis systems have been analyzed using structured singular values for their stability and performance. Simulation results show that the system responses meet all the control requirements against various uncertainties. Finally, the reduced-order controller provides excellent robust performance, achieving decoupling, disturbance attenuation, and noise rejection. It can help to reduce membrane cleanings, increase robustness against uncertainties, and lower the energy consumption for process monitoring.
NASA Astrophysics Data System (ADS)
Bin, Wang; Dong, Shiyun; Yan, Shixing; Gang, Xiao; Xie, Zhiwei
2018-03-01
Picosecond lasers have ultrashort pulse widths and ultrastrong peak powers, which makes them widely used in the field of micro- and nanoscale fabrication. Polydimethylsiloxane (PDMS) is a typical silicone elastomer with good hydrophobicity. In order to further improve the hydrophobicity of PDMS, a picosecond laser was used to fabricate a grid-like microstructure on the surface of PDMS, and the relationship between the hydrophobicity of PDMS and the surface microstructure and laser processing parameters, such as the number of processing passes and the cell spacing, was studied. The results show that, compared with unprocessed PDMS, the presence of the surface microstructure significantly improved the hydrophobicity of PDMS. When the number of processing passes is constant, the hydrophobicity of PDMS decreases with increasing cell spacing. However, when the cell spacing is fixed, the hydrophobicity of PDMS first increases and then decreases with an increasing number of processing passes. In particular, when the laser processing is repeated 6 times and the cell spacing is 50 μm, the contact angle of PDMS increased from 113° to 154°, reaching the superhydrophobic level.
Kljajic, Alen; Bester-Rogac, Marija; Klobcar, Andrej; Zupet, Rok; Pejovnik, Stane
2013-02-01
The active pharmaceutical ingredient orlistat is usually manufactured using a semi-synthetic procedure, producing a crude product and complex mixtures of highly related impurities with minimal side-chain structure variability. It is therefore crucial for the overall success of industrial/pharmaceutical application to develop an effective purification process. In this communication, we present a newly developed crystallization process based on water-in-oil reversed micelles and microemulsion systems. The physicochemical properties of the crystallization media were varied through the surfactant and water composition, and the impact on efficiency was measured through variation of these two parameters. Using precisely defined properties of the dispersed water phase in the crystallization media, a highly efficient separation process in terms of selectivity and yield was developed. Small-angle X-ray scattering, high-performance liquid chromatography, mass spectrometry, and scanning electron microscopy were used to monitor and analyze the separation processes and the orlistat products obtained. Typical process characteristics, especially selectivity and yield with regard to reference examples, were compared and discussed. Copyright © 2012 Wiley Periodicals, Inc.
Kleyböcker, A; Liebrich, M; Verstraete, W; Kraume, M; Würdemann, H
2012-11-01
Early warning indicators for process failures were investigated to develop a reliable method to increase the production efficiency of biogas plants. Organic overloads by the excessive addition of rapeseed oil were used to provoke a decrease in the gas production rate. Besides typical monitoring parameters, such as pH, methane and hydrogen contents, biogas production rate and concentrations of fatty acids, the carbon dioxide content and the concentrations of calcium and phosphate were monitored. The concentration ratio of volatile fatty acids to calcium acted as an early warning indicator (EWI-VFA/Ca). The EWI-VFA/Ca always clearly and reliably indicated a process imbalance by exhibiting a 2- to 3-fold increase 3-7 days before the process failure occurred. At this time, it was still possible to take countermeasures successfully. Furthermore, increases in the phosphate concentration and in the concentration ratio of phosphate to calcium also indicated a process failure, in some cases even earlier than the EWI-VFA/Ca. Copyright © 2012 Elsevier Ltd. All rights reserved.
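The indicator rule itself is simple to state in code. A minimal sketch with illustrative numbers, not data from the study:

```python
# Flag a process imbalance when the VFA/Ca concentration ratio rises
# 2- to 3-fold over its baseline, per the early-warning rule above.
def ewi_vfa_ca(vfa, calcium, baseline, factor=2.0):
    """Return True if the VFA/Ca ratio exceeds `factor` times baseline."""
    ratio = vfa / calcium
    return ratio > factor * baseline

baseline_ratio = 0.8           # hypothetical value from stable operation
daily = [(1.6, 2.0), (1.9, 2.0), (4.1, 1.9), (6.2, 1.7)]  # (VFA, Ca) per day

for day, (vfa, ca) in enumerate(daily, start=1):
    if ewi_vfa_ca(vfa, ca, baseline_ratio):
        print(f"day {day}: warning, VFA/Ca = {vfa / ca:.2f}")
```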
NASA Astrophysics Data System (ADS)
Enell, Carl-Fredrik; Kozlovsky, Alexander; Turunen, Tauno; Ulich, Thomas; Välitalo, Sirkku; Scotto, Carlo; Pezzopane, Michael
2016-03-01
This paper presents a comparison between standard ionospheric parameters manually and automatically scaled from ionograms recorded at the high-latitude Sodankylä Geophysical Observatory (SGO, ionosonde SO166, 64.1° geomagnetic latitude), located in the vicinity of the auroral oval. The study is based on 2610 ionograms recorded during the period June-December 2013. The automatic scaling was made by means of the Autoscala software. A few typical examples are shown to outline the method, and statistics are presented regarding the differences between manually and automatically scaled values of F2, F1, E and sporadic E (Es) layer parameters. We draw the conclusions that: 1. The F2 parameters scaled by Autoscala, foF2 and M(3000)F2, are reliable. 2. F1 is identified by Autoscala in significantly fewer cases (about 50 %) than in the manual routine, but if identified the values of foF1 are reliable. 3. Autoscala frequently (30 % of the cases) detects an E layer when the manual scaling process does not. When identified by both methods, the Autoscala E-layer parameters are close to those manually scaled, foE agreeing to within 0.4 MHz. 4. Es and parameters of Es identified by Autoscala are in many cases different from those of the manual scaling. Scaling of Es at auroral latitudes is often a difficult task.
NASA Astrophysics Data System (ADS)
Wilson, John P.
2012-01-01
This article examines how the methods and data sources used to generate DEMs and calculate land surface parameters have changed over the past 25 years. The primary goal is to describe the state of the art for a typical digital terrain modeling workflow that starts with data capture, continues with data preprocessing and DEM generation, and concludes with the calculation of one or more primary and secondary land surface parameters. The article first describes some of the ways in which LiDAR and RADAR remote sensing technologies have transformed the sources and methods for capturing elevation data. It next discusses the need for preprocessing DEMs, the various methods that are currently used to do so, and some of the challenges that confront those who tackle these tasks. The bulk of the article describes some of the subtleties involved in calculating the primary land surface parameters that are derived directly from DEMs without additional inputs, and the two sets of secondary land surface parameters that are commonly used to model solar radiation and the accompanying interactions between the land surface and the atmosphere on the one hand, and water flow and related surface processes on the other. It concludes with a discussion of the various kinds of errors that are embedded in DEMs, how these may be propagated and carried forward in calculating various land surface parameters, and the consequences of this state of affairs for the modern terrain analyst.
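As an example of primary land surface parameters computed directly from a DEM, slope and aspect follow from finite-difference gradients. A minimal sketch on a synthetic grid (aspect conventions vary between packages):

```python
# Slope and aspect from a DEM via finite differences.
import numpy as np

dem = np.add.outer(np.linspace(0, 50, 100), np.linspace(0, 20, 100))  # tilted plane
cell = 10.0                                  # grid spacing in metres

dz_dy, dz_dx = np.gradient(dem, cell)        # derivatives along rows, columns
slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0  # one common convention

print(f"mean slope:  {slope.mean():.2f} degrees")
print(f"mean aspect: {aspect.mean():.1f} degrees")
```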
NASA Astrophysics Data System (ADS)
Zhu, Na
This thesis presents an overview of previous research work on the dynamic characteristics and energy performance of buildings due to the integration of PCMs. The research work on dynamic characteristics and energy performance of buildings using PCMs, both with and without air-conditioning, is reviewed. Given the particular interest in using PCMs for free cooling and peak load shifting, specific research efforts on both subjects are reviewed separately. A simplified physical dynamic model of building structures integrated with SSPCM (shape-stabilized phase change material) is developed and validated in this study. The simplified physical model represents the wall by 3 resistances and 2 capacitances and the PCM layer by 4 resistances and 2 capacitances, respectively; the key issue is the parameter identification of the model. This thesis also presents studies on the thermodynamic characteristics of buildings enhanced by PCM, on the impacts of PCM on the building cooling load and peak cooling demand in different climates and seasons, and on the optimal operation and control strategies to reduce energy consumption and energy cost by reducing air-conditioning energy consumption and peak load. An office building floor with a typical variable air volume (VAV) air-conditioning system is used and simulated as the reference building in the comparison study. The envelopes of the studied building are further enhanced by integrating the PCM layers. The building system is tested in two selected cities with typical climates in China, Hong Kong and Beijing. The cold charge and discharge processes, the operation and control strategies of night ventilation, and the air temperature set-point reset strategy for minimizing energy consumption and electricity cost are studied. This thesis presents the simulation test platform, the test results on the cold storage and discharge processes, and the air-conditioning energy consumption and demand reduction potentials in typical air-conditioning seasons in typical Chinese cities, as well as the impacts of operation and control strategies.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2008-01-01
The method of maximum-likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…
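The two-step practice the abstract refers to (item parameters fixed from a calibration sample, ability then estimated by maximum likelihood) can be sketched for the 2PL model with made-up item parameters:

```python
# ML ability estimation conditioning on known item parameters (2PL model).
# Item parameters below are illustrative, not from any calibration study.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0])   # discriminations (treated as known)
b = np.array([-0.5, 0.0, 0.7, 1.2])  # difficulties (treated as known)
x = np.array([1, 1, 0, 0])           # observed item responses

def neg_log_lik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response probability
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded")
print(f"ML ability estimate: {res.x:.2f}")
```

In practice the item parameters carry estimation error of their own, which is exactly the issue the cited work addresses.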
Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.
2008-01-01
We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability for having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: i), free Brownian motion of the tracer, ii), hop diffusion of the tracer in a periodic meshwork of squares, and iii), transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
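A minimal sketch of the simulate-and-test idea for the simplest case (free Brownian motion), using a two-sample KS test as the nonparametric comparison; the statistic and grid are illustrative, not the authors' exact choices:

```python
# Simulate the diffusion model on a grid of parameter values and, at each
# setting, compare simulated and "experimental" step-size distributions.
# The resulting p-values map out which parameters fit the data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
dt = 0.01                                   # frame interval (s)

def step_sizes(D, n=2000):
    # Free Brownian motion: per-frame displacements are Gaussian with
    # variance 2*D*dt per coordinate.
    dx = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n, 2))
    return np.hypot(dx[:, 0], dx[:, 1])

experiment = step_sizes(D=0.5)              # stand-in for measured data

for D in [0.1, 0.3, 0.5, 0.7, 1.0]:         # parameter grid (um^2/s)
    p = ks_2samp(experiment, step_sizes(D)).pvalue
    print(f"D = {D:.1f}: p = {p:.3f}")
```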
Dasgupta, Nilanjan; Carin, Lawrence
2005-04-01
Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters is then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here by a normal-mode framework. In addition to imaging, extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code, in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI is presented along with the feature extraction and target classification via the RVM.
The Effects of Earth's Outer Core's Viscosity on Geodynamo Models
NASA Astrophysics Data System (ADS)
Dong, C.; Jiao, L.; Zhang, H.
2017-12-01
The geodynamo process is controlled by mathematical equations and input parameters. To study the effects of parameters on the geodynamo system, the MoSST model has been used to simulate geodynamo outputs under different values of the outer core's viscosity ν. Spanning ν over nearly three orders of magnitude with other parameters fixed, we studied the variation of each physical field and its typical length scale. We find that variation of ν affects the velocity field intensely. The magnetic field almost decreases monotonically with increasing ν, while the variation is no larger than 30%. The temperature perturbation increases monotonically with ν, but by a very small magnitude (6%). The averaged velocity field (u) of the liquid core increases with ν following a simple fitted scaling relation: u ∝ ν^0.49. The phenomenon that u increases with ν arises essentially because increasing ν breaks the Taylor-Proudman constraint and drops the critical Rayleigh number, and thus u increases under the same thermal driving force. The force balance is analyzed, and the balance mode shifts with variation of ν. When compared with former studies of scaling laws, this study supports the conclusion that in a certain parameter range the magnetic field strength does not vary much with the viscosity, but opposes the assumption that the velocity field has nothing to do with the outer core viscosity.
A system level model for preliminary design of a space propulsion solid rocket motor
NASA Astrophysics Data System (ADS)
Schumacher, Daniel M.
Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near-optimal performance of subsystems and components. Conversely, there is no system-level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high-utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues a feasible rather than the most favorable design. Classical optimization is an extremely challenging method when dealing with the complex behavior of an integrated system. The complexity and combinations of system configurations make the number of design parameters to be traded off unmanageable when manual techniques are used. Existing multi-disciplinary optimization approaches generally address estimating ratios and correlations rather than utilizing mathematical models. The developed system-level model utilizes the Genetic Algorithm to perform the necessary population searches to efficiently replace the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system-level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near-optimal design, is achievable. The process of developing the motor performance estimate and the system-level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints in pursuit of the best possible design.
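A minimal sketch of the genetic-algorithm population search described above, with a stand-in objective rather than a motor performance model:

```python
# Selection, crossover, and mutation over real-valued design vectors.
# The fitness function is a hypothetical placeholder for design utility.
import random

N_PARAMS, POP, GENS = 6, 40, 60

def fitness(design):
    # Hypothetical utility: closer to a target vector is better.
    return -sum((x - 0.7) ** 2 for x in design)

def mutate(design, rate=0.1):
    return [x + random.gauss(0, 0.05) if random.random() < rate else x
            for x in design]

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)      # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 4]                  # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]

print(f"best fitness: {fitness(max(pop, key=fitness)):.4f}")
```

The appeal for system-level design is that the population search handles many parameters and constraints without the gradients or manual iteration that classical optimization would require.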
Rrsm: The European Rapid Raw Strong-Motion Database
NASA Astrophysics Data System (ADS)
Cauzzi, C.; Clinton, J. F.; Sleeman, R.; Domingo Ballesta, J.; Kaestli, P.; Galanis, O.
2014-12-01
We introduce the European Rapid Raw Strong-Motion database (RRSM), a Europe-wide system that provides parameterised strong motion information, as well as access to waveform data, within minutes of the occurrence of strong earthquakes. The RRSM significantly differs from traditional earthquake strong motion dissemination in Europe, which has focused on providing reviewed, processed strong motion parameters, typically with significant delays. As the RRSM provides rapid open access to raw waveform data and metadata and does not rely on external manual waveform processing, RRSM information is tailored to seismologists and strong-motion data analysts, earthquake and geotechnical engineers, international earthquake response agencies and the educated general public. Access to the RRSM database is via a portal at http://www.orfeus-eu.org/rrsm/ that allows users to query earthquake information, peak ground motion parameters and amplitudes of spectral response; and to select and download earthquake waveforms. All information is available within minutes of any earthquake with magnitude ≥ 3.5 occurring in the Euro-Mediterranean region. Waveform processing and database population are performed using the waveform processing module scwfparam, which is integrated in SeisComP3 (SC3; http://www.seiscomp3.org/). Earthquake information is provided by the EMSC (http://www.emsc-csem.org/) and all the seismic waveform data is accessed at the European Integrated waveform Data Archive (EIDA) at ORFEUS (http://www.orfeus-eu.org/index.html), where all on-scale data is used in the fully automated processing. As the EIDA community is continually growing, the already significant number of strong motion stations is also increasing and the importance of this product is expected to also increase. Real-time RRSM processing started in June 2014, while past events have been processed in order to provide a complete database back to 2005.
ERIC Educational Resources Information Center
Maxwell, Scott E.; Cole, David A.; Mitchell, Melissa A.
2011-01-01
Maxwell and Cole (2007) showed that cross-sectional approaches to mediation typically generate substantially biased estimates of longitudinal parameters in the special case of complete mediation. However, their results did not apply to the more typical case of partial mediation. We extend their previous work by showing that substantial bias can…
Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis
ERIC Educational Resources Information Center
Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia
2016-01-01
Meta-analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…
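For reference, the DerSimonian and Laird estimator mentioned above is a short method-of-moments computation; a sketch with illustrative numbers:

```python
# DerSimonian-Laird estimate of the between-study variance tau^2
# from study effects y_i and within-study variances v_i.
import numpy as np

def dersimonian_laird(y, v):
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)              # Cochran's Q
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / denom)   # truncated at zero

# Toy meta-analysis (illustrative numbers only)
effects = [0.30, 0.10, 0.45, 0.25, 0.05]
variances = [0.02, 0.03, 0.025, 0.04, 0.03]
print(f"tau^2 = {dersimonian_laird(effects, variances):.4f}")
```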
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit, metric-based interactive framework for identification of a small (typically fewer than 10), meaningful and diverse subset of calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of the flow parameters of a SWAT model (Soil and Water Assessment Tool) designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of automatic and manual strategies for parameter estimation of computationally expensive watershed models.
NASA Astrophysics Data System (ADS)
Scharnagl, Benedikt; Durner, Wolfgang
2013-04-01
Models are inherently imperfect because they simplify processes that are themselves imperfectly known and understood. Moreover, the input variables and parameters needed to run a model are typically subject to various sources of error. As a consequence of these imperfections, model predictions will always deviate from corresponding observations. In most applications in soil hydrology, these deviations are clearly not random but rather show a systematic structure. From a statistical point of view, this systematic mismatch may be a reason for concern because it violates one of the basic assumptions made in inverse parameter estimation: the assumption of independence of the residuals. But what are the consequences of simply ignoring the autocorrelation in the residuals, as is current practice in soil hydrology? Are the parameter estimates still valid even though the statistical foundation they are based on is partially collapsed? Theory and practical experience from other fields of science have shown that violation of the independence assumption will result in overconfident uncertainty bounds and that in some cases it may lead to significantly different optimal parameter values. In our contribution, we present three soil hydrological case studies, in which the effect of autocorrelated residuals on the estimated parameters was investigated in detail. We explicitly accounted for autocorrelated residuals using a formal likelihood function that incorporates an autoregressive model. The inverse problem was posed in a Bayesian framework, and the posterior probability density function of the parameters was estimated using Markov chain Monte Carlo simulation. In contrast to many other studies in related fields of science, and quite surprisingly, we found that the first-order autoregressive model, often abbreviated as AR(1), did not work well in the soil hydrological setting. We showed that a second-order autoregressive, or AR(2), model performs much better in these applications, leading to parameter and uncertainty estimates that satisfy all the underlying statistical assumptions. For theoretical reasons, these estimates are deemed more reliable than those estimates based on the neglect of autocorrelation in the residuals. In compliance with theory and results reported in the literature, our results showed that parameter uncertainty bounds were substantially wider if autocorrelation in the residuals was explicitly accounted for, and also the optimal parameter values were slightly different in this case. We argue that the autoregressive model presented here should be used as a matter of routine in inverse modeling of soil hydrological processes.
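The core of the approach can be sketched as a conditional Gaussian log-likelihood on AR(2) innovations; this is an illustrative simplification (the study works in a full Bayesian MCMC framework, estimating the AR coefficients jointly with the model parameters):

```python
# Model residuals as an AR(2) process and evaluate a conditional Gaussian
# log-likelihood on the innovations, instead of assuming independence.
import numpy as np

def ar2_log_likelihood(residuals, phi1, phi2, sigma):
    e = np.asarray(residuals, float)
    # Innovations: what remains after removing the AR(2) structure
    innov = e[2:] - phi1 * e[1:-1] - phi2 * e[:-2]
    n = innov.size
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.sum(innov**2) / sigma**2)

rng = np.random.default_rng(3)
# Synthetic autocorrelated residuals for illustration
r = np.zeros(500)
for t in range(2, 500):
    r[t] = 0.6 * r[t - 1] + 0.2 * r[t - 2] + rng.normal(0, 0.1)

print(f"logL at true coefficients:  {ar2_log_likelihood(r, 0.6, 0.2, 0.1):.1f}")
print(f"logL assuming independence: {ar2_log_likelihood(r, 0.0, 0.0, 0.1):.1f}")
```

The gap between the two values shows how much probability mass the independence assumption throws away, which is what distorts the uncertainty bounds.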
Merello, Paloma; García-Diego, Fernando-Juan; Zarzo, Manuel
2014-08-01
Chemometrics has been applied successfully since the 1990s for the multivariate statistical control of industrial processes. A new area of interest for these tools is the microclimatic monitoring of cultural heritage. Sensors record climatic parameters over time and statistical data analysis is performed to obtain valuable information for preventive conservation. A case study of an open-air archaeological site is presented here. A set of 26 temperature and relative humidity data-loggers was installed in four rooms of Ariadne's house (Pompeii). If climatic values are recorded versus time at different positions, the resulting data structure is equivalent to records of physical parameters registered at several points of a continuous chemical process. However, there is an important difference in this case: continuous processes are controlled to reach a steady state, whilst open-air sites undergo tremendous fluctuations. Although data from continuous processes are usually column-centred prior to applying principal components analysis, it turned out that another pre-treatment (row-centred data) was more convenient for the interpretation of components and to identify abnormal patterns. The detection of typical trajectories was more straightforward by dividing the whole monitored period into several sub-periods, because the marked climatic fluctuations throughout the year affect the correlation structures. The proposed statistical methodology is of interest for the microclimatic monitoring of cultural heritage, particularly in the case of open-air or semi-confined archaeological sites. Copyright © 2014 Elsevier B.V. All rights reserved.
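The pre-treatment contrast is easy to demonstrate: column-centring leaves the shared climatic cycle to dominate the first component, while row-centring removes it so anomalous sensors stand out. A minimal sketch with synthetic data, not the Ariadne's house records:

```python
# Column-centring vs row-centring before PCA on a (time x sensors) matrix.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(1000)
common = 10 * np.sin(2 * np.pi * t / 365)             # shared seasonal cycle
X = common[:, None] + rng.normal(0, 0.5, (1000, 26))  # 26 data-loggers
X[:, 5] += rng.normal(0, 1.5, 1000)                   # one anomalous sensor

def pca_first_component(M):
    M = M - M.mean(axis=0)                  # centre features before SVD
    _, s, vt = np.linalg.svd(M, full_matrices=False)
    return s[0] ** 2 / np.sum(s ** 2), vt[0]

Xrow = X - X.mean(axis=1, keepdims=True)    # row-centred: spatial mean removed
var_col, _ = pca_first_component(X)
var_row, loadings = pca_first_component(Xrow)
print(f"PC1 variance share, column-centred: {var_col:.2f}")   # ~ the climate
print(f"PC1 variance share, row-centred:    {var_row:.2f}")
print(f"largest loading at sensor index:    {np.argmax(np.abs(loadings))}")
```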
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
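A minimal sketch of the global-versus-local distinction: a variance-based first-order sensitivity index estimated from random samples of the whole input space, with a toy response function standing in for an ABM:

```python
# First-order sensitivity index S_i = Var(E[Y|X_i]) / Var(Y), estimated by
# conditional means in equal-probability bins over the full input space.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
X = rng.uniform(0, 1, (n, 3))                 # three model parameters

def model(X):
    # Hypothetical response: strong x0 effect, weak x1, x0*x2 interaction
    return 4 * X[:, 0] + 0.5 * X[:, 1] + 2 * X[:, 0] * X[:, 2]

Y = model(X)

def first_order_index(x, y, bins=50):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for i in range(3):
    print(f"S{i} = {first_order_index(X[:, i], Y):.3f}")
```

A local, one-at-a-time analysis around a single parameter setting would miss the x0*x2 interaction entirely, which is the shortcoming the abstract points out.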
NASA Astrophysics Data System (ADS)
Bean, Glenn E.; Witkin, David B.; McLouth, Tait D.; Zaldivar, Rafael J.
2018-02-01
Research on the selective laser melting (SLM) method of laser powder bed fusion additive manufacturing (AM) has shown that surface and internal quality of AM parts is directly related to machine settings such as laser energy density, scanning strategies, and atmosphere. To optimize laser parameters for improved component quality, the energy density is typically controlled via laser power, scanning rate, and scanning strategy, but can also be controlled by changing the spot size via laser focal plane shift. Present work being conducted by The Aerospace Corporation was initiated after observing inconsistent build quality of parts printed using OEM-installed settings. Initial builds of Inconel 718 witness geometries using OEM laser parameters were evaluated for surface roughness, density, and porosity while varying energy density via laser focus shift. Based on these results, hardware and laser parameter adjustments were conducted in order to improve build quality and consistency. Tensile testing was also conducted to investigate the effect of build plate location and laser settings on SLM 718. This work has provided insight into the limitations of OEM parameters compared with optimized parameters towards the goal of manufacturing aerospace-grade parts, and has led to the development of a methodology for laser parameter tuning that can be applied to other alloy systems. Additionally, evidence was found that for 718, which derives its strength from post-manufacturing heat treatment, there is a possibility that tensile testing may not be perceptive to defects which would reduce component performance. Ongoing research is being conducted towards identifying appropriate testing and analysis methods for screening and quality assurance.
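For reference, the volumetric energy density commonly used to compare SLM parameter sets combines the settings listed above; the values below are illustrative, not the machine settings from this work:

```python
# Volumetric energy density E = P / (v * h * t), combining laser power,
# scan speed, hatch spacing, and layer thickness.
def energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """Return volumetric energy density in J/mm^3."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Example: 285 W, 960 mm/s scan speed, 0.11 mm hatch, 0.04 mm layer
e = energy_density(285.0, 960.0, 0.11, 0.04)
print(f"E = {e:.1f} J/mm^3")
```

Shifting the laser focal plane changes the spot size and hence the effective areal intensity at constant nominal E, which is why focus shift appears above as an independent knob.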
Toward a Micro-Scale Acoustic Direction-Finding Sensor with Integrated Electronic Readout
2013-06-01
[Front-matter residue removed: figure/table list entries "measurements with curve fits", "Failure testing", "Sensor parameters", "Curve fit parameters".] …elastic, the quantity of interest is the elastic stiffness. In a typical nanoindentation test, the loading curve is nonlinear due to combined plastic…
Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies
ERIC Educational Resources Information Center
Smith, Carrie E.; Cribbie, Robert A.
2013-01-01
When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…
Combustion and Flammability Characteristics of Solids at Microgravity in very Small Velocity Flows
NASA Technical Reports Server (NTRS)
Sanchez-Tarifa, C.; Rodriguez, M.
1999-01-01
Fires still remain one of the most important safety risks in manned spacecraft. This problem will become even more important in long-endurance non-orbital flights, in which maintenance will be non-existent or very difficult. The basic process of a fire is the combustion of a solid at microgravity conditions in O2/N2 mixtures. Although a large number of research programs have been carried out on this problem, especially on flame spreading, several aspects of these processes are not yet well understood. For example, the temperature and characteristics of the flames with low emissivity in the visual range that occur under some microgravity conditions are not well known, and there is a lack of knowledge on the influence of key parameters, such as convective flow velocities of the order of magnitude of typical oxygen diffusion velocities. The "Departamento de Motopropulsion y Termofluidodinamica" of the "Universidad Politecnica de Madrid, Escuela Tecnica Superior de Ingenieros Aeronauticos" is conducting a research program on the combustion of solids at reduced gravity conditions within O2/N2 mixtures. The material utilized has been polymethylmethacrylate (PMMA) in the form of rectangular slabs and hollow cylinders. The main parameters of the process have been small convective flow velocities (including the angle of the flow velocity with the direction of the spreading flame) and oxygen concentration. Some results have also been obtained on the influence of material thickness and gas pressure.
NASA Astrophysics Data System (ADS)
Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.
2013-08-01
Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, and a significant time commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research in multimodal NIR imaging. This work addresses these challenges directly by introducing automated Digital Imaging and Communications in Medicine (DICOM) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and by combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and a 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.
An adaptive spatio-temporal Gaussian filter for processing cardiac optical mapping data.
Pollnow, S; Pilia, N; Schwaderlapp, G; Loewe, A; Dössel, O; Lenis, G
2018-06-04
Optical mapping is widely used as a tool to investigate cardiac electrophysiology in ex vivo preparations. Digital filtering of fluorescence-optical data is an important requirement for robust subsequent data analysis and is still a challenge when processing data acquired from thin mammalian myocardium. We therefore propose and investigate the use of an adaptive spatio-temporal Gaussian filter for processing optical mapping signals from these kinds of tissue, which usually have a low signal-to-noise ratio (SNR). We demonstrate how the filtering parameters can be chosen automatically without additional user input. For a systematic comparison of this filter with standard filtering methods from the literature, we generated synthetic signals representing optical recordings from atrial myocardium of a rat heart with varying SNR. Furthermore, all filter methods were applied to experimental data from an ex vivo setup. Our filter outperformed the other methods in local activation time detection at SNRs below 3 dB, which are typical for these signals. At higher SNRs, the proposed filter performed slightly worse than the methods from the literature. In conclusion, the proposed adaptive spatio-temporal Gaussian filter is an appropriate tool for investigating fluorescence-optical data with low SNR. Its spatio-temporal filter parameters are adapted automatically, in contrast to the other investigated filters. Copyright © 2018 Elsevier Ltd. All rights reserved.
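As a rough illustration of the idea (a minimal sketch, not the authors' published adaptation rule: the crude SNR estimate and the widening of the kernel at low SNR are assumptions for demonstration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_snr_db(frames):
    """Crude SNR estimate: signal power from the full stack,
    noise power from frame-to-frame differences."""
    noise = np.diff(frames, axis=-1) / np.sqrt(2.0)
    return 10.0 * np.log10(np.var(frames) / np.var(noise))

def adaptive_st_gaussian(frames, base_sigma_xy=1.0, base_sigma_t=1.0):
    """Smooth a (rows, cols, time) stack with a spatio-temporal Gaussian
    whose widths grow as the estimated SNR falls (illustrative rule)."""
    snr_db = estimate_snr_db(frames)
    scale = max(1.0, 3.0 - snr_db / 3.0)  # kernel widens below ~6 dB
    sigmas = (base_sigma_xy * scale, base_sigma_xy * scale, base_sigma_t * scale)
    return gaussian_filter(frames, sigma=sigmas)

# Example: 32x32 pixels, 500 frames of noisy synthetic data
frames = np.random.randn(32, 32, 500)
filtered = adaptive_st_gaussian(frames)
```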
NASA Astrophysics Data System (ADS)
Cook, Grant O.; Sorensen, Carl D.
2013-12-01
Partial transient liquid-phase (PTLP) bonding is currently an esoteric joining process with limited applications. However, it has clear advantages over typical joining techniques and is the best joining technique for certain applications. Specifically, it can bond hard-to-join materials as well as dissimilar material types, and bonding is performed at comparatively low temperatures. Part of the difficulty in applying PTLP bonding is finding suitable interlayer combinations (ICs). A novel interlayer selection procedure has been developed to facilitate the identification of ICs that will create successful PTLP bonds; it is explained in a companion article. An integral part of the selection procedure is a filtering routine that identifies all possible ICs for a given application. This routine utilizes a set of customizable parameters based on key characteristics of PTLP bonding, including important design considerations such as bonding temperature, target remelting temperature, bond solid type, and interlayer thicknesses. The output from this routine provides a detailed view of each candidate IC along with a broad view of the entire candidate set, greatly facilitating the selection of ideal ICs. This routine provides a new perspective on the PTLP bonding process. In addition, its use, by way of the accompanying selection procedure, will expand PTLP bonding as a viable joining process.
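A minimal sketch of what such a parameter-based filtering routine might look like; the field names, candidate entries, and thresholds are assumptions for illustration, not the published routine:

```python
# Hypothetical candidate interlayer combinations (ICs) with key properties
candidates = [
    {"ic": "Cu/Sn/Cu", "bonding_temp_c": 650, "remelt_temp_c": 900,  "thickness_um": 50},
    {"ic": "Ni/P/Ni",  "bonding_temp_c": 980, "remelt_temp_c": 1100, "thickness_um": 75},
]

def filter_ics(candidates, max_bonding_temp, min_remelt_temp, max_thickness):
    """Keep only ICs that bond below the temperature limit, remelt above
    the target service temperature, and meet the thickness budget."""
    return [c for c in candidates
            if c["bonding_temp_c"] <= max_bonding_temp
            and c["remelt_temp_c"] >= min_remelt_temp
            and c["thickness_um"] <= max_thickness]

print(filter_ics(candidates, max_bonding_temp=700, min_remelt_temp=850, max_thickness=60))
```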
Investigating low flow process controls, through complex modelling, in a UK chalk catchment
NASA Astrophysics Data System (ADS)
Lubega Musuuza, Jude; Wagener, Thorsten; Coxon, Gemma; Freer, Jim; Woods, Ross; Howden, Nicholas
2017-04-01
The typical streamflow response of Chalk catchments is dominated by groundwater contributions due to the high degree of groundwater recharge through preferential flow pathways. The groundwater store attenuates the precipitation signal, causing a delay between the corresponding high and low extremes in precipitation and streamflow; streamflow responses can therefore be quite out of phase with the precipitation input to a Chalk catchment. Approaches to characterising such catchment systems, including modelling, therefore need to reproduce these percolation- and groundwater-dominated flow pathways. The simulation of low-flow conditions for Chalk catchments in numerical models is especially difficult due to the complex interactions between processes that may not be adequately represented or resolved in the models. Periods of low streamflow are particularly important due to competing water uses in the summer, including agriculture and water supply. In this study we apply and evaluate the physically based Penn State Integrated Hydrologic Model (PIHM) on the River Kennet, a sub-catchment of the Thames Basin, to demonstrate how simulations of a Chalk catchment are improved by a physically based system representation. We also use an ensemble of simulations to investigate the sensitivity of various hydrologic signatures (relevant to low flows and droughts) to the different model parameters, thereby inferring the levels of control exerted by the processes those parameters represent.
Data processing, multi-omic pathway mapping, and metabolite activity analysis using XCMS Online
Forsberg, Erica M; Huan, Tao; Rinehart, Duane; Benton, H Paul; Warth, Benedikt; Hilmers, Brian; Siuzdak, Gary
2018-01-01
Systems biology is the study of complex living organisms, and as such, analysis on a systems-wide scale involves the collection of information-dense data sets that are representative of an entire phenotype. To uncover dynamic biological mechanisms, bioinformatics tools have become essential to facilitating data interpretation in large-scale analyses. Global metabolomics is one such method for performing systems biology, as metabolites represent the downstream functional products of ongoing biological processes. We have developed XCMS Online, a platform that enables online metabolomics data processing and interpretation. A systems biology workflow recently implemented within XCMS Online enables rapid metabolic pathway mapping using raw metabolomics data for investigating dysregulated metabolic processes. In addition, this platform supports integration of multi-omic (such as genomic and proteomic) data to garner further systems-wide mechanistic insight. Here, we provide an in-depth procedure showing how to effectively navigate and use the systems biology workflow within XCMS Online without a priori knowledge of the platform, including uploading liquid chromatography (LC)–mass spectrometry (MS) data from metabolite-extracted biological samples, defining the job parameters to identify features, correcting for retention time deviations, conducting statistical analysis of features between sample classes and performing predictive metabolic pathway analysis. Additional multi-omics data can be uploaded and overlaid with previously identified pathways to enhance systems-wide analysis of the observed dysregulations. We also describe unique visualization tools to assist in elucidation of statistically significant dysregulated metabolic pathways. Parameter input takes 5–10 min, depending on user experience; data processing typically takes 1–3 h, and data analysis takes ~30 min. PMID:29494574
Mapping of multiple parameter m-health scenarios to mobile WiMAX QoS variables.
Alinejad, Ali; Philip, N; Istepanian, R S H
2011-01-01
Multiparameter m-health scenarios with bandwidth-demanding requirements will be one of the key applications in future 4G mobile communication systems. These applications will potentially require specific spectrum allocations with higher quality-of-service (QoS) requirements. Furthermore, one of the key 4G technologies targeting m-health will be medical applications based on WiMAX systems. Hence, it is timely to evaluate such multiparametric m-health scenarios over mobile WiMAX networks. In this paper, we address the preliminary performance analysis of a mobile WiMAX network for multiparametric telemedical scenarios. In particular, we map the medical QoS to typical WiMAX QoS parameters to optimise the performance of these parameters in a typical m-health scenario. Preliminary performance analyses of the proposed multiparametric scenarios are evaluated to provide essential information on future medical QoS requirements and constraints in these telemedical network environments.
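To make the mapping concrete, a sketch of the kind of lookup involved; the IEEE 802.16 scheduling service classes (UGS, rtPS, nrtPS, BE) are real, but the stream-to-class assignments below are illustrative assumptions, not the paper's validated mapping:

```python
# Illustrative mapping of m-health data streams to mobile WiMAX
# (IEEE 802.16) scheduling service classes.
MEDICAL_TO_WIMAX_QOS = {
    "ecg_stream":         "UGS",    # constant-rate, delay-sensitive vital signs
    "ultrasound_video":   "rtPS",   # variable-rate real-time video
    "medical_images":     "nrtPS",  # delay-tolerant but throughput-sensitive
    "episodic_telemetry": "BE",     # best effort for non-critical updates
}

def qos_class(stream_name):
    """Look up the WiMAX service class for a medical data stream."""
    return MEDICAL_TO_WIMAX_QOS.get(stream_name, "BE")

print(qos_class("ecg_stream"))
```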
LMI Based Robust Blood Glucose Regulation in Type-1 Diabetes Patient with Daily Multi-meal Ingestion
NASA Astrophysics Data System (ADS)
Mandal, S.; Bhattacharjee, A.; Sutradhar, A.
2014-04-01
This paper illustrates the design of a robust output-feedback H∞ controller for the nonlinear glucose-insulin (GI) process in a type-1 diabetes patient, delivering insulin through an intravenous infusion device. The H∞ design specifications have been realized using the concept of linear matrix inequalities (LMIs), and the LMI approach has been used to quadratically stabilize the GI process via output-feedback H∞ control. The controller has been designed on the basis of a full 19th-order linearized state-space model generated from the modified Sorensen nonlinear model of the GI process. The resulting controller has been tested with the nonlinear patient model (the modified Sorensen model) in the presence of patient parameter variations and other uncertainty conditions. The performance of the controller was assessed in terms of its ability to track the normoglycemic set point of 81 mg/dl under a typical multi-meal disturbance throughout a day, and it yields robust performance and noise rejection.
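As background for the LMI formulation, a minimal sketch of the standard bounded-real lemma underlying H∞ synthesis (textbook form, not the paper's specific 19th-order construction): for a stable system \(\dot{x} = Ax + Bw,\; z = Cx + Dw\), the H∞ norm is below γ if and only if there exists \(P \succ 0\) with

$$\begin{bmatrix} A^{\top}P + PA & PB & C^{\top} \\ B^{\top}P & -\gamma I & D^{\top} \\ C & D & -\gamma I \end{bmatrix} \prec 0 .$$

Output-feedback synthesis turns this condition into LMIs in the controller parameters via standard changes of variables.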
A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birchfield, Adam; Schweitzer, Eran; Athari, Mir
2017-08-19
Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.
Studying action representation in children via motor imagery.
Gabbard, Carl
2009-12-01
The use of motor imagery is a widely used experimental paradigm for the study of cognitive aspects of action planning and control in adults, and there are indications that motor imagery provides a window into the process of action representation. These notions complement internal model theory, which suggests that such representations allow predictions (estimates) about the mapping of the self to parameters of the external world, processes that enable successful planning and execution of action. The ability to mentally represent action is important to the development of motor control. This paper presents a critical review of motor imagery research conducted with children (typically developing and special populations), focusing on its merits and possible shortcomings for studying action representation. Included in the review are age-related findings, the brain structures possibly involved, experimental paradigms, and recommendations for future work. The value of this review lies in the apparent growing interest in using and studying motor imagery to understand the developmental aspects of action processing in children.
Gao, Jianbo; Hu, Jing; Mao, Xiang; Perc, Matjaž
2012-01-01
Culturomics was recently introduced as the application of high-throughput data collection and analysis to the study of human culture. Here, we make use of these data by investigating fluctuations in yearly usage frequencies of specific words that describe social and natural phenomena, as derived from books that were published over the course of the past two centuries. We show that the determination of the Hurst parameter by means of fractal analysis provides fundamental insights into the nature of long-range correlations contained in the culturomic trajectories, and by doing so offers new interpretations as to what might be the main driving forces behind the examined phenomena. Quite remarkably, we find that social and natural phenomena are governed by fundamentally different processes. While natural phenomena have properties that are typical for processes with persistent long-range correlations, social phenomena are better described as non-stationary, on–off intermittent or Lévy walk processes. PMID:22337632
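A minimal sketch of one classical route to the Hurst parameter, rescaled-range (R/S) analysis; the windowing choices are illustrative, not the authors' exact procedure:

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D series by rescaled-range
    (R/S) analysis: fit log(R/S) against log(window size)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = np.unique(np.logspace(np.log10(min_chunk), np.log10(n // 2), 10).astype(int))
    rs_values = []
    for size in sizes:
        rs_per_window = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            dev = np.cumsum(w - w.mean())   # cumulative deviation from window mean
            r = dev.max() - dev.min()       # range of the cumulative deviation
            s = w.std()                     # standard deviation of the window
            if s > 0:
                rs_per_window.append(r / s)
        rs_values.append(np.mean(rs_per_window))
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

# White noise should give H near 0.5; persistent series give H > 0.5
print(hurst_rs(np.random.randn(4096)))
```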
Development of porous ceramsite from construction and demolition waste.
Wang, Chuan; Wu, Jian-Zhi; Zhang, Fu-Shen
2013-01-01
The disposal of construction and demolition (C&D) waste has become a serious problem in China due to the rapid growth of the Chinese construction industry in recent years. In the present study, typical C&D waste was employed for ceramsite fabrication so as to find a new way for its effective recycling. A novel process was developed for manufacturing high-quality porous ceramsite according to the special chemical composition and properties of C&D waste. Most importantly, a unique bloating agent was developed for the porous structure formation, since it was difficult to obtain a suitable porous structure using traditional bloating agents. The effects of processing parameters such as sintering temperature, heating rate and soaking time were investigated, and the bloating mechanism for ceramsite was discussed. The C&D waste ceramsite (CDWC), with high strength, low density and homogeneous mechanical properties, is well suited for application in the construction field. This study provides a practical process for efficient recycling of the rapidly increasing quantities of C&D waste.
NASA Astrophysics Data System (ADS)
Hupp, C. R.; Rinaldi, M.
2010-12-01
Many, if not most, streams have been mildly to severely affected by human disturbance, which complicates efforts to understand riparian ecosystems. Mediterranean regions have a long history of human influences including: dams, stream channelization, mining of sediment, and levee/canal construction. Typically these alterations reduce the ecosystem services that functioning floodplains provide and may negatively impact the natural ecology of floodplains through reductions in suitable habitats, biodiversity, and nutrient cycling. Additionally, human alterations typically shift affected streams away from a state of natural dynamic equilibrium, where net sediment deposition is approximately in balance with net erosion. Lack of equilibrium typically affects the degree to which floodplain ecosystems are connected to streamflow regime. Low connectivity, usually from human- or climate-induced incision, may result in reduced flow on floodplains and lowered water tables. High connectivity may result in severe sediment deposition. Connectivity has a direct impact on vegetation communities. Riparian vegetation distribution patterns and diversity relative to various fluvial geomorphic channel patterns, landforms, and processes are described and interpreted for selected rivers of Tuscany, Central Italy; with emphasis on channel evolution following human impacts. Multivariate analysis reveals distinct quantitative vegetation patterns related to six fluvial geomorphic surfaces. Analysis of vegetation data also shows distinct associations of plants with adjustment processes related to the stage of channel evolution. Plant distribution patterns coincide with disturbance/landform/soil moisture gradients. Species richness increases from channel bed to terrace and on heterogeneous riparian areas, while species richness decreases from moderate to intense incision and from low to intense narrowing. As a feedback mechanism, woody vegetation in particular may facilitate geomorphic recovery of floodplains by affecting sedimentation dynamics. Identification and understanding of critical fluvial parameters related to floodplain connectivity (e.g. stream gradient, grain-size, and hydrography) and spatial and temporal sediment deposition/erosion process trajectories should facilitate management efforts to retain and/or regain important ecosystem services.
NASA Astrophysics Data System (ADS)
Goetz-Weiss, L. R.; Herzfeld, U. C.; Trantow, T.; Hunke, E. C.; Maslanik, J. A.; Crocker, R. I.
2016-12-01
An important problem in model-data comparison is the identification of parameters that can be extracted from observational data as well as used in numerical models, which are typically based on idealized physical processes. Here, we present a suite of approaches to the characterization and classification of sea-ice and land-ice types, properties, and provinces based on several types of remote-sensing data. Applications are given not only to illustrate the approach but to employ it in model evaluation and in understanding physical processes. (1) In a geostatistical characterization, spatial sea-ice properties in the Chukchi and Beaufort Seas and in Elson Lagoon are derived from analysis of RADARSAT and ERS-2 SAR data. (2) The analysis is taken further by utilizing multi-parameter feature vectors as inputs for unsupervised and supervised statistical classification, which facilitates classification of different sea-ice types. (3) Characteristic sea-ice parameters resulting from the classification can then be applied in model evaluation, as demonstrated for the ridging scheme of the Los Alamos sea ice model, CICE, using high-resolution altimeter and image data collected from unmanned aircraft over Fram Strait during the Characterization of Arctic Sea Ice Experiment (CASIE). The characteristic parameters chosen in this application are directly related to deformation processes, which also underlie the ridging scheme. (4) The method capable of the most complex classification tasks is the connectionist-geostatistical classification method. This approach has been developed to identify currently up to 18 different crevasse types in order to map the progression of the surge through the complex Bering-Bagley Glacier System, Alaska, in 2011-2014. The analysis utilizes airborne altimeter data, video imagery, and satellite imagery. Results of the crevasse classification are compared to fracture modeling and found to match.
Dill, Vanderson; Klein, Pedro Costa; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia
2018-04-01
Current state-of-the-art methods for whole and subfield hippocampus segmentation use pre-segmented templates, also known as atlases, in the pre-processing stages. Typically, the input image is registered to the template, which provides prior information for the segmentation process. Using a single standard atlas increases the difficulty of dealing with individuals whose brain anatomy is morphologically different from the atlas, especially in older brains. To increase the segmentation precision in these cases, without any manual intervention, multiple atlases can be used. However, registration to many templates leads to a high computational cost. Researchers have proposed an atlas pre-selection technique based on meta-information followed by selection of an atlas based on image similarity. Unfortunately, this method also presents a high computational cost due to the image-similarity step. Thus, it is desirable to pre-select a smaller number of atlases as long as this does not impact the segmentation quality. To pick out an atlas that provides the best registration, we evaluate the use of three meta-information parameters (medical condition, age range, and gender) to choose the atlas. In this work, 24 atlases were defined, each based on a combination of the three meta-information parameters. These atlases were used to segment 352 volumes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Hippocampus segmentation with each of these atlases was evaluated and compared to reference segmentations of the hippocampus, which are available from ADNI. The use of atlas selection by meta-information led to a significant gain in the Dice similarity coefficient, which reached 0.68 ± 0.11, compared to 0.62 ± 0.12 when using only the standard MNI152 atlas. Statistical analysis showed that the three meta-information parameters provided a significant improvement in segmentation accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
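The Dice similarity coefficient used as the evaluation metric above is straightforward to compute for binary masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Two toy 3-D masks; identical masks give DSC = 1.0
mask1 = np.zeros((10, 10, 10), dtype=bool); mask1[2:6, 2:6, 2:6] = True
mask2 = np.zeros((10, 10, 10), dtype=bool); mask2[3:7, 3:7, 3:7] = True
print(dice_coefficient(mask1, mask2))
```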
Inverse methods for 3D quantitative optical coherence elasticity imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Dong, Li; Wijesinghe, Philip; Hugenberg, Nicholas; Sampson, David D.; Munro, Peter R. T.; Kennedy, Brendan F.; Oberai, Assad A.
2017-02-01
In elastography, quantitative elastograms are desirable as they are system and operator independent. Such quantification also facilitates more accurate diagnosis, longitudinal studies and studies performed across multiple sites. In optical elastography (compression, surface-wave or shear-wave), quantitative elastograms are typically obtained by assuming some form of homogeneity. This simplifies data processing at the expense of smearing sharp transitions in elastic properties, and/or introducing artifacts in these regions. Recently, we proposed an inverse problem-based approach to compression OCE that does not assume homogeneity, and overcomes the drawbacks described above. In this approach, the difference between the measured and predicted displacement field is minimized by seeking the optimal distribution of elastic parameters. The predicted displacements and recovered elastic parameters together satisfy the constraint of the equations of equilibrium. This approach, which has been applied in two spatial dimensions assuming plane strain, has yielded accurate material property distributions. Here, we describe the extension of the inverse problem approach to three dimensions. In addition to the advantage of visualizing elastic properties in three dimensions, this extension eliminates the plane strain assumption and is therefore closer to the true physical state. It does, however, incur greater computational costs. We address this challenge through a modified adjoint problem, spatially adaptive grid resolution, and three-dimensional decomposition techniques. Through these techniques the inverse problem is solved on a typical desktop machine within a wall clock time of 20 hours. We present the details of the method and quantitative elasticity images of phantoms and tissue samples.
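In generic form, such inverse problems seek the modulus distribution μ minimizing the displacement mismatch subject to equilibrium (notation illustrative; \(\mathcal{R}\) is a regularization term):

$$\min_{\mu}\; \frac{1}{2}\int_{\Omega} \left| u_{\mathrm{meas}} - u(\mu) \right|^{2} d\Omega \;+\; \alpha\, \mathcal{R}(\mu) \quad \text{subject to} \quad \nabla \cdot \sigma\big(u(\mu), \mu\big) = 0 \ \text{in } \Omega.$$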
Wave data processing toolbox manual
Sullivan, Charlene M.; Warner, John C.; Martini, Marinna A.; Lightsom, Frances S.; Voulgaris, George; Work, Paul
2006-01-01
Researchers routinely deploy oceanographic equipment in estuaries, coastal nearshore environments, and shelf settings. These deployments usually include tripod-mounted instruments to measure a suite of physical parameters such as currents, waves, and pressure. Instruments such as the RD Instruments Acoustic Doppler Current Profiler (ADCP™), the Sontek Argonaut, and the Nortek Aquadopp™ Profiler (AP) can measure these parameters. The data from these instruments must be processed using proprietary software unique to each instrument to convert measurements to real physical values. These processed files are then available for dissemination and scientific evaluation. For example, the proprietary processing program used to process data from the RD Instruments ADCP for wave information is called WavesMon. Depending on the length of the deployment, WavesMon will typically produce thousands of processed data files. These files are difficult to archive and further analysis of the data becomes cumbersome. More imperative is that these files alone do not include sufficient information pertinent to that deployment (metadata), which could hinder future scientific interpretation. This open-file report describes a toolbox developed to compile, archive, and disseminate the processed wave measurement data from an RD Instruments ADCP, a Sontek Argonaut, or a Nortek AP. This toolbox will be referred to as the Wave Data Processing Toolbox. The Wave Data Processing Toolbox congregates the processed files output from the proprietary software into two NetCDF files: one file contains the statistics of the burst data and the other file contains the raw burst data (additional details described below). One important advantage of this toolbox is that it converts the data into NetCDF format. Data in NetCDF format is easy to disseminate, is portable to any computer platform, and is viewable with public-domain, freely available software. Another important advantage is that a metadata structure is embedded with the data to document pertinent information regarding the deployment and the parameters used to process the data. Using this format ensures that the relevant information about how the data was collected and converted to physical units is maintained with the actual data. EPIC-standard variable names have been utilized where appropriate. These standards, developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) (http://www.pmel.noaa.gov/epic/), provide a universal vernacular allowing researchers to share data without translation.
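A minimal sketch of producing NetCDF output with embedded deployment metadata of the kind described; the variable and attribute names here are hypothetical, not the toolbox's actual schema (the EPIC standards define the real variable names):

```python
import numpy as np
from netCDF4 import Dataset

# Hypothetical processed wave statistics for one deployment
times = np.arange(0, 48, 1.0)            # hours since deployment start
hsig = 0.5 + 0.1 * np.random.rand(48)    # significant wave height (m)

nc = Dataset("deployment_wave_stats.nc", "w", format="NETCDF4")
nc.setncattr("institution", "example institution")    # deployment metadata
nc.setncattr("instrument_type", "ADCP")
nc.setncattr("processing_software", "example, v0.0")

nc.createDimension("time", len(times))
t_var = nc.createVariable("time", "f8", ("time",))
t_var.units = "hours since deployment start"
h_var = nc.createVariable("wh_4061", "f4", ("time",))  # EPIC-style code (illustrative)
h_var.units = "m"
h_var.long_name = "significant wave height"

t_var[:] = times
h_var[:] = hsig
nc.close()
```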
The impact of non-Gaussianity upon cosmological forecasts
NASA Astrophysics Data System (ADS)
Repp, A.; Szapudi, I.; Carron, J.; Wolk, M.
2015-12-01
The primary science driver for 3D galaxy surveys is their potential to constrain cosmological parameters. Forecasts of these surveys' effectiveness typically assume Gaussian statistics for the underlying matter density, despite the fact that the actual distribution is decidedly non-Gaussian. To quantify the effect of this assumption, we employ an analytic expression for the power spectrum covariance matrix to calculate the Fisher information for Baryon Acoustic Oscillation (BAO)-type model surveys. We find that for typical number densities, at k_max = 0.5 h Mpc⁻¹, Gaussian assumptions significantly overestimate the information on all parameters considered, in some cases by up to an order of magnitude. However, after marginalizing over a six-parameter set, the form of the covariance matrix (dictated by N-body simulations) causes the majority of the effect to shift to the 'amplitude-like' parameters, leaving the others virtually unaffected. We find that Gaussian assumptions at such wavenumbers can underestimate the dark energy parameter errors by well over 50 per cent, producing dark energy figures of merit almost three times too large. Thus, for 3D galaxy surveys probing the non-linear regime, proper consideration of non-Gaussian effects is essential.
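For reference, the Fisher information matrix on which such forecasts rest takes the standard Gaussian-likelihood form, for data with mean μ(θ) and covariance C(θ):

$$F_{ij} = \frac{\partial \boldsymbol{\mu}^{\top}}{\partial \theta_i}\, C^{-1} \frac{\partial \boldsymbol{\mu}}{\partial \theta_j} + \frac{1}{2}\,\mathrm{Tr}\!\left[ C^{-1} \frac{\partial C}{\partial \theta_i}\, C^{-1} \frac{\partial C}{\partial \theta_j} \right].$$

The non-Gaussian effects discussed above enter through the power spectrum covariance matrix C.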
Physical characteristics and resistance parameters of typical urban cyclists.
Tengattini, Simone; Bigazzi, Alexander York
2018-03-30
This study investigates the rolling and drag resistance parameters and bicycle and cargo masses of typical urban cyclists. These factors are important for modelling of cyclist speed, power and energy expenditure, with applications including exercise performance, health and safety assessments and transportation network analysis. However, representative values for diverse urban travellers have not been established. Resistance parameters were measured utilizing a field coast-down test for 557 intercepted cyclists in Vancouver, Canada. Masses were also measured, along with other bicycle attributes such as tire pressure and size. The average (standard deviation) of the coefficient of rolling resistance, effective frontal area, bicycle plus cargo mass, and bicycle-only mass were 0.0077 (0.0036), 0.559 (0.170) m², 18.3 (4.1) kg, and 13.7 (3.3) kg, respectively. The range of measured values is wider and higher than suggested in the existing literature, which focusses on sport cyclists. Significant correlations are identified between resistance parameters and rider and bicycle attributes, indicating higher resistance parameters for less sport-oriented cyclists. The findings of this study are important for appropriately characterising the full range of urban cyclists, including commuters and casual riders.
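The coast-down method rests on a standard point-mass resistance model; with total mass m, gravitational acceleration g, air density ρ, coefficient of rolling resistance C_rr, and effective frontal area C_dA, the deceleration during coasting follows

$$m\,\frac{dv}{dt} = -\,C_{rr}\, m g \;-\; \tfrac{1}{2}\,\rho\,(C_d A)\, v^{2},$$

so fitting the recorded speed decay v(t) yields the two resistance parameters reported above.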
Geometric Characterization of Multi-Axis Multi-Pinhole SPECT
DiFilippo, Frank P.
2008-01-01
A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes and which uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources and without the need for a precision point source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574
Part-to-itself model inversion in process compensated resonance testing
NASA Astrophysics Data System (ADS)
Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Aldrin, John C.; Goodlet, Brent; Mazdiyasni, Siamack
2018-04-01
Process Compensated Resonance Testing (PCRT) is a non-destructive evaluation (NDE) method involving the collection and analysis of a part's resonance spectrum to characterize its material or damage state. Prior work used the finite element method (FEM) to develop forward modeling and model inversion techniques. In many cases, the inversion problem can become confounded by multiple parameters having similar effects on a part's resonance frequencies. To reduce the influence of confounding parameters and isolate the change in a part (e.g., creep), a part-to-itself (PTI) approach can be taken. A PTI approach involves inverting only the change in resonance frequencies from the before and after states of a part. This approach reduces the possible inversion parameters to only those that change in response to in-service loads and damage mechanisms. To evaluate the effectiveness of using a PTI inversion approach, creep strain and material properties were estimated in virtual and real samples using FEM inversion. Virtual and real dog bone samples composed of nickel-based superalloy Mar-M-247 were examined. Virtual samples were modeled with typically observed variations in material properties and dimensions. Creep modeling was verified with the collected resonance spectra from an incrementally crept physical sample. All samples were inverted against a model space that allowed for change in the creep damage state and the material properties but was blind to initial part dimensions. Results quantified the capabilities of PTI inversion in evaluating creep strain and material properties, as well as its sensitivity to confounding initial dimensions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chefetz, B.; Yona Chen; Hadar, Y.
Composting of municipal solid waste (MSW) was studied in an attempt to elaborate the transformations of organic matter (OM) during the process and to define parameters for the degree of maturity of the product. Composting was performed in 1-m³ plastic boxes, and the following parameters were measured in 13 samples during 132 d of composting: temperature, C/N ratio, ash content, and humic substance contents and fractions (humic acid, fulvic acid, and non-humic fraction: HA, FA, and NHF, respectively). Spectroscopic methods (CPMAS ¹³C-NMR, DRIFT) were used to study the chemical composition of the OM. A bioassay based on the growth of cucumber (Cucumis sativus L. cv. Dlila) plants was correlated to the other parameters. The C/N ratio and ash content showed a typical high rate of change during the first 60 d and reached a plateau thereafter. The HA content increased to a maximum at 112 d, corresponding to the highest plant dry weight and the highest 1650/1560 cm⁻¹ peak ratios calculated from DRIFT spectra. ¹³C-NMR and DRIFT spectra of samples taken from the composting MSW during the process showed that the residual OM contained an increasing level of aromatic structures. The plant-growth bioassay, HA content, and DRIFT spectra indicated that the MSW compost described in this study stabilized and achieved maturity after about 110 d. 31 refs., 8 figs., 2 tabs.
Thermal conductivity of electrospun polyethylene nanofibers.
Ma, Jian; Zhang, Qian; Mayo, Anthony; Ni, Zhonghua; Yi, Hong; Chen, Yunfei; Mu, Richard; Bellan, Leon M; Li, Deyu
2015-10-28
We report on the structure-thermal transport property relation of individual polyethylene nanofibers fabricated by electrospinning with different deposition parameters. Measurement results show that the nanofiber thermal conductivity depends on the electric field used in the electrospinning process, with a general trend of higher thermal conductivity for fibers prepared with stronger electric field. Nanofibers produced at a 45 kV electrospinning voltage and a 150 mm needle-collector distance could have a thermal conductivity of up to 9.3 W m⁻¹ K⁻¹, over 20 times higher than the typical bulk value. Micro-Raman characterization suggests that the enhanced thermal conductivity is due to the highly oriented polymer chains and enhanced crystallinity in the electrospun nanofibers.
NASA Astrophysics Data System (ADS)
Baker, G. N.
This paper examines the constraints upon a typical manufacturer of gyros and strapdown systems. It describes why, while remaining responsive to change and keeping abreast of state-of-the-art technology, the manufacturer must often satisfy the market using existing technology and production equipment. The single-degree-of-freedom rate-integrating gyro is a well-established product, yet it is capable of achieving far higher performance than originally envisaged thanks to modelling and characterization within digital strapdown systems. The parameters involved are discussed, and a description is given of the calibration process undertaken on a strapdown system manufactured in a production environment in batch quantities.
Strain-induced chiral magnetic effect in Weyl semimetals
Cortijo, Alberto; Kharzeev, Dmitri; Landsteiner, Karl; ...
2016-12-19
Here, we argue that strain applied to a time-reversal- and inversion-breaking Weyl semimetal in a magnetic field can induce an electric current via the chiral magnetic effect. A tight-binding model is used to show that strain generically changes not only the locations of the band touching points (tips of the Weyl cones) in the Brillouin zone but also their energies. Since axial charge in a Weyl semimetal can relax via intervalley scattering processes, the induced current will decay with a time scale given by the lifetime of a chiral quasiparticle. Lastly, we estimate the strength and lifetime of the current for typical material parameters and find that it should be experimentally observable.
Reliable inference of light curve parameters in the presence of systematics
NASA Astrophysics Data System (ADS)
Gibson, Neale P.
2016-10-01
Time-series photometry and spectroscopy of transiting exoplanets allow us to study their atmospheres. Unfortunately, the required precision to extract atmospheric information surpasses the design specifications of most general purpose instrumentation. This results in instrumental systematics in the light curves that are typically larger than the target precision. Systematics must therefore be modelled, leaving the inference of light-curve parameters conditioned on the subjective choice of systematics models and model-selection criteria. Here, I briefly review the use of systematics models commonly used for transmission and emission spectroscopy, including model selection, marginalisation over models, and stochastic processes. These form a hierarchy of models with increasing degree of objectivity. I argue that marginalisation over many systematics models is a minimal requirement for robust inference. Stochastic models provide even more flexibility and objectivity, and therefore produce the most reliable results. However, no systematics models are perfect, and the best strategy is to compare multiple methods and repeat observations where possible.
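The marginalisation advocated here has a compact Bayesian statement: with candidate systematics models M_q, the posterior for the light-curve parameters θ given data D is the evidence-weighted average

$$P(\theta \mid D) = \sum_{q} P(\theta \mid D, M_q)\, P(M_q \mid D),$$

so no single subjective model choice conditions the inference.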
Modeling the Atmospheric Phase Effects of a Digital Antenna Array Communications System
NASA Technical Reports Server (NTRS)
Tkacenko, A.
2006-01-01
In an antenna array system such as that used in the Deep Space Network (DSN) for satellite communication, it is often necessary to account for the effects due to the atmosphere. Typically, the atmosphere induces amplitude and phase fluctuations on the transmitted downlink signal that invalidate the assumed stationarity of the signal model. The degree to which these perturbations affect the stationarity of the model depends both on parameters of the atmosphere, including wind speed and turbulence strength, and on parameters of the communication system, such as the sampling rate used. In this article, we focus on modeling the atmospheric phase fluctuations in a digital antenna array communications system. Based on a continuous-time statistical model for the atmospheric phase effects, we show how to obtain a related discrete-time model based on sampling the continuous-time process. The effects of the nonstationarity of the resulting signal model are investigated using the sample matrix inversion (SMI) algorithm for minimum mean-squared error (MMSE) equalization of the received signal.
Estimating age at a specified length from the von Bertalanffy growth function
Ogle, Derek H.; Isermann, Daniel A.
2017-01-01
Estimating the time required (i.e., age) for fish in a population to reach a specific length (e.g., legal harvest length) is useful for understanding population dynamics and simulating the potential effects of length-based harvest regulations. The age at which a population reaches a specific mean length is typically estimated by fitting a von Bertalanffy growth function to length-at-age data and then rearranging the best-fit equation to solve for age at the specified length. This process precludes the use of standard frequentist methods to compute confidence intervals and compare estimates of age at the specified length among populations. We provide a parameterization of the von Bertalanffy growth function that has age at a specified length as a parameter. With this parameterization, age at a specified length is directly estimated, and standard methods can be used to construct confidence intervals and make among-group comparisons for this parameter. We demonstrate use of the new parameterization with two data sets.
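The algebra behind this approach: the standard von Bertalanffy growth function and its rearrangement for the age t_r at a specified length L_r, which the new parameterization estimates directly as a model parameter:

$$L(t) = L_{\infty}\left(1 - e^{-K\,(t - t_0)}\right), \qquad t_r = t_0 - \frac{1}{K}\,\ln\!\left(1 - \frac{L_r}{L_{\infty}}\right).$$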
DABAM: an open-source database of X-ray mirrors metrology
Sanchez del Rio, Manuel; Bianchi, Davide; Cocco, Daniele; Glass, Mark; Idir, Mourad; Metz, Jim; Raimondi, Lorenzo; Rebuffi, Luca; Reininger, Ruben; Shi, Xianbo; Siewert, Frank; Spielmann-Jaeggi, Sibylle; Takacs, Peter; Tomasset, Muriel; Tonnessen, Tom; Vivo, Amparo; Yashchuk, Valeriy
2016-01-01
An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror heights and slopes profiles) that can be used with simulation tools for calculating the effects of optical surface errors in the performances of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position in a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide to the users of simulation tools the data of real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. An accompanying software is provided to allow simple access and processing of these data, calculate the most usual statistical parameters, and also include the option of creating input files for most used simulation codes. Some optics simulations are presented and discussed to illustrate the real use of the profiles from the database. PMID:27140145
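A minimal sketch of the sort of processing the accompanying software performs, under the assumption of a plain two-column (abscissa, height) profile file; the file name is hypothetical, and DABAM's own tools provide these statistics directly:

```python
import numpy as np

# Assumed two-column ASCII profile: abscissa (m), surface height (m)
x, height = np.loadtxt("mirror_profile.dat", unpack=True)

slope = np.gradient(height, x)   # slope error profile (rad)
height_rms = np.sqrt(np.mean((height - height.mean())**2))
slope_rms = np.sqrt(np.mean((slope - slope.mean())**2))

print(f"height error RMS: {height_rms * 1e9:.2f} nm")
print(f"slope error RMS:  {slope_rms * 1e6:.3f} urad")
```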
Clayden, Jonathan D; Jentschke, Sebastian; Muñoz, Mónica; Cooper, Janine M; Chadwick, Martin J; Banks, Tina; Clark, Chris A; Vargha-Khadem, Faraneh
2012-08-01
The white matter of the brain undergoes a range of structural changes throughout development; from conception to birth, in infancy, and onwards through childhood and adolescence. Several studies have used diffusion magnetic resonance imaging (dMRI) to investigate these changes, but a consensus has not yet emerged on which white matter tracts undergo changes in the later stages of development or what the most important driving factors are behind these changes. In this study of typically developing 8- to 16-year-old children, we use a comprehensive data-driven approach based on principal components analysis to identify effects of age, gender, and brain volume on dMRI parameters, as well as their relative importance. We also show that secondary components of these parameters predict full-scale IQ, independently of the age- and gender-related effects. This overarching assessment of the common factors and gender differences in normal white matter tract development will help to advance understanding of this process in late childhood and adolescence.
Feature selection with harmony search.
Diao, Ren; Shen, Qiang
2012-12-01
Many search strategies have been exploited for the task of feature selection (FS), in an effort to identify more compact and better quality subsets. Such work typically involves the use of greedy hill climbing (HC), or nature-inspired heuristics, in order to discover the optimal solution without going through exhaustive search. In this paper, a novel FS approach based on harmony search (HS) is presented. It is a general approach that can be used in conjunction with many subset evaluation techniques. The simplicity of HS is exploited to reduce the overall complexity of the search process. The proposed approach is able to escape from local solutions and identify multiple solutions owing to the stochastic nature of HS. Additional parameter control schemes are introduced to reduce the effort and impact of parameter configuration. These can be further combined with the iterative refinement strategy, tailored to enforce the discovery of quality subsets. The resulting approach is compared with those that rely on HC, genetic algorithms, and particle swarm optimization, accompanied by in-depth studies of the suggested improvements.
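A minimal sketch of the basic HS improvisation loop applied to binary feature masks (parameter meanings as commonly defined; the paper's parameter-control schemes and iterative refinement are not shown):

```python
import random

def harmony_search_fs(n_features, evaluate, hms=10, hmcr=0.9, par=0.3, iters=200):
    """Basic harmony search over binary feature masks (illustrative).
    evaluate(mask) -> score to maximize; hmcr = harmony memory considering
    rate, par = pitch adjusting rate (here: a bit flip)."""
    memory = [[random.random() < 0.5 for _ in range(n_features)] for _ in range(hms)]
    scores = [evaluate(m) for m in memory]
    for _ in range(iters):
        new = []
        for j in range(n_features):
            if random.random() < hmcr:              # draw bit from memory
                bit = random.choice(memory)[j]
                if random.random() < par:           # pitch adjustment: flip
                    bit = not bit
            else:                                   # random consideration
                bit = random.random() < 0.5
            new.append(bit)
        s = evaluate(new)
        worst = min(range(hms), key=scores.__getitem__)
        if s > scores[worst]:                       # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = max(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy objective: reward features 0 and 3, penalize subset size
target = {0, 3}
evaluate = lambda m: sum(1 for i, b in enumerate(m) if b and i in target) - 0.1 * sum(m)
print(harmony_search_fs(8, evaluate))
```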
Lessons learned in deploying software estimation technology and tools
NASA Technical Reports Server (NTRS)
Panlilio-Yap, Nikki; Ho, Danny
1994-01-01
Developing a software product involves estimating various project parameters. This is typically done in the planning stages of the project when there is much uncertainty and very little information. Coming up with accurate estimates of effort, cost, schedule, and reliability is a critical problem faced by all software project managers. The use of estimation models and commercially available tools in conjunction with the best bottom-up estimates of software-development experts enhances the ability of a product development group to derive reasonable estimates of important project parameters. This paper describes the experience of the IBM Software Solutions (SWS) Toronto Laboratory in selecting software estimation models and tools and deploying their use to the laboratory's product development groups. It introduces the SLIM and COSTAR products, the software estimation tools selected for deployment to the product areas, and discusses the rationale for their selection. The paper also describes the mechanisms used for technology injection and tool deployment, and concludes with a discussion of important lessons learned in the technology and tool insertion process.
Timescales and bottlenecks in miRNA-dependent gene regulation.
Hausser, Jean; Syed, Afzal Pasha; Selevsek, Nathalie; van Nimwegen, Erik; Jaskiewicz, Lukasz; Aebersold, Ruedi; Zavolan, Mihaela
2013-12-03
MiRNAs are post-transcriptional regulators that contribute to the establishment and maintenance of gene expression patterns. Although their biogenesis and decay appear to be under complex control, the implications of miRNA expression dynamics for the processes that they regulate are not well understood. We derived a mathematical model of miRNA-mediated gene regulation, inferred its parameters from experimental data sets, and found that the model describes well time-dependent changes in mRNA, protein and ribosome density levels measured upon miRNA transfection and induction. The inferred parameters indicate that the timescale of miRNA-dependent regulation is slower than initially thought. Delays in miRNA loading into Argonaute proteins and the slow decay of proteins relative to mRNAs can explain the typically small changes in protein levels observed upon miRNA transfection. For miRNAs to regulate protein expression on the timescale of a day, as miRNAs involved in cell-cycle regulation do, accelerated miRNA turnover is necessary.
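A generic form such a model can take (notation illustrative, not the authors' exact system): target mRNA m is transcribed at rate s_m, decays at rate d_m, and is additionally destabilized at rate k by Argonaute-loaded miRNA R; protein p is translated from m and decays slowly at rate d_p:

$$\frac{dm}{dt} = s_m - (d_m + k R)\,m, \qquad \frac{dp}{dt} = l\,m - d_p\, p.$$

Because d_p is typically much smaller than d_m, protein levels respond on a slower timescale, consistent with the small short-term protein changes noted above.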
Gain determination of optical active doped planar waveguides
NASA Astrophysics Data System (ADS)
Šmejcký, J.; Jeřábek, V.; Nekvindová, P.
2017-12-01
This paper summarizes the results of gain transmission characteristic measurements carried out on new Ag⁺–Na⁺ ion-exchange optical active planar waveguides doped with Er³⁺ and Yb³⁺, realized on silica-based glass substrates. The results were used to optimize the precursor concentration in the glass substrates. The gain measurements were performed by a time-domain method using a pulse generator, as well as by a broadband measurement method using a supercontinuum optical source in the wavelength domain. Both methods were compared and the results were graphically processed. It has been confirmed that the pulse method is useful, as it provides a very accurate measurement of the gain versus pumping power characteristic at one wavelength. For the radiation spectral characteristics, our measurement exactly determined the wavelength bandwidth of maximum gain of the active waveguide. The spectral characteristics of the pumped and unpumped waveguides were compared. The gain parameters of the reported silica-based glasses can be compared with those of phosphate-based glasses, typically used for active optical devices.
NASA Astrophysics Data System (ADS)
Križan, Peter; Matúš, Miloš; Beniak, Juraj; Šooš, Ľubomír
2018-01-01
During biomass densification, various technological variables and material parameters can be identified that significantly influence the final quality of solid biofuels (pellets). In this paper, we present research findings concerning the relationships between technological and material variables during the densification of sunflower hulls. Sunflower hulls, a typical but largely unused product of the agricultural industry in Slovakia, belong to the group of herbaceous biomass. The main goal of the presented experimental research is to determine the impact of compression pressure, compression temperature, and material particle size distribution on final biofuel quality. The experimental research described in this paper was realized by single-axis densification on an experimental pressing stand. The impact of the investigated variables on the final briquette density and briquette dilatation was determined. The mutual interactions of these variables on final briquette quality demonstrate their importance in the densification process. The impact of raw-material particle size distribution on final biofuel quality was also proven by experimental research on a semi-production pelleting plant.
An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.
Chen, Lei; Wei, Guoyuan; Shen, Zhenyao
2015-10-21
To address the computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without complex calibration of the optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved by the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed existing state-of-the-art algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of best management practices (BMPs).
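The abstract does not spell out the algorithm's details; as a rough illustration of the auto-adaptive idea, the following Python sketch runs a toy genetic algorithm whose mutation rate and step size adapt to population diversity instead of being calibrated by hand. The objective function and all settings are hypothetical stand-ins, and this toy is single-objective, whereas the paper's framework is multi-objective with watershed and economic modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    # Hypothetical stand-in for the watershed + economic objective.
    return np.sum((x - 0.3) ** 2, axis=1)

def adaptive_ga(pop_size=40, n_genes=5, n_gen=100):
    pop = rng.random((pop_size, n_genes))
    for _ in range(n_gen):
        f = cost(pop)
        # Auto-adaptive element: the mutation rate and step size shrink as
        # the population converges, so neither needs manual calibration.
        diversity = pop.std(axis=0).mean()
        p_mut = 0.05 + 0.45 * diversity
        # Binary tournament selection.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Uniform crossover with the neighbouring parent.
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Adaptive Gaussian mutation, clipped to the decision space [0, 1].
        mutate = rng.random(children.shape) < p_mut
        steps = rng.normal(0.0, diversity + 1e-3, children.shape)
        pop = np.clip(children + mutate * steps, 0.0, 1.0)
    return pop[np.argmin(cost(pop))]

print("best decision vector:", np.round(adaptive_ga(), 3))
```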
NASA Astrophysics Data System (ADS)
Leemann, S. C.; Wurtz, W. A.
2018-03-01
The MAX IV 3 GeV storage ring is presently being commissioned, and crucial parameters such as machine functions, emittance, and stored current have either already been reached or are approaching their design specifications. Once the baseline performance has been achieved, a campaign will be launched to further improve the brightness and coherence of this storage ring for typical X-ray users. During recent years, several such improvements have been designed. Common to these approaches is that they attempt to improve the storage ring performance using existing hardware provided for the baseline design; such improvements therefore represent short-term upgrades. In this paper, however, we investigate medium-term improvements, assuming power supplies can be exchanged, in an attempt to push the brightness and coherence of the storage ring to the limit of what can be achieved without exchanging the magnetic lattice itself. We outline optics requirements and the optics optimization process, and summarize achievable parameters and expected performance.
NASA Astrophysics Data System (ADS)
Zhang, Mingkai; Liu, Yanchen; Cheng, Xun; Zhu, David Z.; Shi, Hanchang; Yuan, Zhiguo
2018-03-01
Quantifying rainfall-derived inflow and infiltration (RDII) in a sanitary sewer is difficult when RDII and overflow occur simultaneously. This study proposes a novel conductivity-based method for estimating RDII. The method separately decomposes rainfall-derived inflow (RDI) and rainfall-induced infiltration (RII) on the basis of conductivity data. Fast Fourier transform was adopted to analyze variations in the flow and water quality during dry weather. Nonlinear curve fitting based on the least squares algorithm was used to optimize parameters in the proposed RDII model. The method was successfully applied to real-life case studies, in which inflow and infiltration were successfully estimated for three typical rainfall events with total rainfall volumes of 6.25 mm (light), 28.15 mm (medium), and 178 mm (heavy). Uncertainties of model parameters were estimated using the generalized likelihood uncertainty estimation (GLUE) method and were found to be acceptable. Compared with traditional flow-based methods, the proposed approach exhibits distinct advantages in estimating RDII and overflow, particularly when the two processes happen simultaneously.
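As an illustration of the nonlinear least-squares step mentioned above, the sketch below fits a simple two-component RDII model (a fast RDI pulse plus a slow RII pulse, each an exponential recession) to synthetic excess-flow data. The functional form, parameter values, and data are hypothetical; the paper's conductivity-based decomposition is more elaborate.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical two-component RDII model: fast rainfall-derived inflow (RDI)
# plus slow rainfall-induced infiltration (RII), each an exponential pulse.
def rdii(t, a_rdi, k_rdi, a_rii, k_rii):
    return a_rdi * np.exp(-k_rdi * t) + a_rii * np.exp(-k_rii * t)

t = np.linspace(0, 48, 200)                 # hours after the storm
true = rdii(t, 120.0, 0.40, 35.0, 0.05)     # synthetic "observed" excess flow
obs = true + np.random.default_rng(1).normal(0, 2.0, t.size)

# Least-squares fit of the four model parameters; bounds keep rates positive.
p0 = [100, 0.3, 30, 0.03]
popt, _ = curve_fit(rdii, t, obs, p0=p0, bounds=(0, np.inf))
print("fitted parameters:", np.round(popt, 3))
```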
NASA Technical Reports Server (NTRS)
Collis, R. T. H.
1969-01-01
Lidar is an optical radar technique employing laser energy. Variations in signal intensity as a function of range provide information on atmospheric constituents, even when these are too tenuous to be normally visible. The theoretical and technical basis of the technique is described and typical values of the atmospheric optical parameters given. The significance of these parameters to atmospheric and meteorological problems is discussed. While the basic technique can provide valuable information about clouds and other material in the atmosphere, it is not possible to determine particle size and number concentrations precisely. There are also inherent difficulties in evaluating lidar observations. Nevertheless, lidar can provide much useful information as is shown by illustrations. These include lidar observations of: cirrus cloud, showing mountain wave motions; stratification in clear air due to the thermal profile near the ground; determinations of low cloud and visibility along an air-field approach path; and finally the motion and internal structure of clouds of tracer materials (insecticide spray and explosion-caused dust) which demonstrate the use of lidar for studying transport and diffusion processes.
NASA Astrophysics Data System (ADS)
Muguercia, Ivan
Hazardous radioactive liquid waste is the legacy of more than 50 years of plutonium production associated with the United States' nuclear weapons program. It is estimated that more than 245,000 tons of nitrate wastes are stored at facilities such as the single-shell tanks (SST) at the Hanford Site in the state of Washington, and the Melton Valley storage tanks at Oak Ridge National Laboratory (ORNL) in Tennessee. In order to develop an innovative, new technology for the destruction and immobilization of nitrate-based radioactive liquid waste, the United States Department of Energy (DOE) initiated a research project that resulted in the technology known as the Nitrate to Ammonia and Ceramic (NAC) process. However, inasmuch as the nitrate anion is highly mobile and difficult to immobilize, especially in relatively porous cement-based grout, which has been used to date as a method for the immobilization of liquid waste, it presents a major obstacle to environmental clean-up initiatives. Thus, in an effort to contribute to the existing body of knowledge and enhance the efficacy of the NAC process, this research involved the experimental measurement of the rheological and heat transfer behaviors of the NAC product slurry and the determination of the optimal operating parameters for the continuous NAC chemical reaction process. Test results indicate that the NAC product slurry exhibits a typical non-Newtonian flow behavior. Correlation equations for the slurry's rheological properties and heat transfer rate in a pipe flow have been developed; these should prove valuable in the design of a full-scale NAC processing plant. The 20-percent slurry exhibited a typical dilatant (shear thickening) behavior and was in the turbulent flow regime due to its lower viscosity. The 40-percent slurry exhibited a typical pseudoplastic (shear thinning) behavior and remained in the laminar flow regime throughout its experimental range. The reactions were found to be more efficient in the lower temperature range investigated. With respect to leachability, the experimental final NAC ceramic waste form is comparable to the final product of vitrification, the technology chosen by DOE to treat these wastes. As the NAC process has the potential of reducing the volume of nitrate-based radioactive liquid waste by as much as 70 percent, it not only promises to enhance environmental remediation efforts but also to effect substantial cost savings.
Holmes, William J; Darby, Richard AJ; Wilks, Martin DB; Smith, Rodney; Bill, Roslyn M
2009-01-01
Background The optimisation and scale-up of process conditions leading to high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences. Typical experiments rely on varying selected parameters through repeated rounds of trial-and-error optimisation. To rationalise this, several groups have recently adopted the 'design of experiments' (DoE) approach frequently used in industry. Studies have focused on parameters such as medium composition, nutrient feed rates and induction of expression in shake flasks or bioreactors, as well as oxygen transfer rates in micro-well plates. In this study we wanted to generate a predictive model that described small-scale screens and to test its scalability to bioreactors. Results Here we demonstrate how the use of a DoE approach in a multi-well mini-bioreactor permitted the rapid establishment of high yielding production phase conditions that could be transferred to a 7 L bioreactor. Using green fluorescent protein secreted from Pichia pastoris, we derived a predictive model of protein yield as a function of the three most commonly-varied process parameters: temperature, pH and the percentage of dissolved oxygen in the culture medium. Importantly, when yield was normalised to culture volume and density, the model was scalable from mL to L working volumes. By increasing pre-induction biomass accumulation, model-predicted yields were further improved. Yield improvement was most significant, however, on varying the fed-batch induction regime to minimise methanol accumulation so that the productivity of the culture increased throughout the whole induction period. These findings suggest the importance of matching the rate of protein production with the host metabolism. Conclusion We demonstrate how a rational, stepwise approach to recombinant protein production screens can reduce process development time. PMID:19570229
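To make the DoE modeling step concrete, the following sketch fits a quadratic response surface of yield against three coded factors (temperature, pH, dissolved oxygen) over a full-factorial design. The design levels, synthetic yields, and fitted coefficients are placeholders, not the study's measurements or its actual model.

```python
import numpy as np
from itertools import product

# Hypothetical 3-factor, 3-level full-factorial design (coded -1, 0, +1)
# for temperature, pH and dissolved oxygen, as in a small DoE screen.
levels = [-1.0, 0.0, 1.0]
X = np.array(list(product(levels, repeat=3)))

# Synthetic yields with a quadratic optimum (stand-in for measured GFP yield).
rng = np.random.default_rng(2)
y = 10 - (X ** 2).sum(axis=1) + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, len(X))

# Quadratic response-surface model: intercept, linear, interaction, square terms.
def design_matrix(X):
    T, pH, DO = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), T, pH, DO,
                            T * pH, T * DO, pH * DO,
                            T ** 2, pH ** 2, DO ** 2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted coefficients:", np.round(beta, 2))
```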
Pedrini, Paolo; Bragalanti, Natalia; Groff, Claudio
2017-01-01
Recently developed methods that integrate multiple data sources arising from the same ecological processes have typically utilized structured data from well-defined sampling protocols (e.g., capture-recapture and telemetry). Despite this new methodological focus, the value of opportunistic data for improving inference about spatial ecological processes is unclear and, perhaps more importantly, no procedures are available to formally test whether parameter estimates are consistent across data sources and whether they are suitable for integration. Using data collected on the reintroduced brown bear population in the Italian Alps, a population of conservation importance, we combined data from three sources: traditional spatial capture-recapture data, telemetry data, and opportunistic data. We developed a fully integrated spatial capture-recapture (SCR) model that included a model-based test for data consistency, first comparing model estimates using different combinations of data and then, by acknowledging data-type differences, evaluating parameter consistency. We demonstrate that opportunistic data lend themselves naturally to integration within the SCR framework and highlight their value for improving inference about space use and population size. This is particularly relevant in studies of rare or elusive species, where the number of spatial encounters is usually small and where additional observations are of high value. In addition, our results highlight the importance of testing and accounting for inconsistencies in spatial information from structured and unstructured data so as to avoid the risk of spurious or averaged estimates of space use and, consequently, of population size. Our work supports the use of a single modeling framework to combine spatially-referenced data while also accounting for parameter consistency. PMID:28973034
NASA Astrophysics Data System (ADS)
Simoni, Daniele; Lengani, Davide; Ubaldi, Marina; Zunino, Pietro; Dellacasagrande, Matteo
2017-06-01
The effects of free-stream turbulence intensity (FSTI) on the transition process of a pressure-induced laminar separation bubble have been studied for different Reynolds numbers (Re) by means of time-resolved (TR) PIV. Measurements have been performed along a flat plate installed within a double-contoured test section, designed to produce an adverse pressure gradient typical of ultra-high-lift turbine blade profiles. A test matrix spanning 3 FSTI levels and 3 Reynolds numbers has been considered allowing estimation of cross effects of these parameters on the instability mechanisms driving the separated flow transition process. Boundary layer integral parameters, spatial growth rate and saturation level of velocity fluctuations are discussed for the different cases in order to characterize the base flow response as well as the time-mean properties of the Kelvin-Helmholtz instability. The inspection of the instantaneous velocity vector maps highlights the dynamics of the large-scale structures shed near the bubble maximum displacement, as well as the low-frequency motion of the fore part of the separated shear layer. Proper Orthogonal Decomposition (POD) has been implemented to reduce the large amount of data for each condition allowing a rapid evaluation of the group velocity, spatial wavelength and dominant frequency of the vortex shedding process. The dimensionless shedding wave number parameter makes evident that the modification of the shear layer thickness at separation due to Reynolds number variation mainly drives the length scale of the rollup vortices, while higher FSTI levels force the onset of the shedding phenomenon to occur upstream due to the higher velocity fluctuations penetrating into the separating boundary layer.
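Snapshot POD of the kind used above is conventionally computed via a singular value decomposition of the mean-subtracted snapshot matrix. The sketch below applies it to a synthetic travelling-wave field standing in for TR-PIV data; the grid, wave speed, and noise level are hypothetical.

```python
import numpy as np

# Synthetic stand-in for TR-PIV data: n_t snapshots of a velocity field
# flattened to n_p points (a travelling wave plus noise).
rng = np.random.default_rng(3)
x = np.linspace(0, 2 * np.pi, 400)            # n_p spatial points
t = np.linspace(0, 10, 300)                   # n_t time instants
U = np.sin(x[None, :] - 2.0 * t[:, None]) + 0.1 * rng.normal(size=(300, 400))

# Snapshot POD: subtract the temporal mean, then take the SVD.
Uf = U - U.mean(axis=0)
phi_t, s, phi_x = np.linalg.svd(Uf, full_matrices=False)

energy = s ** 2 / np.sum(s ** 2)
print("energy captured by first 4 modes:", np.round(energy[:4], 3))
# phi_x[k] is the k-th spatial mode and phi_t[:, k] * s[k] its time
# coefficient, from which a dominant shedding frequency follows via an FFT.
```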
Fast and non-destructive pore structure analysis using terahertz time-domain spectroscopy.
Markl, Daniel; Bawuah, Prince; Ridgway, Cathy; van den Ban, Sander; Goodwin, Daniel J; Ketolainen, Jarkko; Gane, Patrick; Peiponen, Kai-Erik; Zeitler, J Axel
2018-02-15
Pharmaceutical tablets are typically manufactured by the uni-axial compaction of powder that is confined radially by a rigid die. The directional nature of the compaction process yields not only anisotropic mechanical properties (e.g. tensile strength) but also directional properties of the pore structure in the porous compact. This study derives a new quantitative parameter, Sa, to describe the anisotropy in pore structure of pharmaceutical tablets based on terahertz time-domain spectroscopy measurements. The Sa parameter analysis was applied to three different data sets including tablets with only one excipient (functionalised calcium carbonate), samples with one excipient (microcrystalline cellulose) and one drug (indomethacin), and a complex formulation (granulated product comprising several excipients and one drug). The overall porosity, tablet thickness, initial particle size distribution as well as the granule density were all found to affect the significant structural anisotropies that were observed in all investigated tablets. The Sa parameter provides new insights into the microstructure of a tablet and its potential was particularly demonstrated for the analysis of formulations comprising several components. The results clearly indicate that material attributes, such as particle size and granule density, cause a change of the pore structure, which, therefore, directly impacts the liquid imbibition that is part of the disintegration process. We show, for the first time, how the granule density impacts the pore structure, which will also affect the performance of the tablet. It is thus of great importance to gain a better understanding of the relationship of the physical properties of material attributes (e.g. intragranular porosity, particle shape), the compaction process and the microstructure of the finished product. Copyright © 2017 Elsevier B.V. All rights reserved.
CFD of mixing of multi-phase flow in a bioreactor using population balance model.
Sarkar, Jayati; Shekhawat, Lalita Kanwar; Loomba, Varun; Rathore, Anurag S
2016-05-01
Mixing in bioreactors is known to be crucial for achieving efficient mass and heat transfer, both of which thereby impact not only growth of cells but also product quality. In a typical bioreactor, the rate of transport of oxygen from air is the limiting factor. While higher impeller speeds can enhance mixing, they can also cause severe cell damage. Hence, it is crucial to understand the hydrodynamics in a bioreactor to achieve optimal performance. This article presents a novel approach involving use of computational fluid dynamics (CFD) to model the hydrodynamics of an aerated stirred bioreactor for production of a monoclonal antibody therapeutic via mammalian cell culture. This is achieved by estimating the volume-averaged mass transfer coefficient (kLa) under varying conditions of the process parameters. The process parameters that have been examined include the impeller rotational speed and the flow rate of the incoming gas through the sparger inlet. To model the two-phase flow and turbulence, an Eulerian-Eulerian multiphase model and a k-ε turbulence model have been used, respectively. These have further been coupled with a population balance model to incorporate the various interphase interactions that lead to coalescence and breakage of bubbles. We have successfully demonstrated the utility of CFD as a tool to predict the size distribution of bubbles as a function of process parameters and an efficient approach for obtaining optimized mixing conditions in the reactor. The proposed approach is significantly more time- and resource-efficient than the all-experimental, trial-and-error approach that is presently used. © 2016 American Institute of Chemical Engineers. Biotechnol. Prog., 32:613-628, 2016.
High-Level Performance Modeling of SAR Systems
NASA Technical Reports Server (NTRS)
Chen, Curtis
2006-01-01
SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.
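To give a flavour of the high-level metrics such a tool evaluates, the sketch below computes a single-pulse signal-to-noise ratio from the standard radar equation for one hypothetical design point. It omits the azimuth-compression processing gain a real SAR link budget would add, and it is not SAUSAGE's actual interface or model; every value is an assumed placeholder.

```python
import numpy as np

# Hypothetical design parameters for one design point (not SAUSAGE's
# actual interface or models).
P_t    = 2e3                 # peak transmit power [W]
G      = 10 ** (35 / 10)     # antenna gain (35 dB)
lam    = 0.24                # wavelength [m], L-band
sigma0 = 10 ** (-10 / 10)    # backscatter coefficient (-10 dB)
A_res  = 5.0 * 10.0          # resolution cell area [m^2]
R      = 800e3               # slant range [m]
k_B, T_sys, B, F = 1.38e-23, 290.0, 80e6, 10 ** (4 / 10)

# Radar-equation SNR for a single pulse and one resolution cell; SAR
# processing gain (coherent integration over many pulses) is omitted.
snr = (P_t * G**2 * lam**2 * sigma0 * A_res) / (
      (4 * np.pi)**3 * R**4 * k_B * T_sys * B * F)
print(f"single-pulse SNR = {10 * np.log10(snr):.1f} dB")
```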
Detection of damage in welded structure using experimental modal data
NASA Astrophysics Data System (ADS)
Abu Husain, N.; Ouyang, H.
2011-07-01
A typical automotive structure could contain thousands of spot weld joints that contribute significantly to the vehicle's structural stiffness and dynamic characteristics. However, some of these joints may be imperfect or even absent during the manufacturing process, and they are also highly susceptible to damage due to operational and environmental conditions during the vehicle lifetime. Therefore, early detection and estimation of damage are important so that necessary actions can be taken to avoid further problems. Changes in physical parameters due to the existence of damage in a structure often lead to alteration of the vibration modes, demonstrating the dependency between the vibration characteristics and the physical properties of structures. A sensitivity-based model updating method, performed using a combination of MATLAB and NASTRAN, has been selected for the purpose of this work. The updating procedure is regarded as parameter identification, which aims to bring the numerical prediction as close as possible to the measured natural frequency and mode shape data of the damaged structure in order to identify the damage parameters (characterised by the reductions in the Young's modulus of the weld patches to indicate the loss of material/stiffness at the damage region).
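The updating idea can be illustrated on a toy problem: the sketch below identifies a stiffness-reduction factor in a 4-DOF spring-mass chain by least-squares matching of natural frequencies, a crude stand-in for updating weld-patch Young's moduli in a MATLAB/NASTRAN model. The chain, stiffness values, and damage factor are all hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 4-DOF spring-mass chain; "damage" scales the second spring's
# stiffness, standing in for a reduced Young's modulus at a weld patch.
def kmat(alpha):
    k = np.array([1.0, alpha, 1.0, 1.0]) * 1e4   # springs: ground-1, 1-2, 2-3, 3-4
    K = np.zeros((4, 4))
    K[0, 0] += k[0]
    for i in range(1, 4):                        # spring i links DOFs i-1 and i
        K[i - 1, i - 1] += k[i]
        K[i, i] += k[i]
        K[i - 1, i] -= k[i]
        K[i, i - 1] -= k[i]
    return K

def freqs(alpha):
    return np.sqrt(np.linalg.eigvalsh(kmat(alpha)))   # unit masses -> omega

measured = freqs(0.7)                                 # "damaged" structure
res = least_squares(lambda a: freqs(a[0]) - measured,
                    x0=[1.0], bounds=(0.01, 2.0))
print("identified stiffness factor:", round(float(res.x[0]), 3))  # ~0.7
```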
DIRT: The Dust InfraRed Toolbox
NASA Astrophysics Data System (ADS)
Pound, M. W.; Wolfire, M. G.; Mundy, L. G.; Teuben, P. J.; Lord, S.
We present DIRT, a Java applet geared toward modeling a variety of processes in envelopes of young and evolved stars. Users can automatically and efficiently search grids of pre-calculated models to fit their data. A large set of physical parameters and dust types are included in the model database, which contains over 500,000 models. The computing cluster for the database is described in the accompanying paper by Teuben et al. (2000). A typical user query will return about 50-100 models, which the user can then interactively filter as a function of 8 model parameters (e.g., extinction, size, flux, luminosity). A flexible, multi-dimensional plotter allows users to view the models, rotate them, tag specific parameters with color or symbol size, and probe individual model points. For any given model, auxiliary plots such as dust grain properties, radial intensity profiles, and the flux as a function of wavelength and beamsize can be viewed. The user can fit observed data to several models simultaneously and see the results of the fit; the best fit is automatically selected for plotting. The URL for this project is http://dustem.astro.umd.edu.
NASA Astrophysics Data System (ADS)
Olivares, Irene; Angelova, Todora I.; Pinilla-Cienfuegos, Elena; Sanchis, Pablo
2016-05-01
The electro-optic Pockels effect may be generated in silicon photonics structures by breaking the crystal symmetry by means of a highly stressing cladding layer (typically silicon nitride, SiN) deposited on top of the silicon waveguide. In this work, the influence of the waveguide parameters on the strain distribution and its overlap with the optical mode to enhance the Pockels effect has been analyzed. The optimum waveguide structure has been designed based on the definition and quantification of a figure of merit. The fabrication of highly stressing SiN layers by PECVD has also been optimized to characterize the designed structures. The residual stress has been controlled during the growth process by analyzing the influence of the main deposition parameters. Two identical samples with low and high stress conditions were then fabricated and electro-optically characterized to test the induced Pockels effect and the influence of carrier effects. Electro-optical modulation, which could be attributed to the Pockels effect, was measured only in the sample with the highly stressing SiN layer. Nevertheless, the influence of carriers was also observed, making additional experiments necessary to decouple the two effects.
Compression for an effective management of telemetry data
NASA Technical Reports Server (NTRS)
Arcangeli, J.-P.; Crochemore, M.; Hourcastagnou, J.-N.; Pin, J.-E.
1993-01-01
A Technological DataBase (T.D.B.) records all the values taken by the physical on-board parameters of a satellite since launch time. The amount of temporal data is very large (about 15 Gbytes for the satellite TDF1), and an efficient system must allow users fast access to any value. This paper presents a new solution for T.D.B. management. The main feature of our new approach is the use of lossless data compression methods. Several parametrizable data compression algorithms based on substitution, relative difference and run-length encoding are available. Each of them is dedicated to a specific type of variation of the parameters' values. For each parameter, an analysis of stability is performed at decommutation time, and then the best method is chosen and run. A prototype intended to process different sorts of satellites has been developed. Its performance is well beyond the requirements and proves that data compression is both time and space efficient. For instance, the amount of data for TDF1 has been reduced to 1.05 Gbytes (a compression ratio of 1/13), and the access time for a typical query has been reduced from 975 seconds to 14 seconds.
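Run-length encoding, one of the methods named above, suits parameters that hold a value for long stretches. The sketch below encodes a slowly varying housekeeping stream as (value, count) runs; the stream and the in-memory format are hypothetical, not the T.D.B.'s actual encoding.

```python
def rle_encode(values):
    """Run-length encode a sample sequence as a list of (value, count) runs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

# A telemetry parameter that is stable for long stretches compresses well.
stream = [21] * 500 + [22] * 300 + [21] * 200
runs = rle_encode(stream)
assert rle_decode(runs) == stream     # lossless round trip
print(f"{len(stream)} samples -> {len(runs)} runs")
```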
Coupling SPH and thermochemical models of planets: Methodology and example of a Mars-sized body
NASA Astrophysics Data System (ADS)
Golabek, G. J.; Emsenhuber, A.; Jutzi, M.; Asphaug, E. I.; Gerya, T. V.
2018-02-01
Giant impacts have been suggested to explain various characteristics of terrestrial planets and their moons. However, so far in most models only the immediate effects of the collisions have been considered, while the long-term interior evolution of the impacted planets was not studied. Here we present a new approach, combining 3-D shock physics collision calculations with 3-D thermochemical interior evolution models. We apply the combined methods to a demonstration example of a giant impact on a Mars-sized body, using typical collisional parameters from previous studies. While the material parameters (equation of state, rheology model) used in the impact simulations can have some effect on the long-term evolution, we find that the impact angle is the most crucial parameter for the resulting spatial distribution of the newly formed crust. The results indicate that a dichotomous crustal pattern can form after a head-on collision, while this is not the case when considering a more likely grazing collision. Our results underline that end-to-end 3-D calculations of the entire process are required to study in the future the effects of large-scale impacts on the evolution of planetary interiors.
Laser hybrid joining of plastic and metal components for lightweight components
NASA Astrophysics Data System (ADS)
Rauschenberger, J.; Cenigaonaindia, A.; Keseberg, J.; Vogler, D.; Gubler, U.; Liébana, F.
2015-03-01
Plastic-metal hybrids are replacing all-metal structures in the automotive, aerospace and other industries at an accelerated rate. The trend towards lightweight construction increasingly demands the usage of polymer components in drive trains, car bodies, gaskets and other applications. However, laser joining of polymers to metals presents significantly greater challenges compared with standard welding processes. We present recent advances in laser hybrid joining processes. Firstly, several metal pre-structuring methods, including selective laser melting (SLM), are characterized and their ability to provide undercut structures in the metal is assessed. Secondly, process parameter ranges for hybrid joining of a number of metals (steel, stainless steel, etc.) and polymers (MABS, PA6.6-GF35, PC, PP) are given. Both transmission and direct laser joining processes are presented. Optical heads and clamping devices specifically tailored to the hybrid joining process are introduced. Extensive lap-shear test results are shown that demonstrate that joint strengths exceeding the base material strength (cohesive failure) can be reached with metal-polymer joining. Weathering test series prove that such joints are able to withstand environmental influences typical of the targeted fields of application. The obtained results pave the way toward implementing metal-polymer joints in manufacturing processes.
NASA Astrophysics Data System (ADS)
Ge, Lichao; Feng, Hongcui; Xu, Chang; Zhang, Yanwei; Wang, Zhihua
2018-02-01
This study investigates the influence of microwave irradiation on the coal composition, pore structure, coal rank, and combustion characteristics of typical brown coals in China. Results show that the upgrading process significantly decreased the inherent moisture and increased the calorific value and fixed carbon content. After upgrading, the pore distribution extended to the micropore region, oxygen functional groups were reduced and destroyed, and the apparent aromaticity increased, suggesting an improvement in the coal rank. Based on thermogravimetric analysis, the combustion processes of the upgraded coals were delayed toward the high-temperature region, and the ignition, peak and burnout temperatures increased. Based on the average combustion rate and the comprehensive combustion parameter, the upgraded coals performed better than the raw brown coals and a high-rank coal. In the ignition and burnout segments, the activation energy increased, but it decreased in the combustion stage.
Cappannella, Elena; Benucci, Ilaria; Lombardelli, Claudio; Liburdi, Katia; Bavaro, Teodora; Esti, Marco
2016-11-01
Lysozyme from hen egg white (HEWL) was covalently immobilized on spherical supports based on microbial chitosan in order to develop a system for the continuous, efficient and food-grade enzymatic lysis of lactic bacteria (Oenococcus oeni) in white and red wine. The objective is to limit the sulfur dioxide dosage required to control malolactic fermentation at a cell concentration typical of this process. The immobilization procedure was optimized in batch mode, evaluating the enzyme loading, the specific activity, and the kinetic parameters in model wine. Subsequently, a bench-scale fluidized-bed reactor was developed, applying the optimized process conditions. HEWL appeared more effective in the immobilized form than in the free one when the reactor was applied in real white and red wine. This preliminary study suggests that covalent immobilization renders the enzyme less sensitive to the inhibitory effect of wine flavans. Copyright © 2016 Elsevier Ltd. All rights reserved.
Garcia-Allende, P Beatriz; Mirapeix, Jesus; Conde, Olga M; Cobo, Adolfo; Lopez-Higuera, Jose M
2009-01-01
Plasma optical spectroscopy is widely employed in on-line welding diagnostics. The determination of the plasma electron temperature, which is typically selected as the output monitoring parameter, requires the identification of the atomic emission lines. As a consequence, additional processing stages are required, with a direct impact on the real-time performance of the technique. The line-to-continuum method is a feasible alternative spectroscopic approach, and it is particularly interesting in terms of its computational efficiency. However, the monitoring signal highly depends on the chosen emission line. In this paper, a feature selection methodology is proposed to solve the uncertainty regarding the selection of the optimum spectral band, which allows the employment of the line-to-continuum method for on-line welding diagnostics. Field tests have been conducted to demonstrate the feasibility of the solution.
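The line-to-continuum quantity itself is straightforward to compute: the ratio of the peak intensity in a narrow window around an emission line to the mean continuum level in flanking windows. The sketch below evaluates it on a synthetic spectrum; the wavelengths and window widths are hypothetical choices, not the paper's bands.

```python
import numpy as np

def line_to_continuum(wl, intensity, line_wl, half_width=0.5, cont_offset=2.0):
    """Ratio of peak line intensity to the nearby continuum level.

    The line window is line_wl +/- half_width; the continuum is sampled in
    two flanking windows offset by cont_offset on either side (all in the
    same units as wl; these band choices are hypothetical).
    """
    in_line = np.abs(wl - line_wl) <= half_width
    in_cont = (np.abs(wl - (line_wl - cont_offset)) <= half_width) | \
              (np.abs(wl - (line_wl + cont_offset)) <= half_width)
    return intensity[in_line].max() / intensity[in_cont].mean()

# Synthetic plasma spectrum: flat continuum plus one Gaussian emission line.
wl = np.linspace(510, 530, 2000)
spec = 100 + 900 * np.exp(-((wl - 520.0) / 0.2) ** 2)
print("line-to-continuum ratio:", round(line_to_continuum(wl, spec, 520.0), 1))
```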
Development of a nondestructive evaluation method for FRP bridge decks
NASA Astrophysics Data System (ADS)
Brown, Jeff; Fox, Terra
2010-05-01
Open steel grids are typically used on bridges to minimize the weight of the bridge deck and wearing surface. These grids, however, require frequent maintenance and exhibit other durability concerns related to fatigue cracking and corrosion. Bridge decks constructed from composite materials, such as a fiber-reinforced polymer (FRP), are strong and lightweight; they also offer improved rideability, reduced noise levels and less maintenance, and are relatively easy to install compared to steel grids. This research is aimed at developing an inspection protocol for FRP bridge decks using infrared thermography. The finite element method was used to simulate the heat transfer process and determine the optimal heating and data acquisition parameters to be used when inspecting FRP bridge decks in the field. It was demonstrated that thermal imaging could successfully identify features of the FRP bridge deck to depths of 1.7 cm using a phase analysis process.
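To suggest how heating and acquisition parameters can be screened numerically, the sketch below runs a one-dimensional explicit finite-difference model of a flux-heated plate and reports the front-face temperature response. It is a crude stand-in for the paper's finite element simulations, with generic FRP-like properties and an invented heating schedule rather than the study's values.

```python
import numpy as np

# 1-D explicit finite-difference model of a surface-heated FRP plate
# (material properties are generic FRP values, not the study's).
k, rho, cp = 0.4, 1900.0, 1200.0      # W/m-K, kg/m^3, J/kg-K
alpha = k / (rho * cp)                # thermal diffusivity
L, nx = 0.02, 81                      # 2 cm thick, 81 nodes
dx = L / (nx - 1)
dt = 0.4 * dx ** 2 / alpha            # stable explicit time step
q = 1000.0                            # absorbed heat flux [W/m^2]
t_heat, t_total = 30.0, 120.0         # heat for 30 s, observe for 120 s

T = np.zeros(nx)                      # temperature rise above ambient
for step in range(int(t_total / dt)):
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + alpha * dt / dx ** 2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    flux = q if step * dt < t_heat else 0.0   # heating phase, then cool-down
    T[0] = T[1] + flux * dx / k               # front-face flux boundary
    T[-1] = T[-2]                             # insulated back face
print(f"front-face temperature rise after {t_total:.0f} s: {T[0]:.2f} K")
```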
Experimental Study on the Axis Line Deflection of Ti6Al4V Titanium Alloy in Gun-Drilling Process
NASA Astrophysics Data System (ADS)
Li, Liang; Xue, Hu; Wu, Peng
2018-01-01
Titanium alloy is widely used in the aerospace industry, but it is also a typical difficult-to-cut material. During deep-hole drilling of the shaft parts of a certain large aircraft, problems arise with poor surface roughness, chip control and axis deviation, so experiments on gun-drilling of Ti6Al4V titanium alloy were carried out to measure the axis line deflection, diameter error and surface integrity, and the causes of these errors were analyzed. Optimized process parameters were then obtained for gun-drilling of Ti6Al4V titanium alloy with a hole diameter of 17 mm. Finally, a deep hole of 860 mm was drilled with a comprehensive error smaller than 0.2 mm and a surface roughness below 1.6 μm.
Development of Detonation Flame Sprayed Cu-Base Coatings Containing Large Ceramic Particles
NASA Astrophysics Data System (ADS)
Tillmann, Wolfgang; Vogli, Evelina; Nebel, Jan
2007-12-01
Metal-matrix composites (MMCs) containing large ceramic particles as superabrasives are typically used for grinding stone, minerals, and concrete. Sintering and brazing are the key manufacturing technologies for grinding tool production. However, restricted geometry flexibility and the absence of repair possibilities for damaged tool surfaces, as well as difficulties of controlling material interfaces, are the main weaknesses of these production processes. Thermal spraying offers the possibility to avoid these restrictions. The research for this paper investigated a fabrication method based on the use of detonation flame spraying technology to bond large superabrasive particles (150-600 μm, needed for grinding minerals and stones) in a metallic matrix. Layer morphology and bonding quality are evaluated with respect to superabrasive material, geometry, spraying, and powder-injection parameters. The influence of process temperature and the possibilities of thermal treatment of MMC layers are analyzed.
NASA Technical Reports Server (NTRS)
Miller, W. S.
1974-01-01
The cryogenic refrigerator thermal design calculations establish the design approach and the basic sizing of the machine's elements. After the basic design is defined, effort concentrates on matching the thermodynamic design with that of the heat transfer devices (heat exchangers and regenerators). Typically, the heat transfer device configurations and volumes are adjusted to improve their heat transfer and pressure drop characteristics. These adjustments imply that changes be made to the active displaced volumes, compensating for the influence of the heat transfer devices on the thermodynamic processes of the working fluid. Then, once the active volumes are changed, the heat transfer devices require adjustment to account for the variations in flows, pressure levels, and heat loads. This iterative process is continued until the thermodynamic cycle parameters match the design of the heat transfer devices. By examining several matched designs, a near-optimum refrigerator is selected.
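The iteration described above is essentially a fixed-point loop between two models: the thermodynamic side proposes a displaced volume, the heat-exchanger side returns a pressure-drop penalty, and the exchange repeats until neither changes. The sketch below caricatures this with two invented one-line relations; neither is a real correlation.

```python
# Toy fixed-point sketch of the iterative matching described above.
# Both relations are invented placeholders, not real design correlations.

def required_volume(dp):
    # Larger pressure drop -> more displaced volume for the same lift.
    return 100.0 * (1.0 + 0.8 * dp)

def pressure_drop(vol):
    # Bigger volume -> higher flow -> more pressure drop in the exchangers.
    return 0.05 + 0.002 * vol

vol = 100.0
for it in range(50):
    dp = pressure_drop(vol)           # heat-exchanger model responds
    vol_new = required_volume(dp)     # thermodynamic model re-sizes
    if abs(vol_new - vol) < 1e-6:     # converged: designs are matched
        break
    vol = vol_new
print(f"converged after {it + 1} iterations: volume={vol:.3f}, dp={dp:.4f}")
```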
Classical simulation of quantum error correction in a Fibonacci anyon code
NASA Astrophysics Data System (ADS)
Burton, Simon; Brell, Courtney G.; Flammia, Steven T.
2017-02-01
Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 × 128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.
PI controller design for indirect vector controlled induction motor: A decoupling approach.
Jain, Jitendra Kr; Ghosh, Sandip; Maity, Somnath; Dworak, Pawel
2017-09-01
Decoupling of the stator currents is important for smoother torque response of indirect vector controlled induction motors. Typically, feedforward decoupling is used to take care of current coupling, which requires exact knowledge of motor parameters, additional circuitry and signal processing. In this paper, a method is proposed to design the regulating proportional-integral gains that minimize coupling without any requirement of the additional decoupler. The variation of the coupling terms for change in load torque is considered as the performance measure. An iterative linear-matrix-inequality-based H∞ control design approach is used to obtain the controller gains. A comparison between the feedforward and the proposed decoupling schemes is presented through simulation and experimental results. The results show that the proposed scheme is simple yet effective, even without an additional decoupling block or extra signal-processing burden. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Toward an Astrophysical Theory of Chondrites
NASA Technical Reports Server (NTRS)
Shang, Hsien; Shu, Frank H.; Lee, Typhoon
1996-01-01
Sunlike stars are born with disks. Based on our recently developed model of how a magnetized new star interacts with its surrounding accretion disk, we advanced an astrophysical theory for the early solar system. The aerodynamic drag of a magnetocentrifugally driven wind blowing from the inner edge of a shaded disk could lift solid bodies and expose them to the heat of direct sunlight while material is still accreting onto the protosun. Chondrules, calcium-aluminum-rich inclusions (CAIs), and rims could form during this flight for typical self-consistent parameters of the outflow in different stages of star formation. The process provides a natural sorting mechanism that explains the size distribution of CAIs and chondrules, as well as their associated rims. Chondritic bodies then subsequently form by compaction of the processed solids with the ambient nebular dust comprising the matrices after their reentry at great distances from the original launch radius.