Operant Variability: Procedures and Processes
ERIC Educational Resources Information Center
Machado, Armando; Tonneau, Francois
2012-01-01
Barba's (2012) article deftly weaves three main themes in one argument about operant variability. From general theoretical considerations on operant behavior (Catania, 1973), Barba derives methodological guidelines about response differentiation and applies them to the study of operant variability. In the process, he uncovers unnoticed features of…
Brooks, Robin; Thorpe, Richard; Wilson, John
2004-11-11
A new mathematical treatment of alarms that considers them as multi-variable interactions between process variables has provided the first-ever method to calculate values for alarm limits. This has resulted in substantial reductions in false alarms and hence in alarm annunciation rates in field trials. It has also unified alarm management, process control and product quality control into a single mathematical framework, so that operations improvement, and hence economic benefit, is obtained at the same time as increased process safety. Additionally, an algorithm has been developed that advises what changes should be made to manipulable process variables to clear an alarm. The multi-variable Best Operating Zone at the heart of the method is derived from existing historical data using equation-free methods. It does not require a first-principles process model or an expensive series of process identification experiments. Integral to the method is a new-format Process Operator Display that uses only existing variables to fully describe the multi-variable operating space. This combination of features makes it an affordable and maintainable solution for small plants and single items of equipment as well as for the largest plants. In many cases, it also provides the justification for the investments about to be made, or already made, in process historian systems. Field trials of the new geometric process control (GPC) method have been and are being conducted at IneosChlor and Mallinckrodt Chemicals, both in the UK, to improve the quality of both process operations and product by providing process alarms and alerts of much higher quality than ever before. The paper describes the methods used, including a simple visual method for alarm rationalisation that quickly delivers large sets of consistent alarm limits, and the extension to full alert management, with highlights from the field trials to indicate the overall effectiveness of the method in practice.
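The Best Operating Zone is derived from historical data without a process model; as a rough analogy, a multivariable operating envelope can be fitted from historical logs and used to raise alarms on variable interactions rather than on single-variable thresholds. The sketch below uses a Mahalanobis-distance ellipsoid for brevity; it is not the GPC geometry described above, and all data and thresholds are invented.

```python
import numpy as np
from scipy import stats

def fit_envelope(X):
    """Fit a multivariable operating envelope from historical data.

    X: (n_samples, n_vars) array of process variables logged under
    normal operation. Returns the mean, inverse covariance, and a
    chi-square alarm threshold at 99% confidence.
    """
    mu = X.mean(axis=0)
    Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
    limit = stats.chi2.ppf(0.99, df=X.shape[1])  # squared-distance limit
    return mu, Sigma_inv, limit

def alarm(x, mu, Sigma_inv, limit):
    """Alarm when the squared Mahalanobis distance of the current
    sample x exceeds the limit derived from historical data."""
    d2 = (x - mu) @ Sigma_inv @ (x - mu)
    return d2 > limit

# Usage: historical data -> envelope; new sample -> alarm decision.
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(5000, 4))          # four interacting variables
mu, Si, lim = fit_envelope(X_hist)
print(alarm(np.array([3.5, 0.0, 0.0, 0.0]), mu, Si, lim))
```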
Advanced multivariable control of a turboexpander plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altena, D.; Howard, M.; Bullin, K.
1998-12-31
This paper describes an application of advanced multivariable control on a natural gas plant and compares its performance to the previous conventional feed-back control. This control algorithm utilizes simple models from existing plant data and/or plant tests to hold the process at the desired operating point in the presence of disturbances and changes in operating conditions. The control software is able to accomplish this due to effective handling of process variable interaction, constraint avoidance and feed-forward of measured disturbances. The economic benefit of improved control lies in operating closer to the process constraints while avoiding significant violations. The South Texas facility where this controller was implemented experienced reduced variability in process conditions which increased liquids recovery because the plant was able to operate much closer to the customer specified impurity constraint. An additional benefit of this implementation of multivariable control is the ability to set performance criteria beyond simple setpoints, including process variable constraints, relative variable merit and optimizing use of manipulated variables. The paper also details the control scheme applied to the complex turboexpander process and some of the safety features included to improve reliability.
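The abstract does not name the control algorithm, but the description (simple step-response models from plant tests, feed-forward of measured disturbances, operation near constraints) matches the family of dynamic-matrix-style predictive controllers. A minimal unconstrained sketch of that family, with all coefficients and tuning values illustrative:

```python
import numpy as np

# Step-response coefficients identified from plant tests (illustrative).
step = np.array([0.0, 0.12, 0.30, 0.48, 0.60, 0.66, 0.69, 0.70])

P, M = 8, 3          # prediction horizon, control horizon
lam = 0.1            # move-suppression (tuning) weight

# Dynamic matrix: column j is the step response delayed by j samples,
# i.e. the effect of a control move made j steps in the future.
A = np.zeros((P, M))
for j in range(M):
    A[j:, j] = step[: P - j]

def control_moves(error):
    """Least-squares future input moves that drive the predicted
    deviation from setpoint (after the free response) to zero."""
    H = A.T @ A + lam * np.eye(M)
    return np.linalg.solve(H, A.T @ error)

e = np.full(P, 1.0)        # unit setpoint step, flat free response
print(control_moves(e))    # only the first move is sent to the plant
```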
Preparation of Effective Operating Manuals to Support Waste Management Plant Operator Training
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S. R.
2003-02-25
Effective plant operating manuals used in a formal training program can make the difference between a successful operation and a failure. Once the plant process design and control strategies have been fixed, equipment has been ordered, and the plant is constructed, the only major variable affecting success is the capability of plant operating personnel. It is essential that the myriad details concerning plant operation are documented in comprehensive operating manuals suitable for training the non-technical personnel that will operate the plant. These manuals must cover the fundamental principles of each unit operation including how each operates, what process variables are important, and the impact of each variable on the overall process. In addition, operators must know the process control strategies, process interlocks, how to respond to alarms, each of the detailed procedures required to start up and optimize the plant, and every control loop, including when it is appropriate to take manual control. More than anything else, operating mistakes during the start-up phase can lead to substantial delays in achieving design processing rates as well as to problems with government authorities if environmental permit limits are exceeded. The only way to assure return on plant investment is to ensure plant operators have the knowledge to properly run the plant from the outset. A comprehensive set of operating manuals specifically targeted toward plant operators and supervisors written by experienced operating personnel is the only effective way to provide the necessary information for formal start-up training.
Operant Variability: A Conceptual Analysis
ERIC Educational Resources Information Center
Barba, Lourenco de Souza
2012-01-01
Some researchers claim that variability is an operant dimension of behavior. The present paper reviews the concept of operant behavior and emphasizes that differentiation is the behavioral process that demonstrates an operant relation. Differentiation is conceived as change in the overlap between two probability distributions: the distribution of…
Study of process variables associated with manufacturing hermetically-sealed nickel-cadmium cells
NASA Technical Reports Server (NTRS)
Miller, L.
1974-01-01
A two year study of the major process variables associated with the manufacturing process for sealed, nickel-cadmium, aerospace cells is summarized. Effort was directed toward identifying the major process variables associated with a manufacturing process, experimentally assessing each variable's effect, and imposing the necessary changes (optimization) and controls for the critical process variables to improve results and uniformity. A critical process variable associated with the sintered nickel plaque manufacturing process was identified as the manual forming operation. Critical process variables identified with the positive electrode impregnation/polarization process were impregnation solution temperature, free acid content, vacuum impregnation, and sintered plaque strength. Positive and negative electrodes were identified as a major source of carbonate contamination in sealed cells.
Quality of narrative operative reports in pancreatic surgery
Wiebe, Meagan E.; Sandhu, Lakhbir; Takata, Julie L.; Kennedy, Erin D.; Baxter, Nancy N.; Gagliardi, Anna R.; Urbach, David R.; Wei, Alice C.
2013-01-01
Background Quality in health care can be evaluated using quality indicators (QIs). Elements contained in the surgical operative report are potential sources for QI data, but little is known about the completeness of the narrative operative report (NR). We evaluated the completeness of the NR for patients undergoing a pancreaticoduodenectomy. Methods We reviewed NRs for patients undergoing a pancreaticoduodenectomy over a 1-year period. We extracted 79 variables related to patient and narrator characteristics, process of care measures, surgical technique and oncology-related outcomes by document analysis. Data were coded and evaluated for completeness. Results We analyzed 74 NRs. The median number of variables reported was 43.5 (range 13–54). Variables related to surgical technique were most complete. Process of care and oncology-related variables were often omitted. Completeness of the NR was associated with longer operative duration. Conclusion The NRs were often incomplete and of poor quality. Important elements, including process of care and oncology-related data, were frequently missing. Thus, the NR is an inadequate data source for QI. Development and use of alternative reporting methods, including standardized synoptic operative reports, should be encouraged to improve documentation of care and serve as a measure of quality of surgical care. PMID:24067527
Glosser, D.; Kutchko, B.; Benge, G.; ...
2016-03-21
Foamed cement is a critical component for wellbore stability. The mechanical performance of a foamed cement depends on its microstructure, which in turn depends on the preparation method and attendant operational variables. Determination of cement stability for field use is based on laboratory testing protocols governed by API Recommended Practice 10B-4 (API RP 10B-4, 2015). However, laboratory and field operational variables contrast considerably in terms of scale, as well as slurry mixing and foaming processes. In this paper, laboratory and field operational processes are characterized within a physics-based framework. It is shown that the "atomization energy" imparted by the high pressure injection of nitrogen gas into the field mixed foamed cement slurry is, by a significant margin, the highest energy process, and has a major impact on the void system in the cement slurry. There is no analog for this high energy exchange in current laboratory cement preparation and testing protocols. Quantifying the energy exchanges across the laboratory and field processes provides a basis for understanding relative impacts of these variables on cement structure, and can ultimately lead to the development of practices to improve cement testing and performance.
NASA Astrophysics Data System (ADS)
Ruiz-Cárcel, C.; Jaramillo, V. H.; Mba, D.; Ottewill, J. R.; Cao, Y.
2016-01-01
The detection and diagnosis of faults in industrial processes is a very active field of research due to the reduction in maintenance costs achieved by the implementation of process monitoring algorithms such as Principal Component Analysis, Partial Least Squares or more recently Canonical Variate Analysis (CVA). Typically the condition of rotating machinery is monitored separately using vibration analysis or other specific techniques. Conventional vibration-based condition monitoring techniques are based on the tracking of key features observed in the measured signal. Typically steady-state loading conditions are required to ensure consistency between measurements. In this paper, a technique based on merging process and vibration data is proposed with the objective of improving the detection of mechanical faults in industrial systems working under variable operating conditions. The capabilities of CVA for detection and diagnosis of faults were tested using experimental data acquired from a compressor test rig where different process faults were introduced. Results suggest that the combination of process and vibration data can effectively improve the detectability of mechanical faults in systems working under variable operating conditions.
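For readers unfamiliar with CVA, the core computation is a canonical correlation between stacked past and future measurement vectors, with a Hotelling-style statistic on the retained canonical states serving as the health indicator. The numpy sketch below shows one common formulation under stated simplifications (ridge-regularized covariances, T² statistic only); it is not the authors' exact implementation, and all parameter values are illustrative.

```python
import numpy as np

def cva_train(X, lags=10, n_states=5):
    """Train a canonical variate analysis (CVA) monitor on data
    collected under healthy operation.

    X: (n_samples, n_vars) array of combined process and vibration
    features. Returns (J, mu): J maps a past vector
    p_t = [x_{t-1}, ..., x_{t-lags}] to canonical states; mu is the
    training mean of the past vectors.
    """
    t = np.arange(lags, X.shape[0] - lags)
    past = np.hstack([X[t - k] for k in range(1, lags + 1)])
    fut = np.hstack([X[t + k] for k in range(lags)])
    mu = past.mean(axis=0)
    p, f = past - mu, fut - fut.mean(axis=0)

    eps = 1e-8  # ridge term keeps the Cholesky factorizations stable
    Lp = np.linalg.cholesky(p.T @ p + eps * np.eye(p.shape[1]))
    Lf = np.linalg.cholesky(f.T @ f + eps * np.eye(f.shape[1]))
    # Whitened cross-covariance; its SVD yields the canonical variates.
    H = np.linalg.solve(Lf, f.T @ p) @ np.linalg.inv(Lp).T
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    J = Vt[:n_states] @ np.linalg.inv(Lp)
    return J, mu

def health_indicator(J, mu, past_vec):
    """Hotelling-style T^2 on the retained canonical states; values far
    above the training baseline suggest a developing fault."""
    z = J @ (past_vec - mu)
    return float(z @ z)
```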
What carries a mediation process? Configural analysis of mediation.
von Eye, Alexander; Mun, Eun Young; Mair, Patrick
2009-09-01
Mediation is a process that links a predictor and a criterion via a mediator variable. Mediation can be full or partial. This well-established definition operates at the level of variables even if they are categorical. In this article, two new approaches to the analysis of mediation are proposed. Both of these approaches focus on the analysis of categorical variables. The first involves mediation analysis at the level of configurations instead of variables. Thus, mediation can be incorporated into the arsenal of methods of analysis for person-oriented research. Second, it is proposed that Configural Frequency Analysis (CFA) can be used for both exploration and confirmation of mediation relationships among categorical variables. The implications of using CFA are first that mediation hypotheses can be tested at the level of individual configurations instead of variables. Second, this approach leaves the door open for different types of mediation processes to exist within the same set. Using a data example, it is illustrated that aggregate-level analysis can overlook mediation processes that operate at the level of individual configurations.
EXAMINING THE TEMPORAL VARIABILITY OF AMMONIA AND NITRIC OXIDE EMISSIONS FROM AGRICULTURAL PROCESSES
This paper examines the temporal variability of airborne emissions of ammonia from livestock operations and fertilizer application and nitric oxide from soils. In the United States, the livestock operations and fertilizer categories comprise the majority of the ammonia emissions...
Group interaction and flight crew performance
NASA Technical Reports Server (NTRS)
Foushee, H. Clayton; Helmreich, Robert L.
1988-01-01
The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.
Jurick, Sarah M; Crocker, Laura D; Sanderson-Cimino, Mark; Keller, Amber V; Trenova, Liljana S; Boyd, Briana L; Twamley, Elizabeth W; Rodgers, Carie S; Schiehser, Dawn M; Aupperle, Robin L; Jak, Amy J
Posttraumatic stress disorder (PTSD), history of mild traumatic brain injury (mTBI), and executive function (EF) difficulties are prevalent in Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) Veterans. We evaluated the contributions of injury variables, lower-order cognitive component processes (processing speed/attention), and psychological symptoms to EF. OEF/OIF Veterans (N = 65) with PTSD and history of mTBI were administered neuropsychological tests of EF and self-report assessments of PTSD and depression. Those impaired on one or more EF measures had higher PTSD and depression symptoms and lower processing speed/attention performance than those with intact performance on all EF measures. Across participants, poorer attention/processing speed performance and higher psychological symptoms were associated with worse performance on specific aspects of EF (eg, inhibition and switching) even after accounting for injury variables. Although direct relationships between EF and injury variables were equivocal, there was an interaction between measures of injury burden and processing speed/attention such that those with greater injury burden exhibited significant and positive relationships between processing speed/attention and inhibition/switching, whereas those with lower injury burden did not. Psychological symptoms as well as lower-order component processes of EF (attention and processing speed) contribute significantly to executive dysfunction in OEF/OIF Veterans with PTSD and history of mTBI. However, there may be equivocal relationships between injury variables and EF that warrant further study. Results provide groundwork for more fully understanding cognitive symptoms in OEF/OIF Veterans with PTSD and history of mTBI that can inform psychological and cognitive interventions in this population.
Penloglou, Giannis; Chatzidoukas, Christos; Kiparissides, Costas
2012-01-01
The microbial production of polyhydroxybutyrate (PHB) is a complex process in which the final quantity and quality of the PHB depend on a large number of process operating variables. Consequently, the design and optimal dynamic operation of a microbial process for the efficient production of PHB with tailor-made molecular properties is an extremely interesting problem. The present study investigates how key process operating variables (i.e., nutritional and aeration conditions) affect the biomass production rate and the PHB accumulation in the cells and its associated molecular weight distribution. A combined metabolic/polymerization/macroscopic modelling approach, relating the process performance and product quality with the process variables, was developed and validated using an extensive series of experiments and measurements. The model predicts the dynamic evolution of the biomass growth, the polymer accumulation, the consumption of carbon and nitrogen sources and the average molecular weights of the PHB in a bioreactor, under batch and fed-batch operating conditions. The proposed integrated model was used for the model-based optimization of the production of PHB with tailor-made molecular properties in Azohydromonas lata bacteria. The process optimization led to a high intracellular PHB accumulation (up to 0.95 g of PHB per g of DCW) and the production of different grades (i.e., different molecular weight distributions) of PHB. Copyright © 2011 Elsevier Inc. All rights reserved.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225) and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
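The patent abstract describes an optimizer fed by separate income and cost algorithms built on process models. A minimal sketch of that pattern follows, with surrogate model forms, prices and bounds that are purely illustrative (the actual chemical looping models are not given in the abstract):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative surrogate process models: outputs predicted from the
# manipulated input variables u (assumed linear forms, invented gains).
def process_outputs(u):
    return np.array([0.8 * u[0] + 0.1 * u[1],     # useful product rate
                     0.5 * u[1] - 0.05 * u[0]])   # costly by-product rate

def income(u):                 # revenue rises with useful output
    y = process_outputs(u)
    return 120.0 * y[0]

def cost(u):                   # cost of inputs plus by-product handling
    y = process_outputs(u)
    return 30.0 * u[0] + 45.0 * u[1] + 60.0 * max(y[1], 0.0)

# The optimizer searches for operating parameters maximizing profit
# (income minus cost) within operating bounds.
res = minimize(lambda u: -(income(u) - cost(u)), x0=[1.0, 1.0],
               bounds=[(0.0, 5.0), (0.0, 5.0)])
print(res.x, income(res.x) - cost(res.x))
```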
Knob manager (KM) operators guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1993-10-08
KM, Knob Manager, is a tool that enables the user to use the SUNDIALS knob box to adjust the settings of the control system. The following are some features of KM: dynamic knob assignments with a user-friendly interface; user-defined gain for each knob; graphical displays of the operating range and status of each assigned process variable; backup and restore of one or multiple process variables; and saving current settings to a file so they can be recalled from that file in the future.
Jiang, Canping; Flansburg, Lisa; Ghose, Sanchayita; Jorjorian, Paul; Shukla, Abhinav A
2010-12-15
The concept of design space has been taking root under the quality by design paradigm as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. This paper outlines the development of a design space for a hydrophobic interaction chromatography (HIC) process step. The design space included the impact of raw material lot-to-lot variability and variations in the feed stream from cell culture. A failure modes and effects analysis was employed as the basis for the process characterization exercise. During mapping of the process design space, the multi-dimensional combination of operational variables was studied to quantify the impact on process performance in terms of yield and product quality. Variability in resin hydrophobicity was found to have a significant influence on step yield and high-molecular weight aggregate clearance through the HIC step. A robust operating window was identified for this process step that enabled a higher step yield while ensuring acceptable product quality. © 2010 Wiley Periodicals, Inc.
Schoellhamer, D.H.
2002-01-01
Suspended sediment concentration (SSC) data from San Pablo Bay, California, were analyzed to compare the basin-scale effect of dredging and disposal of dredged material (dredging operations) and natural estuarine processes. The analysis used twelve 3-wk to 5-wk periods of mid-depth and near-bottom SSC data collected at Point San Pablo every 15 min from 1993-1998. Point San Pablo is within a tidal excursion of a dredged-material disposal site. The SSC data were compared to dredging volume, Julian day, and hydrodynamic and meteorological variables that could affect SSC. Kendall's τ, Spearman's ρ, and weighted (by the fraction of valid data in each period) Spearman's ρw correlation coefficients of the variables indicated which variables were significantly correlated with SSC. Wind-wave resuspension had the greatest effect on SSC. Median water-surface elevation was the primary factor affecting mid-depth SSC. Greater depths inhibit wind-wave resuspension of bottom sediment and indicate greater influence of less turbid water from down estuary. Seasonal variability in the supply of erodible sediment is the primary factor affecting near-bottom SSC. Natural physical processes in San Pablo Bay are more areally extensive, of equal or longer duration, and as frequent as dredging operations (when occurring), and they affect SSC at the tidal time scale. Natural processes control SSC at Point San Pablo even when dredging operations are occurring.
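For reference, the rank correlations named above are available directly in scipy; the weighted coefficient ρw can be formed as a weighted mean of per-period coefficients, which is one plausible reading of "weighted by the fraction of valid data in each period" (the paper's exact weighting is not given in the abstract). Synthetic data throughout:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ssc = rng.lognormal(size=200)   # suspended sediment concentration proxy
wind = 0.6 * np.log(ssc) + rng.normal(scale=0.5, size=200)  # candidate driver

tau, p_tau = stats.kendalltau(wind, ssc)
rho, p_rho = stats.spearmanr(wind, ssc)
print(f"Kendall tau = {tau:.2f} (p = {p_tau:.3g})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3g})")

# Weighted combination of per-period coefficients, weights = fraction
# of valid data in each period (illustrative values):
rhos = np.array([0.55, 0.40, 0.62])
w = np.array([0.9, 0.5, 0.8])
rho_w = np.sum(w * rhos) / np.sum(w)
print(f"weighted rho_w = {rho_w:.2f}")
```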
Dynamic Quantum Allocation and Swap-Time Variability in Time-Sharing Operating Systems.
ERIC Educational Resources Information Center
Bhat, U. Narayan; Nance, Richard E.
The effects of dynamic quantum allocation and swap-time variability on central processing unit (CPU) behavior are investigated using a model that allows both quantum length and swap-time to be state-dependent random variables. Effective CPU utilization is defined to be the proportion of a CPU busy period that is devoted to program processing, i.e.…
Process mapping as a framework for performance improvement in emergency general surgery.
DeGirolamo, Kristin; D'Souza, Karan; Hall, William; Joos, Emilie; Garraway, Naisan; Sing, Chad Kim; McLaughlin, Patrick; Hameed, Morad
2017-12-01
Emergency general surgery conditions are often thought of as being too acute for the development of standardized approaches to quality improvement. However, process mapping, a concept that has been applied extensively in manufacturing quality improvement, is now being used in health care. The objective of this study was to create process maps for small bowel obstruction in an effort to identify potential areas for quality improvement. We used the American College of Surgeons Emergency General Surgery Quality Improvement Program pilot database to identify patients who received nonoperative or operative management of small bowel obstruction between March 2015 and March 2016. This database, patient charts and electronic health records were used to create process maps from the time of presentation to discharge. Eighty-eight patients with small bowel obstruction (33 operative; 55 nonoperative) were identified. Patients who received surgery had a complication rate of 32%. The processes of care from the time of presentation to the time of follow-up were highly elaborate and variable in terms of duration; however, the sequences of care were found to be consistent. We used data visualization strategies to identify bottlenecks in care, and they showed substantial variability in terms of operating room access. Variability in the operative care of small bowel obstruction is high and represents an important improvement opportunity in general surgery. Process mapping can identify common themes, even in acute care, and suggest specific performance improvement measures.
Process mapping as a framework for performance improvement in emergency general surgery.
DeGirolamo, Kristin; D'Souza, Karan; Hall, William; Joos, Emilie; Garraway, Naisan; Sing, Chad Kim; McLaughlin, Patrick; Hameed, Morad
2018-02-01
Emergency general surgery conditions are often thought of as being too acute for the development of standardized approaches to quality improvement. However, process mapping, a concept that has been applied extensively in manufacturing quality improvement, is now being used in health care. The objective of this study was to create process maps for small bowel obstruction in an effort to identify potential areas for quality improvement. We used the American College of Surgeons Emergency General Surgery Quality Improvement Program pilot database to identify patients who received nonoperative or operative management of small bowel obstruction between March 2015 and March 2016. This database, patient charts and electronic health records were used to create process maps from the time of presentation to discharge. Eighty-eight patients with small bowel obstruction (33 operative; 55 nonoperative) were identified. Patients who received surgery had a complication rate of 32%. The processes of care from the time of presentation to the time of follow-up were highly elaborate and variable in terms of duration; however, the sequences of care were found to be consistent. We used data visualization strategies to identify bottlenecks in care, and they showed substantial variability in terms of operating room access. Variability in the operative care of small bowel obstruction is high and represents an important improvement opportunity in general surgery. Process mapping can identify common themes, even in acute care, and suggest specific performance improvement measures.
Nguyen, Dinh Duc; Yoon, Yong Soo; Bui, Xuan Thanh; Kim, Sung Su; Chang, Soon Woong; Guo, Wenshan; Ngo, Huu Hao
2017-11-01
Performance of an electrocoagulation (EC) process in batch and continuous operating modes was thoroughly investigated and evaluated for enhancing wastewater phosphorus removal under various operating conditions, individually or combined with initial phosphorus concentration, wastewater conductivity, current density, and electrolysis times. The results revealed excellent phosphorus removal (72.7-100%) for both processes within 3-6 min of electrolysis, with relatively low energy requirements, i.e., less than 0.5 kWh/m³ of treated wastewater. However, the removal efficiency of phosphorus in the continuous EC operation mode was better than that in batch mode within the scope of the study. Additionally, the rate and efficiency of phosphorus removal strongly depended on operational parameters, including wastewater conductivity, initial phosphorus concentration, current density, and electrolysis time. Based on experimental data, a statistical model using response surface methodology (RSM) (multiple-factor optimization) was also established to provide further insights and accurately describe the interactive relationship between the process variables, thus optimizing the EC process performance. The EC process using iron electrodes is promising for improving wastewater phosphorus removal efficiency, and RSM can be a sustainable tool for predicting the performance of the EC process and explaining the influence of the process variables.
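A response surface model of the kind RSM fits is a full quadratic polynomial in coded factors, estimated by least squares. A minimal sketch with invented design points and removal percentages (the study's actual design and coefficients are not given in the abstract):

```python
import numpy as np

# Coded factors: x1 = current density, x2 = electrolysis time,
# x3 = conductivity. Design points and responses are illustrative.
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [0, 0, -1], [0, 0, 1], [0, 0, 0], [0, 0, 0]])
y = np.array([72.7, 88.0, 90.5, 99.0, 80.0, 95.0, 93.0, 92.5])  # % removal

def design_matrix(X):
    """Full quadratic RSM model: intercept, linear, interaction, square."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(np.round(beta, 2))        # fitted response-surface coefficients

# Predict removal at a new, untested operating point.
x_new = np.array([[0.5, 0.5, 0.0]])
print(design_matrix(x_new) @ beta)
```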
A new fractional operator of variable order: Application in the description of anomalous diffusion
NASA Astrophysics Data System (ADS)
Yang, Xiao-Jun; Machado, J. A. Tenreiro
2017-09-01
In this paper, a new fractional operator of variable order based on a monotonically increasing function is proposed in the sense of the Caputo type. Its properties in terms of the Laplace and Fourier transforms are analyzed, and results for anomalous diffusion equations of variable order are discussed. The new formulation is efficient in modeling a class of concentrations in the complex transport process.
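For orientation, one representative form of a Caputo-type operator of variable order taken with respect to a monotonically increasing function g(t) is shown below; this is a plausible shape consistent with the abstract, not a verbatim quotation of the paper's definition:

```latex
% Variable-order Caputo-type derivative with respect to a monotonically
% increasing function g(t) (representative form, 0 < alpha(t) < 1):
\[
\left({}^{C}\!D^{\alpha(t)}_{a^{+},\,g}\,f\right)(t)
  = \frac{1}{\Gamma\bigl(1-\alpha(t)\bigr)}
    \int_{a}^{t} \bigl[g(t)-g(\tau)\bigr]^{-\alpha(t)}\, f'(\tau)\, d\tau .
\]
% Setting g(t) = t recovers the classical variable-order Caputo derivative.
```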
Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis
Hong, Y.-S.T.; Rosen, Michael R.; Bhamidimarri, R.
2003-01-01
This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable aspects of the real WTP, diagnosis of the WTP is still difficult in practice. The application of intelligent techniques, which can analyse the multi-dimensional process data using a sophisticated visualisation technique, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Map (KSOFM) neural network is applied to analyse the multi-dimensional process data, and to diagnose the inter-relationships of the process variables in a real activated-sludge WTP. By using component planes, some detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as the global information, are discovered. The operating condition and the inter-relationships among the process variables in the WTP have been diagnosed and extracted from the information obtained by clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysing and diagnosing tool to understand the system behaviour and to extract knowledge contained in multi-dimensional data of a large-scale WTP. © 2003 Elsevier Science Ltd. All rights reserved.
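A Kohonen self-organising feature map is small enough to sketch directly in numpy: each grid unit holds a weight vector, the best-matching unit is pulled toward each sample together with its neighbours, and each feature's slice of the trained weights is a "component plane". Grid size, learning rates and the synthetic data below are illustrative:

```python
import numpy as np

def train_som(X, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0):
    """Kohonen self-organising feature map on multi-dimensional
    process data; returns a (rows, cols, n_features) weight grid."""
    rng = np.random.default_rng(0)
    W = rng.normal(size=(rows, cols, X.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    n_iter = epochs * len(X)
    for it in range(n_iter):
        x = X[rng.integers(len(X))]
        # best-matching unit (BMU) for this sample
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)),
                               (rows, cols))
        lr = lr0 * np.exp(-it / n_iter)
        sigma = sigma0 * np.exp(-it / n_iter)
        # Gaussian neighbourhood pulls units near the BMU toward x
        d2 = ((grid - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        W += lr * h * (x - W)
    return W

# Inspecting the component planes side by side reveals local
# relationships between process variables (e.g. DO, MLSS, NH4).
X = np.random.default_rng(1).normal(size=(500, 6))
W = train_som(X)
component_plane_0 = W[:, :, 0]
```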
Process for Operating a Dual-Mode Combustor
NASA Technical Reports Server (NTRS)
Trefny, Charles J. (Inventor); Dippold, Vance F. (Inventor)
2017-01-01
A new dual-mode ramjet combustor used for operation over a wide flight Mach number range is described. Subsonic combustion mode is usable to lower flight Mach numbers than current dual-mode scramjets. High speed mode is characterized by supersonic combustion in a free-jet that traverses the subsonic combustion chamber to a variable nozzle throat. Although a variable combustor exit aperture is required, the need for fuel staging to accommodate the combustion process is eliminated. Local heating from shock-boundary-layer interactions on combustor walls is also eliminated.
Variable Order and Distributed Order Fractional Operators
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
2002-01-01
Many physical processes appear to exhibit fractional order behavior that may vary with time or space. The continuum of order in the fractional calculus allows the order of the fractional operator to be considered as a variable. This paper develops the concept of variable and distributed order fractional operators. Definitions based on the Riemann-Liouville definitions are introduced and the behavior of the operators is studied. Several time domain definitions that assign different arguments to the order q in the Riemann-Liouville definition are introduced. For each of these definitions various characteristics are determined. These include: time invariance of the operator, operator initialization, physical realization, linearity, operational transforms, and memory characteristics of the defining kernels. A measure (m2) for memory retentiveness of the order history is introduced. A generalized linear argument for the order q allows the concept of "tailored" variable order fractional operators whose memory may be chosen for a particular application. Memory retentiveness (m2) and order dynamic behavior are investigated and applications are shown. The concept of distributed order operators, where the order of the time based operator depends on an additional independent (spatial) variable, is also forwarded. Several definitions and their Laplace transforms are developed, analysis methods with these operators are demonstrated, and examples shown. Finally, operators of multivariable and distributed order are defined and their various applications are outlined.
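For readers unfamiliar with the notation, the standard Riemann-Liouville shapes that variable-order and distributed-order operators generalize are shown below (one common argument choice for q; the paper studies several):

```latex
% Variable-order Riemann-Liouville derivative (0 < q(t) < 1):
\[
{}_{0}D_{t}^{q(t)} f(t)
  = \frac{1}{\Gamma\bigl(1-q(t)\bigr)} \frac{d}{dt}
    \int_{0}^{t} (t-\tau)^{-q(t)} f(\tau)\, d\tau ,
\]
% Distributed-order operator, with phi(q) weighting each order q
% (q may further depend on a spatial variable):
\[
D^{\,\phi} f(t)
  = \int_{q_{\min}}^{q_{\max}} \phi(q)\, {}_{0}D_{t}^{q} f(t)\, dq .
\]
```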
Murtazina, E P
2015-01-01
The study of how humans read instructions relevant to subsequent task performance bears on the systemic mechanisms of learning and memory, and also touches on fundamental psychophysiological issues: focused attention, comprehension of the information provided, and the formation of social motivation in human activity. Analysis of heart rate variability during instruction reading, compared with the initial state of operational rest, showed that this stage of activity causes pronounced emotional stress, manifested in an increased heart rate, decreased variability, and pronounced changes in the spectral characteristics of the heart rhythm. In addition, heart rate variability in the state of operational rest before testing, and during instruction reading, correlated positively with the duration of instruction reading and inversely with effectiveness and with the subjects' resistance to repeated errors during subsequent task performance. Pronounced gender differences were found in the relationships between changes in heart rate variability during instruction reading and subsequent performance indicators on a visual-motor test.
Chung, Ji-Woo; Kim, Kyung-Min; Yoon, Tae-Ung; Kim, Seung-Ik; Jung, Tae-Sung; Han, Sang-Sup; Bae, Youn-Sang
2017-12-22
A novel power partial-discard (PPD) strategy was developed as a variant of the partial-discard (PD) operation to further improve the separation performance of the simulated moving bed (SMB) process. The PPD operation varied the flow rates of discard streams by introducing a new variable, the discard amount (DA) as well as varying the reported variable, discard length (DL), while the conventional PD used fixed discard flow rates. The PPD operations showed significantly improved purities in spite of losses in recoveries. Remarkably, the PPD operation could provide more enhanced purity for a given recovery or more enhanced recovery for a given purity than the PD operation. The two variables, DA and DL, in the PPD operation played a key role in achieving the desired purity and recovery. The PPD operations will be useful for attaining high-purity products with reasonable recoveries. Copyright © 2017 Elsevier B.V. All rights reserved.
Barbagallo, Simone; Corradi, Luca; de Ville de Goyet, Jean; Iannucci, Marina; Porro, Ivan; Rosso, Nicola; Tanfani, Elena; Testi, Angela
2015-05-17
The Operating Room (OR) is a key resource of all major hospitals, but it also accounts for up to 40% of resource costs. Improving cost effectiveness while maintaining quality of care is a universal objective. These goals imply optimizing the planning and scheduling of the activities involved, which is highly challenging due to the inherently variable and unpredictable nature of surgery. A Business Process Modeling Notation (BPMN 2.0) was used to represent the "OR Process" (defined as the sequence of all elementary steps between "patient ready for surgery" and "patient operated upon") as a general pathway ("path"). The path was then standardized as much as possible while keeping all of the key elements needed to address the other steps of planning and the wide, inherent variability in patient specificity. The path was used to schedule OR activity, room-by-room and day-by-day, feeding the process from a "waiting list database" and using a mathematical optimization model with the objective of producing an optimized plan. The OR process was defined with special attention paid to flows, timing and resource involvement. Standardization involved a dynamic operation and defined an expected operating time for each operation. The optimization model was implemented and tested on real clinical data. Comparison of the results with the real data shows that using the optimization model allows about 30% more patients to be scheduled than in actual practice, as well as better exploiting OR efficiency, increasing the average operating room utilization rate by up to 20%. The optimization of OR activity planning is essential in order to manage the hospital's waiting list. Optimal planning is facilitated by defining the operation as a standard pathway where all variables are taken into account. By allowing precise scheduling, it feeds the process of planning and, further upstream, the management of the waiting list in an interactive and bi-directional dynamic process.
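The optimization model itself is not specified in the abstract; a toy sketch of the kind of assignment model commonly used for OR planning, binary assignment of waiting-list patients to sessions under capacity constraints using standardized expected operating times, is shown below with the PuLP library. All names, durations and weights are invented.

```python
import pulp

# Toy instance: schedule waiting-list patients into OR sessions.
# Durations are standardized expected operating times (minutes).
durations = {"p1": 120, "p2": 90, "p3": 200, "p4": 60, "p5": 150}
priority  = {"p1": 3, "p2": 1, "p3": 5, "p4": 2, "p5": 4}  # waiting-list weight
sessions  = {"roomA_mon": 480, "roomB_mon": 360}           # capacity (minutes)

prob = pulp.LpProblem("or_planning", pulp.LpMaximize)
x = pulp.LpVariable.dicts("assign",
                          [(p, s) for p in durations for s in sessions],
                          cat="Binary")

# Objective: schedule as much priority-weighted surgery as possible.
prob += pulp.lpSum(priority[p] * x[(p, s)]
                   for p in durations for s in sessions)

for p in durations:   # each patient is operated on at most once
    prob += pulp.lpSum(x[(p, s)] for s in sessions) <= 1
for s in sessions:    # session capacity cannot be exceeded
    prob += pulp.lpSum(durations[p] * x[(p, s)] for p in durations) <= sessions[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (p, s), v in x.items():
    if v.value() == 1:
        print(p, "->", s)
```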
Natural language processing to ascertain two key variables from operative reports in ophthalmology.
Liu, Liyan; Shorstein, Neal H; Amsden, Laura B; Herrinton, Lisa J
2017-04-01
Antibiotic prophylaxis is critical to ophthalmology and other surgical specialties. We performed natural language processing (NLP) of 743 838 operative notes recorded for 315 246 surgeries to ascertain two variables needed to study the comparative effectiveness of antibiotic prophylaxis in cataract surgery. The first key variable was an exposure variable, intracameral antibiotic injection. The second was an intraoperative complication, posterior capsular rupture (PCR), which functioned as a potential confounder. To help other researchers use NLP in their settings, we describe our NLP protocol and lessons learned. For each of the two variables, we used SAS Text Miner and other SAS text-processing modules with a training set of 10 000 (1.3%) operative notes to develop a lexicon. The lexica identified misspellings, abbreviations, and negations, and linked words into concepts (e.g. "antibiotic" linked with "injection"). We confirmed the NLP tools by iteratively obtaining random samples of 2000 (0.3%) notes, with replacement. The NLP tools identified approximately 60 000 intracameral antibiotic injections and 3500 cases of PCR. The positive and negative predictive values for intracameral antibiotic injection exceeded 99%. For the intraoperative complication, they exceeded 94%. NLP was a valid and feasible method for obtaining critical variables needed for a research study of surgical safety. These NLP tools were intended for use in the study sample. Use with external datasets or future datasets in our own setting would require further testing. Copyright © 2017 John Wiley & Sons, Ltd.
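The study used SAS Text Miner; as a language-neutral illustration of the lexicon idea (concept linking plus negation handling), here is a small regex sketch. The patterns and word lists are assumptions for illustration, not the authors' lexicon:

```python
import re

# Concept patterns: link trigger words into a concept, allowing a few
# intervening words (illustrative vocabularies only).
INJECTION = re.compile(r"\b(intracameral|intra-cameral)\s+"
                       r"(antibiotic|moxifloxacin|cefuroxime)[\w\s]*?"
                       r"(injection|injected)", re.I)
PCR = re.compile(r"\b(posterior\s+capsul\w+\s+(rupture|tear)|\bPCR\b)", re.I)
# A mention is negated if a cue appears shortly before it.
NEGATION = re.compile(r"\b(no|without|denies|not)\b[^.]{0,40}$", re.I)

def find_concept(note, pattern):
    m = pattern.search(note)
    if not m:
        return False
    return not NEGATION.search(note[: m.start()])

note = ("The capsule remained intact with no posterior capsular rupture. "
        "Intracameral antibiotic injection of moxifloxacin was given.")
print(find_concept(note, INJECTION))  # True
print(find_concept(note, PCR))        # False (negated mention)
```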
Implementation of in-line infrared monitor in full-scale anaerobic digestion process.
Spanjers, H; Bouvier, J C; Steenweg, P; Bisschops, I; van Gils, W; Versprille, B
2006-01-01
During start up but also during normal operation, anaerobic reactor systems should be run and monitored carefully to secure trouble-free operation, because the process is vulnerable to disturbances such as temporary overloading, biomass wash out and influent toxicity. The present method of monitoring is usually by manual sampling and subsequent laboratory analysis. Data collection, processing and feedback to system operation are manual and ad hoc, and involve high-level operator skills and attention. As a result, systems tend to be designed at relatively conservative design loading rates resulting in significant over-sizing of reactors and thus increased systems cost. It is therefore desirable to have on-line and continuous access to performance data on influent and effluent quality. Relevant variables to indicate process performance include VFA, COD, alkalinity, sulphate, and, if aerobic post-treatment is considered, total nitrogen, ammonia and nitrate. Recently, mid-IR spectrometry was demonstrated on a pilot scale to be suitable for in-line simultaneous measurement of these variables. This paper describes a full-scale application of the technique to test its ability to monitor continuously and without human intervention the above variables simultaneously in two process streams. For VFA, COD, sulphate, ammonium and TKN good agreement was obtained between in-line and manual measurements. During a period of six months the in-line measurements had to be interrupted several times because of clogging. It appeared that the sample pre-treatment unit was not able to cope with high solids concentrations all the time.
Subwavelength grating enabled on-chip ultra-compact optical true time delay line
Wang, Junjia; Ashrafi, Reza; Adams, Rhys; Glesk, Ivan; Gasulla, Ivana; Capmany, José; Chen, Lawrence R.
2016-01-01
An optical true time delay line (OTTDL) is a basic photonic building block that enables many microwave photonic and optical processing operations. The conventional design for an integrated OTTDL that is based on spatial diversity uses a length-variable waveguide array to create the optical time delays, which can introduce complexities in the integrated circuit design. Here we report the first ever demonstration of an integrated index-variable OTTDL that exploits spatial diversity in an equal length waveguide array. The approach uses subwavelength grating waveguides in silicon-on-insulator (SOI), which enables the realization of OTTDLs having a simple geometry and that occupy a compact chip area. Moreover, compared to conventional wavelength-variable delay lines with a few THz operation bandwidth, our index-variable OTTDL has an extremely broad operation bandwidth practically exceeding several tens of THz, which supports operation for various input optical signals with broad ranges of central wavelength and bandwidth. PMID:27457024
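The index-variable principle can be summarized with the textbook group-delay relation (standard waveguide optics, not quoted from the paper): for a fixed waveguide length L, the delay of channel i is set by its group index, so subwavelength gratings that engineer n_g per channel replace the length-variable array.

```latex
% Group delay of one waveguide channel of fixed length L, and the
% differential delay between two channels:
\[
\tau_i = \frac{n_{g,i}\, L}{c},
\qquad
\Delta\tau = \frac{\Delta n_g\, L}{c} .
\]
% Length-variable OTTDLs fix n_g and vary L; the index-variable OTTDL
% reported here fixes L and engineers n_{g,i} per channel with
% subwavelength gratings.
```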
NASA Astrophysics Data System (ADS)
Mishra, C.; Samantaray, A. K.; Chakraborty, G.
2016-05-01
Rolling element bearings are widely used in rotating machines and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low speed shaft rotation, the available fault diagnosis methods cannot be applied for bearing fault diagnosis with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through usual measurement and signal processing techniques. A typical example is a bearing in a heavy rolling mill with variable load and disturbance from other sources. In extremely slow speed operation, variation in speed due to speed controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, rolling element faults in rolling element bearings cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle synchronous averaged signal to diagnose faults in rolling element bearings operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing, and exposes only the fault frequencies and their harmonics in the spectrum. We use experimental data…
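A sketch of the proposed four-stage chain, assuming the vibration signal has already been resampled at uniform shaft-angle increments (which is what removes the effect of speed variation); block length, wavelet choice and thresholding rule are illustrative, not the authors' exact settings:

```python
import numpy as np
import pywt
from scipy.signal import hilbert

def envelope_order_spectrum(x_angle, samples_per_rev, revs_per_block=8,
                            wavelet="db8"):
    """Four-stage chain: angle-synchronous averaging, wavelet
    de-noising, envelope extraction, order spectrum.

    x_angle: vibration samples at uniform shaft-angle increments.
    """
    # 1) angle-synchronous average over blocks of whole revolutions
    blk = revs_per_block * samples_per_rev
    n_blk = len(x_angle) // blk
    avg = x_angle[: n_blk * blk].reshape(n_blk, blk).mean(axis=0)

    # 2) wavelet de-noising by soft-thresholding detail coefficients
    coeffs = pywt.wavedec(avg, wavelet)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(avg)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft")
                            for c in coeffs[1:]]
    den = pywt.waverec(coeffs, wavelet)[: len(avg)]

    # 3) envelope via the analytic signal
    env = np.abs(hilbert(den - den.mean()))

    # 4) order spectrum: FFT over angle yields orders (events per rev),
    #    where bearing fault orders and their harmonics appear
    spec = np.abs(np.fft.rfft(env - env.mean()))
    orders = np.fft.rfftfreq(len(env), d=1.0 / samples_per_rev)
    return orders, spec
```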
NASA Astrophysics Data System (ADS)
Kovtun, V. S.
2012-12-01
Traditionally, management of propellant fuel consumption on board a spacecraft is associated only with the operation of jet-propulsion engines (JPE), the actuator devices of motion control systems (MCS). The efficiency of propellant fuel consumption, however, depends not only on the operation of the MCS but also, to one extent or another, on all systems functioning on board the spacecraft and on the processes that occur in them. Management of propellant fuel consumption by the JPEs can therefore be considered a constituent part of the control of the complex process of spacecraft flight.
Bonaretti, Serena; Vilayphiou, Nicolas; Chan, Caroline Mai; Yu, Andrew; Nishiyama, Kyle; Liu, Danmei; Boutroy, Stephanie; Ghasem-Zadeh, Ali; Boyd, Steven K.; Chapurlat, Roland; McKay, Heather; Shane, Elizabeth; Bouxsein, Mary L.; Black, Dennis M.; Majumdar, Sharmila; Orwoll, Eric S.; Lang, Thomas F.; Khosla, Sundeep; Burghardt, Andrew J.
2017-01-01
Introduction HR-pQCT is increasingly used to assess bone quality, fracture risk and anti-fracture interventions. The contribution of the operator has not been adequately accounted for in measurement precision. Operators acquire a 2D projection (“scout view image”) and define the region to be scanned by positioning a “reference line” on a standard anatomical landmark. In this study, we (i) evaluated the contribution of positioning variability to in vivo measurement precision, (ii) measured intra- and inter-operator positioning variability, and (iii) tested if custom training software led to superior reproducibility in new operators compared to experienced operators. Methods To evaluate the operator's contribution to in vivo measurement precision we compared precision errors calculated in 64 co-registered and non-co-registered scan-rescan images. To quantify operator variability, we developed software that simulates the positioning process of the scanner’s software. Eight experienced operators positioned reference lines on scout view images designed to test intra- and inter-operator reproducibility. Finally, we developed modules for training and evaluation of reference line positioning. We enrolled 6 new operators to participate in a common training, followed by the same reproducibility experiments performed by the experienced group. Results In vivo precision errors were up to three-fold greater (Tt.BMD and Ct.Th) when variability in scan positioning was included. Inter-operator precision errors were significantly greater than short-term intra-operator precision (p<0.001). New trained operators achieved comparable intra-operator reproducibility to experienced operators, and lower inter-operator reproducibility (p<0.001). Precision errors were significantly greater for the radius than for the tibia. Conclusion Operator reference line positioning contributes significantly to in vivo measurement precision and is significantly greater for multi-operator datasets. Inter-operator variability can be significantly reduced using a systematic training platform, now available online (http://webapps.radiology.ucsf.edu/refline/). PMID:27475931
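Precision errors in studies of this kind are conventionally reported as root-mean-square coefficients of variation over repeat scans; the standard estimator is shown below for orientation (the paper's exact estimator is not quoted in the abstract):

```latex
% RMS coefficient of variation over m subjects, each with n_j repeat
% scans (n_j = 2 for scan-rescan designs):
\[
\mathrm{CV}_{\mathrm{RMS}}
  = \sqrt{\frac{1}{m}\sum_{j=1}^{m}
      \left(\frac{\mathrm{SD}_j}{\bar{x}_j}\right)^{2}},
\qquad
\mathrm{SD}_j
  = \sqrt{\frac{\sum_{k=1}^{n_j}\bigl(x_{jk}-\bar{x}_j\bigr)^{2}}{n_j-1}} .
\]
```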
Babamoradi, Hamid; van den Berg, Frans; Rinnan, Åsmund
2016-02-18
In Multivariate Statistical Process Control, when a fault is expected or detected in the process, contribution plots are essential for operators and optimization engineers in identifying those process variables that were affected by or might be the cause of the fault. The traditional way of interpreting a contribution plot is to examine the largest contributing process variables as the most probable faulty ones. This might result in false readings purely due to the differences in natural variation, measurement uncertainties, etc. It is more reasonable to compare variable contributions for new process runs with historical results achieved under Normal Operating Conditions, where confidence limits for contribution plots estimated from training data are used to judge new production runs. Asymptotic methods cannot provide confidence limits for contribution plots, leaving re-sampling methods as the only option. We suggest bootstrap re-sampling to build confidence limits for all contribution plots in online PCA-based MSPC. The new strategy to estimate CLs is compared to the previously reported CLs for contribution plots. An industrial batch process dataset was used to illustrate the concepts. Copyright © 2016 Elsevier B.V. All rights reserved.
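One simple bootstrap scheme for such limits: refit the PCA model on resampled NOC runs, record the largest per-variable contribution each refit assigns to the NOC data, and take a high percentile as the limit. The sketch below uses squared-prediction-error (SPE) contributions; the paper's exact procedure may differ, and all data are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

def contributions(X, pca):
    """Per-variable contributions to the squared prediction error
    (SPE) of each sample under a PCA model."""
    resid = X - pca.inverse_transform(pca.transform(X))
    return resid ** 2

def bootstrap_limits(X_noc, n_comp=3, n_boot=1000, q=99, seed=0):
    """Bootstrap confidence limits for SPE contribution plots from
    Normal Operating Condition (NOC) training data."""
    rng = np.random.default_rng(seed)
    n = len(X_noc)
    maxima = []
    for _ in range(n_boot):
        Xb = X_noc[rng.integers(n, size=n)]        # resample NOC runs
        pca = PCA(n_components=n_comp).fit(Xb)
        maxima.append(contributions(X_noc, pca).max(axis=0))
    return np.percentile(np.array(maxima), q, axis=0)  # per-variable CLs

# New runs whose contribution exceeds its variable-wise CL are flagged,
# instead of simply reading off the largest bars in the plot.
X_noc = np.random.default_rng(1).normal(size=(100, 12))
limits = bootstrap_limits(X_noc)
pca = PCA(n_components=3).fit(X_noc)
x_new = np.random.default_rng(2).normal(size=(1, 12)) * 2.0
print(contributions(x_new, pca)[0] > limits)
```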
76 FR 30246 - Loan Policies and Operations; Loan Purchases From FDIC
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-25
... source of credit and liquidity to borrowers whose operations are financed with System eligible... sound operation of System business. \\6\\ See NationsBank of North Carolina, N.A. v. Variable Annuity Life... provided restructuring financing, because the restructuring process takes time. One System institution...
Operator agency in process intervention: tampering versus application of tacit knowledge
NASA Astrophysics Data System (ADS)
Van Gestel, P.; Pons, D. J.; Pulakanam, V.
2015-09-01
Statistical process control (SPC) theory takes a negative view of adjustment of process settings, which is termed tampering. In contrast, quality and lean programmes actively encourage operators to intervene and exercise personal agency in the improvement of production outcomes. This creates a conflict that requires operator judgement: How does one differentiate between unnecessary tampering and needful intervention? A further difficulty is that operators apply tacit knowledge to such judgements. There is a need to determine where in a given production process the operators are applying tacit knowledge, and whether this is hindering or aiding quality outcomes. The work involved the conjoint application of systems engineering, statistics, and knowledge management principles, in the context of a case study. Systems engineering was used to create a functional model of a real plant. Actual plant data were analysed with the statistical methods of ANOVA, feature selection, and link analysis. This identified the variables to which the output quality was most sensitive. These key variables were mapped back to the functional model. Fieldwork was then directed to those areas to prospect for operator judgement activities. A natural conversational approach was used to determine where and how operators were applying judgement. This contrasts with the interrogative approach of conventional knowledge management. Data are presented for a case study of a meat rendering plant. The results identify specific areas where operators' tacit knowledge and mental models contribute to quality outcomes and untangle the motivations behind their agency. Also evident is how novice and expert operators apply their knowledge differently. Novices were focussed on meeting throughput objectives, and their incomplete understanding of the plant characteristics led them to inadvertently sacrifice quality in the pursuit of productivity in certain situations. Operators' responses to the plant are affected by their individual mental models of the plant, which differ between operators and have variable validity. Their behaviour is also affected by differing interpretations of how their personal agency should be applied to the achievement of production objectives. The methodology developed here is an integration of systems engineering, statistical analysis, and knowledge management. It shows how to determine where in a given production process the operator intervention is occurring, how it affects quality outcomes, and what tacit knowledge operators are using. It thereby assists the continuous quality improvement processes in a different way to SPC. A second contribution is the provision of a novel methodology for knowledge management, one that circumvents the usual codification barriers to knowledge management.
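The statistical step, ranking candidate process variables by their influence on output quality, can be illustrated with ANOVA-style F scores and mutual information from scikit-learn; the plant data below are synthetic:

```python
import numpy as np
from sklearn.feature_selection import f_regression, mutual_info_regression

# Illustrative plant log: columns are candidate process variables,
# y is the output quality measure (all data synthetic).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))
y = 2.0 * X[:, 2] - 1.5 * X[:, 5] + rng.normal(scale=0.5, size=400)

F, p = f_regression(X, y)                   # ANOVA-style F scores
mi = mutual_info_regression(X, y, random_state=0)

ranking = np.argsort(F)[::-1]
for i in ranking:
    print(f"var_{i}: F = {F[i]:8.1f}  p = {p[i]:.2g}  MI = {mi[i]:.2f}")
# The top-ranked variables are mapped back onto the functional model to
# target fieldwork at the plant areas where operator judgement matters.
```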
NASA Astrophysics Data System (ADS)
Lindstrom, Erik Vilhelm Mathias
Gasification of black liquor could drastically increase the flexibility and improve the profit potential of a mature industry. The completed work focused on the economics and benefits of its implementation, utilizing laboratory pulping experiments and process simulation. The separation of sodium and sulfur achieved through gasification of recovered black liquor can be utilized in processes like modified continuous cooking, split sulfidity and green liquor pretreatment pulping, and polysulfide-anthraquinone (PSAQ) pulping, to improve pulp yield and properties. Laboratory pulping protocols were developed for these modified pulping technologies and different process options evaluated. The process simulation work around black liquor gasification (BLG) led to the development of a WinGEMS module for the low-temperature MTCI steam reforming process, and to case studies comparing a simulated conventional kraft process with different process options built around the implementation of a BLG unit operation in the kraft recovery cycle. Pulp yield increases of 1-3 percentage points with improved product quality, and the potential for capital and operating cost savings relative to the conventional kraft process, have been demonstrated. Process simulation has shown that the net variable operating cost for a pulping process using BLGCC is highly dependent on the cost of lime kiln fuel and the selling price of green power to the grid. Under the assumptions taken in the performed case study, the BLGCC process combined with split sulfidity or PSAQ pulping operations had a net variable operating cost 2-4% greater than the kraft reference. The sales price of power to the grid is the most significant cost factor. If a sales price increase to 6 ¢/kWh for green power could be achieved, cost savings of about $40/ODtP could be realized in all investigated BLG processes. Other alternatives to improve the process economics around BLG would be to modify or eliminate the lime kiln unit operations, utilizing high-sulfidity green liquor pretreatment, PSAQ with auto-causticization, or converting the process to mini-sulfide sulfite-AQ.
The application of vacuum redistillation of patchouli oil to improve patchouli alcohol compound
NASA Astrophysics Data System (ADS)
Asnawi, T. M.; Alam, P. N.; Husin, H.; Zaki, M.
2018-04-01
Patchouli oil produced by traditional distillation of patchouli leaves and stems by farmers in Aceh still has a low patchouli alcohol content. In order to increase the patchouli alcohol concentration, a vacuum redistillation process using a packed column was introduced. This research was conducted to fractionate the terpene (alpha-copaene) from the oxygenated hydrocarbon (patchouli alcohol) compound. The operating conditions comprised one fixed and three varied parameters. The fixed parameter was the 30 cm tall distillation packed column, packed with raschig rings of 8 mm x 8 mm. The varied parameters were operating temperature (130 °C and 140 °C), vacuum pressure (143.61 mbar, 121.60 mbar and 88.59 mbar) and operation time (2 hours, 3 hours and 5 hours). The total number of treatments applied in this work was 3 x 3 x 3, or 27 treatments. The patchouli oil used in this research was obtained from Desa Teladan-Lembah Seulawah, Aceh Province. The initial patchouli alcohol content, analyzed by GC-MS, was 16.02% before treatment. After the vacuum redistillation treatment, the patchouli alcohol content increased to 34.67%. Physico-chemical tests of the patchouli oil after vacuum redistillation comply with the SNI 06-2385-2006 standard.
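A small sketch of enumerating the full-factorial treatment matrix described above may clarify the design. Note the abstract counts 3 x 3 x 3 = 27 treatments while listing only two temperatures; a third temperature level (135 °C) is assumed here purely to make the enumeration concrete.

from itertools import product

temperatures_c = [130, 135, 140]      # 135 °C is a hypothetical third level
pressures_mbar = [143.61, 121.60, 88.59]
times_h = [2, 3, 5]

treatments = list(product(temperatures_c, pressures_mbar, times_h))
assert len(treatments) == 27
for i, (t, p, h) in enumerate(treatments, 1):
    print(f"run {i:2d}: {t} °C, {p} mbar, {h} h")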
Hybrid performance measurement of a business process outsourcing - A Malaysian company perspective
NASA Astrophysics Data System (ADS)
Oluyinka, Oludapo Samson; Tamyez, Puteri Fadzline; Kie, Cheng Jack; Freida, Ayodele Ozavize
2017-05-01
It is no longer news that the customer-perceived value of products and services is greatly influenced by their psychological and social advantages. To cope with rising operational costs and growing demands on response time, quality and innovative capability, many companies convert fixed operational costs into variable costs through outsourcing. The researcher therefore explored different underlying outsourcing theories and inferred that these theories are essential to performance improvement. In this study, the researcher evaluates the performance of a business process outsourcing company by a combination of lean and agile methods. To test the hypotheses, we analyse the different sources of variability that a business process company faces and how lean and agile have been used in other industries to address such variability, and discuss the results using predictive multiple regression analysis on data collected from companies in Malaysia. The findings revealed that, while each method has its own advantages, a business process outsourcing company could achieve up to an 87% increase in performance level by developing a strategy focused on an appropriate mixture of lean and agile improvement methods. Secondly, this study shows that performance indicators could be better evaluated with the non-metric variables of the agile method. Thirdly, this study also shows that business process outsourcing companies could perform better when they concentrate on strengthening the internal process integration of employees.
Science--A Process Approach, Product Development Report No. 8.
ERIC Educational Resources Information Center
Sanderson, Barbara A.; Kratochvil, Daniel W.
Science - A Process Approach, a science program for grades kindergarten through sixth, mainly focuses on scientific processes: observing, classifying, using numbers, measuring, space/time relationships, communicating, predicting, inferring, defining operationally, formulating hypotheses, interpreting data, controlling variables, and experimenting.…
Tchamna, Rodrigue; Lee, Moonyong
2018-01-01
This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
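To see the constraint handling in action, here is a minimal simulation sketch of a PI loop on an unstable first-order process with amplitude and rate-of-change limits on the manipulated variable. The gains and limits are hypothetical and do not reproduce the paper's analytical design; the process-variable constraint would be handled analogously by bounding y.

import numpy as np

a, b = 0.5, 1.0                 # unstable open-loop pole at +0.5
kp, ki = 2.0, 1.0               # hypothetical PI gains
u_min, u_max, du_max = -5.0, 5.0, 1.0   # manipulated-variable constraints
dt, setpoint = 0.01, 1.0

y, u_prev, integral = 0.0, 0.0, 0.0
for _ in range(int(10 / dt)):
    error = setpoint - y
    integral += error * dt
    u = kp * error + ki * integral
    # rate-of-change constraint first, then amplitude constraint
    u = np.clip(u, u_prev - du_max * dt, u_prev + du_max * dt)
    u = np.clip(u, u_min, u_max)
    y += (a * y + b * u) * dt   # Euler step of the unstable process
    u_prev = u
print(f"final PV = {y:.3f}")    # settles near the setpoint for these gains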
How to decompose arbitrary continuous-variable quantum operations.
Sefi, Seckin; van Loock, Peter
2011-10-21
We present a general, systematic, and efficient method for decomposing any given exponential operator of bosonic mode operators, describing an arbitrary multimode Hamiltonian evolution, into a set of universal unitary gates. Although our approach is mainly oriented towards continuous-variable quantum computation, it may be used more generally whenever quantum states are to be transformed deterministically, e.g., in quantum control, discrete-variable quantum computation, or Hamiltonian simulation. We illustrate our scheme by presenting decompositions for various nonlinear Hamiltonians including quartic Kerr interactions. Finally, we conclude with two potential experiments utilizing offline-prepared optical cubic states and homodyne detections, in which quantum information is processed optically or in an atomic memory using quadratic light-atom interactions. © 2011 American Physical Society
Geng, Qijin; Tang, Shankang; Wang, Lintong; Zhang, Yunchen
2015-01-01
The adsorption and photocatalytic degradation of gaseous benzene were investigated, considering the operating variables and kinetic mechanism, using nano-titania agglomerates in a purpose-designed annular fluidized bed photocatalytic reactor (AFBPR). The specific adsorption equilibrium constant, adsorption active sites, and apparent reaction rate coefficient of benzene were determined by linear regression analysis at various gas velocities and relative humidities (RH). Based on a series of photocatalytic degradation kinetic equations, the influences of the operating variables on degradation efficiency, apparent reaction rate coefficient and half-life were explored. The findings indicated that the operating variables markedly influenced the adsorption and photocatalytic degradation and the corresponding kinetic parameters. In the photocatalytic degradation process, the relationship between degradation efficiency and RH indicated that water molecules play a dual role related to the structural characteristics of benzene. The optimal operating conditions for photocatalytic degradation of gaseous benzene in the AFBPR were determined as a fluidization number of 1.9, with the required RH depending on the benzene concentration. This investigation highlights the importance of controlling RH and benzene concentration in order to obtain the desired synergy in photocatalytic degradation processes.
Universal Quantum Computing with Arbitrary Continuous-Variable Encoding.
Lau, Hoi-Kwan; Plenio, Martin B
2016-09-02
Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krad, Ibrahim; Gao, David Wenzhong; Ibanez, Eduardo
2016-12-01
The electric power system has continuously evolved in order to accommodate new technologies and operating strategies. As the penetration of integrated variable generation in the system increases, it is beneficial to develop strategies that can help mitigate their effect on the grid. Historically, power system operators have held excess capacity during the commitment and dispatch process to allow the system to handle unforeseen load ramping events. As variable generation resources increase, sufficient flexibility scheduled in the system is required to ensure that system performance does not deteriorate in the presence of additional variability and uncertainty. This paper presents a systematic comparison of various flexibility reserve strategies. Several of them are implemented and applied in a common test system in order to evaluate their effect on economic and reliable operations. Furthermore, a three-stage reserve modifier algorithm is proposed and evaluated for its ability to improve system performance.
NASA Astrophysics Data System (ADS)
Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose
2018-06-01
An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
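As an illustration of the hybrid strategy in category (4), the following sketch trains a machine-learning regressor on radiative-transfer-model simulations and then inverts a measured spectrum. A toy saturating reflectance model stands in for a real RTM such as PROSAIL; the bands, noise level and choice of regressor are all assumptions for illustration only.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 2400, 50)          # 50 hypothetical bands

def toy_rtm(lai):
    # stand-in for an RTM: reflectance saturating with leaf area index
    return 0.4 * (1 - np.exp(-0.6 * lai)) * np.exp(-((wavelengths - 800) / 900) ** 2)

lai_lut = rng.uniform(0.1, 7.0, size=500)         # look-up-table sampling
spectra_lut = np.array([toy_rtm(v) for v in lai_lut])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(spectra_lut, lai_lut)

# "measured" spectrum = simulation + noise; retrieval is a fast prediction
truth = 3.2
measured = toy_rtm(truth) + rng.normal(0, 0.005, size=wavelengths.size)
print(f"retrieved LAI = {model.predict(measured[None, :])[0]:.2f} (truth {truth})")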
Validating the Airspace Concept Evaluation System for Different Weather Days
NASA Technical Reports Server (NTRS)
Zelinski, Shannon; Meyn, Larry
2006-01-01
This paper extends the process for validating the Airspace Concept Evaluation System (ACES) using real-world historical flight operational data. System inputs, such as flight plans and airport and en-route capacities, are generated and processed to create a realistic reproduction of a single day's operations within the National Airspace System. System outputs, such as airport throughput, delays, and en-route sector loads, are then compared to real-world operational metrics and delay statistics for the reproduced day. The process is repeated for four historical days with high and low traffic volume and delay attributed to weather. These four days are simulated using both default en-route capacities and variable en-route capacities that emulate weather. The validation results show that default en-route capacity simulations are closer to real-world data for low-weather days than for high-weather days. The use of reduced variable en-route capacities adds a large delay bias to ACES, but delay trends between weather days are better represented.
Artificial Intelligence Tools for Scaling Up of High Shear Wet Granulation Process.
Landin, Mariana
2017-01-01
The results presented in this article demonstrate the potential of artificial intelligence tools for predicting the endpoint of the granulation process in high-speed mixer granulators of different scales, from 25 L to 600 L. The combination of neurofuzzy logic and gene expression programming technologies allowed the modeling of the impeller power as a function of operating conditions and wet granule properties, establishing the critical variables that affect the response and obtaining a unique experimental polynomial equation (transparent model) of high predictability (R² > 86.78%) for all equipment sizes. Gene expression programming allowed the modeling of the granulation process for granulators of similar and dissimilar geometries and can be improved by incorporating additional characteristics of the process, such as composition variables or operation parameters (e.g., batch size, chopper speed). The principles and methodology proposed here can be applied to understand and control manufacturing processes using any other granulation equipment, including continuous granulation processes. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Space Transportation Operations: Assessment of Methodologies and Models
NASA Technical Reports Server (NTRS)
Joglekar, Prafulla
2001-01-01
The systems design process for future space transportation involves understanding multiple variables and their effect on lifecycle metrics. Variables such as technology readiness or potential environmental impact are qualitative, while variables such as reliability, operations costs or flight rates are quantitative. In deciding what new design concepts to fund, NASA needs a methodology that would assess the sum total of all relevant qualitative and quantitative lifecycle metrics resulting from each proposed concept. The objective of this research was to review the state of operations assessment methodologies and models used to evaluate proposed space transportation systems and to develop recommendations for improving them. It was found that, compared to the models available from other sources, the operations assessment methodology recently developed at Kennedy Space Center has the potential to produce a decision support tool that will serve as the industry standard. Towards that goal, a number of areas of improvement in the Kennedy Space Center's methodology are identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Lawrence E.
This report provides findings from the field regarding the best ways in which to guide operational strategies, business processes and control room tools to support the integration of renewable energy into electrical grids.
Breaking the trade-off between efficiency and service.
Frei, Frances X
2006-11-01
For manufacturers, customers are the open wallets at the end of the supply chain. But for most service businesses, they are key inputs to the production process. Customers introduce tremendous variability to that process, but they also complain about any lack of consistency and don't care about the company's profit agenda. Managing customer-introduced variability, the author argues, is a central challenge for service companies. The first step is to diagnose which type of variability is causing mischief: Customers may arrive at different times, request different kinds of service, possess different capabilities, make varying degrees of effort, and have different personal preferences. Should companies accommodate variability or reduce it? Accommodation often involves asking employees to compensate for the variations among customers--a potentially costly solution. Reduction often means offering a limited menu of options, which may drive customers away. Some companies have learned to deal with customer-introduced variability without damaging either their operating environments or customers' service experiences. Starbucks, for example, handles capability variability among its customers by teaching them the correct ordering protocol. Dell deals with arrival and request variability in its high-end server business by outsourcing customer service while staying in close touch with customers to discuss their needs and assess their experiences with third-party providers. The effective management of variability often requires a company to influence customers' behavior. Managers attempting that kind of intervention can follow a three-step process: diagnosing the behavioral problem, designing an operating role for customers that creates new value for both parties, and testing and refining approaches for influencing behavior.
Parareal algorithms with local time-integrators for time fractional differential equations
NASA Astrophysics Data System (ADS)
Wu, Shu-Lin; Zhou, Tao
2018-04-01
It is challenging to design parareal algorithms for time-fractional differential equations due to the historical effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme to overcome this issue, by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse grid correction so that the auxiliary variables and the solution variable are corrected separately, in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
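For orientation, here is a sketch of the classical parareal iteration on the test equation y' = lam*y. The paper's fractional-operator machinery (auxiliary variables localizing the memory term, mixed coarse-grid correction) is omitted for brevity, so this only illustrates the predictor-corrector structure the algorithm builds on.

import numpy as np

lam, T, N = -1.0, 2.0, 10          # decay ODE, time horizon, coarse slices
dT = T / N

def coarse(y, dt=dT):              # one backward-Euler step (cheap G)
    return y / (1 - lam * dt)

def fine(y, steps=100):            # many small steps (expensive F)
    dt = dT / steps
    for _ in range(steps):
        y = y + dt * lam * y       # forward Euler at fine resolution
    return y

U = np.empty(N + 1); U[0] = 1.0
for n in range(N):                 # initial coarse sweep
    U[n + 1] = coarse(U[n])

for k in range(5):                 # parareal corrections
    F = np.array([fine(U[n]) for n in range(N)])      # from previous iterate
    G_old = np.array([coarse(U[n]) for n in range(N)])
    for n in range(N):             # sequential update with new coarse terms
        U[n + 1] = coarse(U[n]) + F[n] - G_old[n]
print(f"parareal vs exact: {U[-1]:.6f} vs {np.exp(lam * T):.6f}")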
Theoretical and experimental researches on the operating costs of a wastewater treatment plant
NASA Astrophysics Data System (ADS)
Panaitescu, M.; Panaitescu, F.-V.; Anton, I.-A.
2015-11-01
Purpose of the work: The total cost of a sewage plant is often determined by the present value method. All of the annual operating costs for each process are converted to their present-day values and added to the investment costs for each process, which yields the net present value. The costs of sewage plants are subdivided, in general, into investment and operating costs. The latter can be fixed (normal operation and maintenance, contracted power) or variable (chemicals and power, sludge treatment and disposal, effluent charges). For the purpose of evaluating preliminary costs, so that an installation can choose between different alternatives in the early phase of a project, cost functions can be used. In this paper the operational cost is calculated for several scenarios in order to optimize it. The total operational cost (fixed and variable) depends on the global parameters of the wastewater treatment plant. Research and methodology: The wastewater treatment plant costs are subdivided into investment and operating costs. Different cost functions can be used to estimate fixed and variable operating costs; in this study we used statistical formulas for the cost functions. The method applied to study the impact of the influent characteristics on the costs is economic analysis. Optimization of plant design consists, firstly, of assessing the ability of the smallest design to treat the maximum loading rates to a given effluent quality and, secondly, of comparing the cost of the two alternatives for average and maximum loading rates. Results: We obtained statistical values for the investment cost functions, operational fixed costs and operational variable costs of the wastewater treatment plant, with graphical representations. All costs were compared at net present values. We observe that it is more economical to build a larger plant, especially if maximum loading rates are reached. The actual target of operational management is to implement the presented cost functions directly in a software tool, in which the design of a plant and the simulation of its behaviour are evaluated simultaneously.
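A minimal sketch of the present-value comparison described above, using power-law cost functions: the coefficients, discount rate and flows below are hypothetical, whereas the paper derives its own statistical cost functions from plant data.

def investment_cost(flow_m3d, a=1.5e4, b=0.7):
    return a * flow_m3d ** b          # economies of scale when b < 1

def annual_operating_cost(flow_m3d, fixed=2.0e5, var_per_m3=0.15):
    return fixed + var_per_m3 * flow_m3d * 365

def present_value(flow_m3d, years=20, rate=0.05):
    pv_ops = sum(annual_operating_cost(flow_m3d) / (1 + rate) ** t
                 for t in range(1, years + 1))
    return investment_cost(flow_m3d) + pv_ops

for flow in (10_000, 20_000):          # average vs maximum loading design
    print(f"{flow:>6} m3/d -> present-value cost {present_value(flow) / 1e6:.1f} M")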
The Research on Linux Memory Forensics
NASA Astrophysics Data System (ADS)
Zhang, Jun; Che, ShengBing
2018-03-01
Memory forensics is a branch of computer forensics. It does not depend on operating system APIs, but instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, this work analyzes system process and thread information from physical memory data. Using ELF file debugging information, we propose a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain system process information from physical memory data and is compatible with multiple versions of the Linux kernel.
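Once debugging information yields a member's offset inside a kernel structure, reading it from a dump is plain offset arithmetic, as the following sketch shows. The offsets and the dump file are hypothetical; real task_struct layouts differ per kernel build, which is exactly why the paper derives offsets from ELF debug data instead of hard-coding them.

import struct

TASK_STRUCT_ADDR = 0x1000       # physical offset of a task_struct in the dump
PID_OFFSET = 0x398              # hypothetical offset of the 'pid' member
COMM_OFFSET = 0x550             # hypothetical offset of the 'comm' name field

with open("memdump.raw", "rb") as f:
    f.seek(TASK_STRUCT_ADDR + PID_OFFSET)
    pid = struct.unpack("<i", f.read(4))[0]          # pid is a 32-bit int
    f.seek(TASK_STRUCT_ADDR + COMM_OFFSET)
    comm = f.read(16).split(b"\x00")[0].decode()     # comm is char[16]
print(f"process {pid}: {comm}")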
Development of uniform and predictable battery materials for nickel-cadmium aerospace cells
NASA Technical Reports Server (NTRS)
1971-01-01
Battery materials and manufacturing methods were analyzed with the aim of developing uniform and predictable battery plates for nickel-cadmium aerospace cells. A study is presented of the high-temperature electrochemical impregnation process for the preparation of nickel-cadmium battery plates. This comparative study is set up as a factorially designed experiment to examine both manufacturing and operational variables and any interactions that might exist between them. The manufacturing variables in the factorial design include plaque preparation method, plaque porosity and thickness, impregnation method, and loading. The operational variables are type of duty cycle, charge and discharge rate, extent of overcharge, and depth of discharge.
Fuzzy simulation in concurrent engineering
NASA Technical Reports Server (NTRS)
Kraslawski, A.; Nystrom, L.
1992-01-01
Concurrent engineering is becoming a very important practice in manufacturing. A problem in concurrent engineering is the uncertainty associated with the values of the input variables and operating conditions. The problem discussed in this paper concerns the simulation of processes in which the raw materials and the operational parameters possess fuzzy characteristics. The processing of fuzzy input information is performed by the vertex method and the commercial simulation packages POLYMATH and GEMS. Examples are presented to illustrate the usefulness of the method in the simulation of chemical engineering processes.
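A sketch of the vertex method mentioned above: for one alpha-cut, each fuzzy input becomes an interval, and the output interval is bounded by evaluating the model at every combination of interval endpoints. The reactor model here is a toy conversion expression, not one of the POLYMATH/GEMS flowsheets, and the intervals are invented.

import math
from itertools import product

def conversion(temp_k, tau_min):
    k = 1e3 * math.exp(-3000 / temp_k)      # toy Arrhenius rate constant
    return k * tau_min / (1 + k * tau_min)  # CSTR-like conversion

# alpha-cut intervals of the fuzzy inputs (hypothetical values)
temp_interval = (350.0, 370.0)
tau_interval = (8.0, 12.0)

values = [conversion(t, tau) for t, tau in product(temp_interval, tau_interval)]
print(f"conversion interval at this alpha-cut: [{min(values):.3f}, {max(values):.3f}]")

Note that evaluating only the vertices bounds the output exactly when the model is monotone in each input over the intervals, which is the usual applicability condition of the vertex method.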
Marković, Snežana; Kerč, Janez; Horvat, Matej
2017-03-01
We present a new approach to identifying sources of variability within a manufacturing process by NIR measurements of samples of intermediate material after each consecutive unit operation (interprocess NIR sampling technique). In addition, we summarize the development of a multivariate statistical process control (MSPC) model for the production of an enteric-coated pellet product of the proton-pump inhibitor class. By developing provisional NIR calibration models, the identification of critical process points yields results comparable to the established MSPC modeling procedure. Both approaches are shown to lead to the same conclusion, identifying parameters of extrusion/spheronization and characteristics of lactose that have the greatest influence on the end-product's enteric coating performance. The proposed approach enables quicker and easier identification of variability sources during the manufacturing process, especially in cases where historical process data are not readily available. In the presented case, changes in lactose characteristics influenced the performance of the extrusion/spheronization process step. The pellet cores produced using one lactose source (considered less suitable) were on average larger and more fragile, leading to breakage of the cores during subsequent fluid bed operations. These results were confirmed by additional experimental analyses illuminating the underlying mechanism of fracture of oblong pellets during the pellet coating process, which leads to compromised film coating.
PHYSICAL AND OPTICAL PROPERTIES OF STEAM-EXPLODED LASER-PRINTED PAPER
Laser-printed paper was pulped by the steam-explosion process. A full-factorial experimental design was applied to determine the effects of key operating variables on the properties of steam-exploded pulp. The variables were addition level for pulping chemicals (NaOH and/or Na2SO...
Huben, Neil; Hussein, Ahmed; May, Paul; Whittum, Michelle; Kraswowki, Collin; Ahmed, Youssef; Jing, Zhe; Khan, Hijab; Kim, Hyung; Schwaab, Thomas; Underwood Iii, Willie; Kauffman, Eric; Mohler, James L; Guru, Khurshid A
2018-04-10
To develop a methodology for predicting operative times for robot-assisted radical prostatectomy (RARP) using preoperative patient, disease, procedural and surgeon variables, to facilitate operating room (OR) scheduling. The model included preoperative metrics: BMI, ASA score, clinical stage, National Comprehensive Cancer Network (NCCN) risk, prostate weight, nerve-sparing status, extent and laterality of lymph node dissection, and operating surgeon (6 surgeons were included in the study). A binary decision tree was fit using a conditional inference tree method to predict operative times. The variables most associated with operative time were determined using permutation tests. The data were split at the value of the variable that resulted in the largest difference in mean surgical time across the split, and this process was repeated recursively on the resulting subsets. 1709 RARPs were included. The variable most strongly associated with operative time was the surgeon (surgeons 2 and 4 were 102 minutes shorter than surgeons 1, 3, 5, and 6; p<0.001). Among surgeons 2 and 4, BMI had the strongest association with surgical time (p<0.001). Among patients operated on by surgeons 1, 3, 5 and 6, RARP time was again most strongly associated with the operating surgeon; surgeons 1, 3, and 6 were on average 76 minutes faster than surgeon 5 (p<0.001). The regression tree output, in the form of box plots, showed operative time medians and ranges according to patient, disease, procedural and surgeon metrics. We developed a methodology that can predict operative times for RARP based on patient, disease and surgeon variables. This methodology can be used for quality control, facilitate OR scheduling and maximize OR efficiency.
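The following sketch shows a regression tree predicting operative time from the kinds of preoperative variables listed above. The data are synthetic, and sklearn's CART is used in place of the paper's conditional inference tree (a permutation-test-based method), so the splits are purely illustrative of the recursive-partitioning idea.

import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(2)
n = 500
surgeon = rng.integers(1, 7, n)                 # surgeons 1..6
bmi = rng.normal(28, 4, n)
minutes = (180 + 5 * (bmi - 28)                 # synthetic BMI effect
           - 100 * np.isin(surgeon, [2, 4])     # synthetic surgeon effect
           + rng.normal(0, 15, n))

X = np.column_stack([surgeon, bmi])
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=30).fit(X, minutes)
print(export_text(tree, feature_names=["surgeon", "bmi"]))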
Examination of a carton sealing line using a thermographic scanner
NASA Astrophysics Data System (ADS)
Kleinfeld, Jack M.
1999-03-01
A study of the operation and performance of natural-gas-fired sealing lines for polyethylene-coated beverage containers was performed. Both thermal and geometric data were extracted from the thermal scans and used to characterize the performance of the sealing line. The impact of process operating variables such as line speed and carton-to-carton spacing was studied. Recommendations for system improvements, instrumentation and process control were made.
ERIC Educational Resources Information Center
Bermani, Michelle Ines
2017-01-01
In this quantitative and qualitative mixed study, the researcher focused on a range of factors that drive principals' decision making and examined the variables that affect principals' decision-making. The study assessed the extent to which principals' leadership and decision-making processes exert influence on the operations of inclusion…
A fuzzy decision tree for fault classification.
Zio, Enrico; Baraldi, Piero; Popescu, Irina C
2008-02-01
In plant accident management, the control room operators are required to identify the causes of the accident based on the different patterns of evolution that the monitored process variables thereby develop. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data, and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
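To convey the flavor of such transparent fuzzy if-then fault rules, here is a toy sketch. The membership functions and the two rules are invented for illustration; the paper derives its rules by clustering preclassified signal data.

def tri(x, a, b, c):
    # triangular membership function peaking at b
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def classify(flow, temp):
    # Rule 1: IF flow is LOW AND temp is HIGH THEN fault = pump degradation
    r1 = min(tri(flow, 0, 20, 40), tri(temp, 60, 80, 100))
    # Rule 2: IF flow is NORMAL AND temp is NORMAL THEN state = nominal
    r2 = min(tri(flow, 30, 50, 70), tri(temp, 20, 40, 60))
    return max((r1, "pump degradation"), (r2, "nominal"))

print(classify(flow=22.0, temp=78.0))   # -> highest-activation class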
Accelerated design of bioconversion processes using automated microscale processing techniques.
Lye, Gary J; Ayazi-Shamlou, Parviz; Baganz, Frank; Dalby, Paul A; Woodley, John M
2003-01-01
Microscale processing techniques are rapidly emerging as a means to increase the speed of bioprocess design and reduce material requirements. Automation of these techniques can reduce labour intensity and enable a wider range of process variables to be examined. This article examines recent research on various individual microscale unit operations including microbial fermentation, bioconversion and product recovery techniques. It also explores the potential of automated whole process sequences operated in microwell formats. The power of the whole process approach is illustrated by reference to a particular bioconversion, namely the Baeyer-Villiger oxidation of bicyclo[3.2.0]hept-2-en-6-one for the production of optically pure lactones.
Temporal Proof Methodologies for Real-Time Systems,
1990-09-01
Real-time systems that communicate either through shared variables or by message passing, and real-time issues such as time-outs, process priorities (interrupts) and process scheduling. The authors exhibit two styles for the specification of real-time systems. While the first approach uses bounded versions of temporal operators, the second approach allows explicit references to time through a special clock variable. Corresponding to the two styles of specification, the authors present and compare two fundamentally different proof…
Quantum information processing with a travelling wave of light
NASA Astrophysics Data System (ADS)
Serikawa, Takahiro; Shiozawa, Yu; Ogawa, Hisashi; Takanashi, Naoto; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira
2018-02-01
We exploit quantum information processing on a traveling wave of light, expecting emancipation from thermal noise, easy coupling to fiber communication, and potentially high operation speed. Although optical memories are technically challenging, we have an alternative approach for applying multi-step operations on traveling light, namely continuous-variable one-way computation. So far our achievements include generation of a one-million-mode entangled chain in the time domain, mode engineering of nonlinear resource states, and real-time nonlinear feedforward. Although these are implemented with free-space optics, we are also investigating photonic integration and have performed quantum teleportation with a passive linear waveguide chip as a demonstration of entangling, measurement, and feedforward. We also suggest a loop-based architecture as another model of continuous-variable computing.
Rolling-element bearings: A review of the state of the art
NASA Technical Reports Server (NTRS)
Anderson, W. J.; Zaretsky, E. V.
1973-01-01
Some of the research that has brought rolling-element technology to its present state is discussed. Areas touched upon include material effects, processing variables, operating variables, design optimization, lubricant effects and lubrication methods. Finally, problem areas are discussed in relation to the present state of the art and anticipated requirements.
Jiménez, Lucero; Arriaga, Sonia; Aizpuru, Aitor
2016-01-01
Biofiltration of volatile organic compounds is still considered an emerging technology. Its reliability remains questionable, as no data are available regarding the intrinsic repeatability of the process. Herein, two identically operated toluene biofiltration systems are comprehensively compared during long-term operation (129 days). Globally, the reactors responded very similarly, even during transient conditions, with, for example, strong biological activity from the first days of operation and comparable periods of lower removal efficiency (81.2%) after exposure to high inlet loads (140 g m(-3) h(-1)). Regarding steady states, very similar maximum elimination capacities of up to 99 g m(-3) h(-1) were attained. Estimation of the process repeatability with the paired-samples Student's t-test indicated no statistically significant difference between elimination capacities. Repeatability was also established for several descriptors of the process, such as the carbon dioxide and biomass production, the pH and organic content of the leachates, and the moisture content of the packing material. While some parameters, such as the pH, presented a remarkably low divergence between biofilters (coefficient of variability of 1.4%), others, such as the organic content of the leachates, presented higher variability (30.6%) due to uneven biomass lixiviation associated with stochastic hydrodynamics and biomass repartition. Regarding process efficiency, it was established that less than 10% fluctuation is to be expected between the elimination capacities of identical biofilter set-ups. A further statistical comparison between the first halves of the biofilter columns indicated very similar coefficients of variability, confirming the repeatability of the process for different biofilter lengths.
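The repeatability test described above reduces to a paired-samples t-test on the elimination capacities of the twin biofilters, as in the following sketch. The data values are made up; scipy's ttest_rel implements the paired Student's t-test.

import numpy as np
from scipy import stats

ec_biofilter_a = np.array([95.1, 87.3, 99.0, 78.4, 91.2])  # g m-3 h-1
ec_biofilter_b = np.array([93.8, 88.9, 97.5, 80.1, 90.0])

t, p = stats.ttest_rel(ec_biofilter_a, ec_biofilter_b)
print(f"t = {t:.2f}, p = {p:.3f}")   # p > 0.05 -> no significant difference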
Hierarchical Synthesis of Coastal Ecosystem Health Indicators at Karimunjawa National Marine Park
NASA Astrophysics Data System (ADS)
Danu Prasetya, Johan; Ambariyanto; Supriharyono; Purwanti, Frida
2018-02-01
The coastal ecosystem of Karimunjawa National Marine Park (KNMP) faces various pressures, including those from human activity. Periodic monitoring of the health of coastal ecosystems is needed to evaluate ecosystem condition, and systematic, consistent indicators are required for such monitoring. This paper presents a hierarchical synthesis of coastal ecosystem health indicators using the Analytic Hierarchy Process (AHP) method. The hierarchical synthesis is obtained from a weighting process using paired comparisons based on expert judgments. The indicator set comprises three levels of variables: main variables, sub-variables and operational variables. As a result of the assessment, the coastal ecosystem health indicators consist of three main variables: State of Ecosystem, Pressure and Management. The main variables State of Ecosystem and Management obtained the same weight, 0.400, while Pressure was weighted 0.200. Each main variable consists of several sub-variables: coral reef, reef fish, mangrove and seagrass for State of Ecosystem; fisheries and marine tourism activity for Pressure; and planning and regulation, institutional, and infrastructure and financing for Management. The highest-weighted sub-variables of State of Ecosystem, Pressure and Management were coral reef (0.186), marine tourism pressure (0.133) and institutional (0.171), respectively. The highest-weighted operational variables were percent coral cover (0.058), marine tourism pressure (0.133) and presence of a zonation plan, regulation and socialization of the monitoring program (0.53), respectively. Potential pressure from marine tourism activity is the variable that most affects the health of the ecosystem. The results of this research suggest the need for stronger conservation strategies to cope with pressures from marine tourism activities.
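A sketch of how AHP weights are derived from a pairwise comparison matrix by the principal-eigenvector method: the 3x3 matrix below encodes the reported main-variable outcome (State of Ecosystem and Management equal, each twice Pressure), whereas the actual study aggregates many expert judgment matrices.

import numpy as np

# rows/cols: State of Ecosystem, Pressure, Management
A = np.array([[1.0, 2.0, 1.0],
              [0.5, 1.0, 0.5],
              [1.0, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                      # normalize to priority weights
print(dict(zip(["State", "Pressure", "Management"], w.round(3))))
# -> {'State': 0.4, 'Pressure': 0.2, 'Management': 0.4}, matching the text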
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakash, A., E-mail: amitknp@postech.ac.kr, E-mail: amit.knp02@gmail.com, E-mail: hwanghs@postech.ac.kr; Song, J.; Hwang, H., E-mail: amitknp@postech.ac.kr, E-mail: amit.knp02@gmail.com, E-mail: hwanghs@postech.ac.kr
In order to obtain reliable multilevel cell (MLC) characteristics, resistance controllability between the different resistance levels is required, especially in resistive random access memory (RRAM), which is prone to resistance variability mainly due to the intrinsically random nature of defect generation and filament formation. In this study, we have thoroughly investigated the multilevel resistance variability in a TaOx-based nanoscale (<30 nm) RRAM operated in MLC mode. It is found that the resistance variability not only depends on the conductive filament size but is also a strong function of the oxygen vacancy concentration in it. Based on the insights gained through experimental observations and simulation, it is suggested that forming a thinner but denser conductive filament may greatly improve the temporal resistance variability even at low operation current, despite the inherent stochastic nature of the resistance switching process.
Jayasumana, Channa; Ranasinghe, Omesh; Ranasinghe, Sachini; Siriwardhana, Imalka; Gunatilake, Sarath; Siribaddana, Sisira
2016-11-01
Chronic Interstitial Nephritis in Agricultural Communities (CINAC) causes major morbidity and mortality for farmers in the North-Central province (NCP) of Sri Lanka. To prevent CINAC, reverse osmosis (RO) plants have been established to purify water and reduce exposure to possible nephrotoxins in drinking water. We assessed RO plant maintenance and efficacy in the NCP, interviewing 10 RO plant operators on plant establishment, maintenance, usage and funding. We also measured total dissolved solids (TDS, in ppm) to assess the efficacy of the RO process. Most RO plants were operated by community-based organizations. They provide a clean and sustainable water source for many in the NCP for a nominal, though variable, fee. The RO plant operators carry out plant maintenance; however, maintenance procedures and quality management practices tend to vary from one operator to another. The RO process itself lowers the TDS of the water, on average to 29 ppm, and so reduces the impurities in water available to many individuals within CINAC-endemic regions. However, the variation in maintenance, quality management, and day-to-day care between operators can be a cause for concern. This variability can affect the quality of water produced by an RO plant, its maintenance cost and its lifespan. Thus, uniform regulation and training are needed to reduce the cost of maintenance and increase the efficacy of RO plants.
DWPF Melter Off-Gas Flammability Assessment for Sludge Batch 9
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, A. S.
2016-07-11
The slurry feed to the Defense Waste Processing Facility (DWPF) melter contains several organic carbon species that decompose in the cold cap and produce flammable gases that could accumulate in the off-gas system and create a potential flammability hazard. To mitigate such a hazard, DWPF has implemented a strategy to impose the Technical Safety Requirement (TSR) limits on all key operating variables affecting off-gas flammability and to operate the melter within those limits using both hardwired/software interlocks and administrative controls. The operating variables that are currently being controlled include: (1) total organic carbon (TOC), (2) air purges for combustion and dilution, (3) melter vapor space temperature, and (4) feed rate. The safety basis limits for these operating variables are determined using two computer models, the 4-stage cold cap and Melter Off-Gas (MOG) dynamics models, under the baseline upset scenario - a surge in off-gas flow due to the inherent cold cap instabilities in the slurry-fed melter.
Variable-length analog of Stavskaya process: A new example of misleading simulation
NASA Astrophysics Data System (ADS)
Ramos, A. D.; Silva, F. S. G.; Sousa, C. S.; Toom, A.
2017-05-01
This article presents a new example intended to showcase limitations of computer simulations in the study of random processes with local interaction. For this purpose, we examine a new version of the well-known Stavskaya process, which is a discrete-time analog of the well-known contact processes. Like the bulk of random processes studied till now, the Stavskaya process is constant-length, that is, its components do not appear or disappear in the course of its functioning. The process which we study here, called Variable Stavskaya (VS), is similar to Stavskaya: it is discrete-time; its states are bi-infinite sequences whose terms take only two values (denoted here as "minus" and "plus"); and the measure concentrated in the configuration "all pluses" is invariant. However, it has variable length, which means that its components, also called particles, may appear and disappear under its action. The operator VS is a composition of the following two operators. The first operator, called "birth," depends on a real parameter β; it creates a new component in the state "plus" between every two neighboring components with probability β, independently of what happens at other places. The second operator, called "murder," depends on a real parameter α and acts in the following way: whenever a plus is the left neighbor of a minus, this plus disappears (as if murdered by the minus which is its right neighbor) with probability α, independently of what happens to other particles. We prove, for any α < 1, any β > 0 and any initial measure μ, that the sequence μ(VS)^t (the result of t iterative applications of VS to μ) tends to the measure δ⊕ (concentrated in "all pluses") as t → ∞. Such behavior is often called ergodic. However, the Monte Carlo simulations and mean-field approximations which we performed behaved as if μ(VS)^t tended to δ⊕ much more slowly for some α, β, μ than for others. Based on these numerical results, we conjecture that VS has phases, but not in the same simple sense as the classical Stavskaya process.
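A sketch of a finite Monte Carlo run of the Variable Stavskaya operator on a list of +/-1 spins follows. True VS acts on bi-infinite configurations; the finite window, truncation and free boundaries here are exactly the kind of compromise that can make simulations misleading, as the article warns, so this is illustration rather than evidence.

import random

def vs_step(cfg, alpha, beta):
    # birth: insert a plus between each neighboring pair with probability beta
    born = []
    for i, s in enumerate(cfg):
        born.append(s)
        if i + 1 < len(cfg) and random.random() < beta:
            born.append(+1)
    # murder: a plus immediately left of a minus disappears with prob. alpha
    out, i = [], 0
    while i < len(born):
        if (born[i] == +1 and i + 1 < len(born)
                and born[i + 1] == -1 and random.random() < alpha):
            i += 1                      # the plus is removed
        else:
            out.append(born[i])
            i += 1
    return out

random.seed(0)
cfg = [-1] * 200                        # start far from "all pluses"
for t in range(500):
    cfg = vs_step(cfg, alpha=0.6, beta=0.3)[:200]   # truncate to a fixed window
print(f"fraction of pluses after 500 steps: {cfg.count(+1) / len(cfg):.3f}")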
Unique variable polarity plasma arc welding for space shuttle
NASA Technical Reports Server (NTRS)
Schwinghamer, R. J.
1985-01-01
Since the introduction of the plasma arc torch in 1955, and subsequent to the work at Boeing in the 1960s, significant improvements crucial to success have been made in the Variable Polarity Plasma Arc (VPPA) process at the Marshall Space Flight Center. Several very important advantages of this process are given, and the genesis of PA welding, the genesis of VPPA welding, special equipment requirements, weld property development, results with other aluminum alloys, and the eventual successful transition of VPPA to production operations are discussed.
40 CFR 63.1439 - General recordkeeping and reporting provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for 6 hours, then the daily average is the average of the temperature measurements taken during those... operating conditions, considering typical variability of the specific process and combustion, recovery, or... temperature reading of −200 °C on a boiler), and will alert the operator by alarm or other means. The owner or...
Every Equation Tells a Story: Using Equation Dictionaries in Introductory Geophysics
ERIC Educational Resources Information Center
Caplan-Auerbach, Jacqueline
2009-01-01
Many students view equations as a series of variables and operators into which numbers should be plugged rather than as representative of a physical process. To solve a problem they may simply look for an equation with the correct variables and assume it meets their needs, rather than selecting an equation that represents the appropriate physical…
Troubleshooting crude vacuum tower overhead ejector systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lines, J.R.; Frens, L.L.
1995-03-01
Routinely surveying tower overhead vacuum systems can improve performance and product quality. These vacuum systems normally provide reliable and consistent operation. However, process conditions, supplied utilities, corrosion, erosion and fouling all have an impact on ejector system performance. Refinery vacuum distillation towers use ejector systems to maintain tower top pressure and remove overhead gases. However, as with virtually all refinery equipment, performance may be affected by a number of variables, which may act independently or concurrently. It is important to understand the basic operating principles of vacuum systems and how performance is affected by utilities, corrosion and erosion, fouling, and process conditions. Reputable vacuum-system suppliers have service engineers who will come to a refinery to survey the system and troubleshoot performance or offer suggestions for improvement. A skilled vacuum-system engineer may be needed to diagnose and remedy system problems. The effect of these variables on performance is discussed, along with a case history of a vacuum system on a crude tower in a South American refinery.
Variability in Rheumatology day care hospitals in Spain: VALORA study.
Hernández Miguel, María Victoria; Martín Martínez, María Auxiliadora; Corominas, Héctor; Sanchez-Piedra, Carlos; Sanmartí, Raimon; Fernandez Martinez, Carmen; García-Vicuña, Rosario
To describe the variability of Rheumatology day care hospital units (DCHUs) in Spain, in terms of structural resources and operating processes. Multicenter descriptive study with data from a self-completed DCHU self-assessment questionnaire based on the DCHU quality standards of the Spanish Society of Rheumatology. Structural resources and operating processes were analyzed and stratified by hospital complexity (regional, general, major and complex). Variability was determined using the coefficient of variation (CV) of the clinically relevant variable that presented statistically significant differences when compared across centers. A total of 89 hospitals (16 autonomous regions and Melilla) were included in the analysis: 11.2% of hospitals were regional, 22.5% general, 27% major and 39.3% complex. A total of 92% of DCHUs were polyvalent. The number of treatments applied, the coordination between DCHUs and hospital pharmacy, and the postgraduate training process were the variables that showed statistically significant differences depending on hospital complexity. The highest rate of rheumatologic treatments was found in complex hospitals (2.97 per 1,000 population), and the lowest in general hospitals (2.01 per 1,000 population). The CV was 0.88 in major hospitals, 0.86 in regional, 0.76 in general, and 0.72 in complex hospitals. There was variability in the number of treatments delivered in DCHUs, greater in major hospitals and then in regional centers. Nonetheless, the variability in terms of structure and function does not seem due to differences in center complexity. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
The nature and use of prediction skills in a biological computer simulation
NASA Astrophysics Data System (ADS)
Lavoie, Derrick R.; Good, Ron
The primary goal of this study was to examine the science process skill of prediction using qualitative research methodology. The think-aloud interview, modeled after Ericsson and Simon (1984), led to the identification of 63 program exploration and prediction behaviors. The performance of seven formal and seven concrete operational high-school biology students was videotaped during a three-phase learning sequence on water pollution. Subjects explored the effects of five independent variables on two dependent variables over time using a computer-simulation program. Predictions were made concerning the effects of the independent variables upon the dependent variables through time. Subjects were classified according to initial knowledge of the subject matter and success at solving three selected prediction problems. Successful predictors generally had high initial knowledge of the subject matter and were formal operational. Unsuccessful predictors generally had low initial knowledge and were concrete operational. High initial knowledge seemed to be more important to predictive success than stage of Piagetian cognitive development. Successful prediction behaviors involved systematic manipulation of the independent variables, note taking, identification and use of appropriate independent-dependent variable relationships, high interest and motivation, and, in general, higher-level thinking skills. Behaviors characteristic of unsuccessful predictors were nonsystematic manipulation of independent variables, lack of motivation and persistence, misconceptions, and the identification and use of inappropriate independent-dependent variable relationships.
Tomperi, Jani; Leiviskä, Kauko
2018-06-01
Traditionally, modelling of the activated sludge process has been based solely on process measurements, but as interest in optically monitoring wastewater samples to characterize floc morphology has increased, in recent years the results of image analyses have been utilized more frequently to predict the characteristics of wastewater. This study shows that neither the traditional process measurements nor the automated optical monitoring variables by themselves are capable of producing the best predictive models for treated wastewater quality in a full-scale wastewater treatment plant; utilizing these variables together yields the optimal models, which show the level of, and changes in, treated wastewater quality. With this early warning, process operation can be optimized to avoid environmental damage and economic losses. The study also shows that specific optical monitoring variables are important in modelling a certain quality parameter, regardless of the other input variables available.
NASA Astrophysics Data System (ADS)
Fernández-González, Daniel; Martín-Duarte, Ramón; Ruiz-Bustinza, Íñigo; Mochón, Javier; González-Gasca, Carmen; Verdeja, Luis Felipe
2016-08-01
Blast furnace operators expect to get sinter with homogenous and regular properties (chemical and mechanical), necessary to ensure regular blast furnace operation. Blends for sintering also include several iron by-products and other wastes that are obtained in different processes inside the steelworks. Due to their source, the availability of such materials is not always consistent, but their total production should be consumed in the sintering process, to both save money and recycle wastes. The main scope of this paper is to obtain the least expensive iron ore blend for the sintering process, which will provide suitable chemical and mechanical features for the homogeneous and regular operation of the blast furnace. The systematic use of statistical tools was employed to analyze historical data, including linear and partial correlations applied to the data and fuzzy clustering based on the Sugeno Fuzzy Inference System to establish relationships among the available variables.
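The least-cost blend can be posed as a linear program once the chemical and mechanical requirements are written as linear constraints on the blend fractions, as in the sketch below. The ore data, prices and the single Fe-content constraint are invented for illustration; the paper instead mines its requirements from historical data with statistical and fuzzy-clustering tools.

import numpy as np
from scipy.optimize import linprog

costs = np.array([55.0, 48.0, 30.0])       # $/t: rich ore, lean ore, by-product
fe_content = np.array([0.64, 0.58, 0.45])  # iron mass fraction of each material

# minimize cost subject to: fractions sum to 1, blend Fe >= 0.58,
# and the by-product making up at least 10% of the blend (to consume it)
res = linprog(
    c=costs,
    A_ub=np.array([-fe_content, [0.0, 0.0, -1.0]]),
    b_ub=np.array([-0.58, -0.10]),
    A_eq=np.array([[1.0, 1.0, 1.0]]),
    b_eq=np.array([1.0]),
    bounds=[(0, 1)] * 3,
)
print(res.x.round(3), f"cost = {res.x @ costs:.2f} $/t")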
Gaia DR1 documentation Chapter 6: Variability
NASA Astrophysics Data System (ADS)
Eyer, L.; Rimoldini, L.; Guy, L.; Holl, B.; Clementini, G.; Cuypers, J.; Mowlavi, N.; Lecoeur-Taïbi, I.; De Ridder, J.; Charnas, J.; Nienartowicz, K.
2017-12-01
This chapter describes the photometric variability processing of the Gaia DR1 data. Coordination Unit 7 (CU7) is responsible for the variability analysis of over a billion celestial sources, and in particular for the definition, design, development, validation and provision of a software package for the data processing of photometrically variable objects. The responsibilities of the Data Processing Centre Geneva (DPCG) cover all issues related to the computational part of the CU7 analysis. These span hardware provisioning (including selection, deployment and optimisation of suitable hardware), choosing and developing the software architecture, and defining data and scientific workflows, as well as operational activities such as configuration management, data import, time series reconstruction, storage and processing handling, visualisation and data export. CU7/DPCG is also responsible for interaction with other DPCs and CUs, software and programming training for the CU7 members, scientific software quality control, and management of the software and data lifecycle. Details about the specific data treatment steps of the Gaia DR1 data products are found in Eyer et al. (2017) and are not repeated here. The variability content of Gaia DR1 focusses on a subsample of Cepheids and RR Lyrae stars around the south ecliptic pole, showcasing the performance of the Gaia photometry with respect to variable objects.
NASA. Langley Research Center dry powder towpreg system
NASA Technical Reports Server (NTRS)
Baucom, Robert M.; Marchello, Joseph M.
1990-01-01
Dry powder polymer impregnated carbon fiber tows were produced for preform weaving and composite materials molding applications. In the process, fluidized powder is deposited on spread tow bundles and melted on the fibers by radiant heating to adhere the polymer to the fiber. Unit design theory and operating correlations were developed to provide the basis for scale up of the process to commercial operation. Special features of the operation are the pneumatic tow spreader, fluidized bed, resin feeder, and quality control system. Bench scale experiments, at tow speeds up to 50 cm/sec, demonstrated that process variables can be controlled to produce weavable LARC-TPI carbon fiber towpreg. The towpreg made by the dry powder process was formed into unidirectional fiber moldings and was woven and molded into preform material of good quality.
NASA Astrophysics Data System (ADS)
Presti, Giovambattista; Premarini, Claudio; Leuzzi, Martina; Di Blasi, Melina; Squatrito, Valeria
2017-11-01
The operant was conceptualized by Skinner as a class of behaviors that have a common effect on the environment and that, as a class, can be shown to vary lawfully in relation to other environmental variables, namely antecedents and consequences. Skinner himself underlined the fact that "operant field is the very field purpose of behavior". The operant offers interesting basic and applied characteristics for conceptualizing complex behavior as a recursive process of learning. In this paper we discuss how the operant concept can be applied in the implementation of software oriented to increasing cognitive skills in autistic children, and we provide an example.
Tewa-Tagne, Patrice; Degobert, Ghania; Briançon, Stéphanie; Bordes, Claire; Gauvrit, Jean-Yves; Lanteri, Pierre; Fessi, Hatem
2007-04-01
A spray-drying process was used for the development of dried polymeric nanocapsules. The purpose of this research was to investigate the effects of formulation and process variables on the resulting powder characteristics in order to optimize them. Experimental designs were used to estimate the influence of formulation parameters (nanocapsule and silica concentrations) and process variables (inlet temperature, spray-flow air, feed flow rate and drying air flow rate) on spray-dried nanocapsules when using silica as a drying auxiliary agent. The interactions among the formulation parameters and process variables were also studied. The responses analyzed for computing these effects and interactions were outlet temperature, moisture content, operation yield, particle size, and particulate density. Additional qualitative responses (particle morphology, powder behavior) were also considered. Nanocapsule and silica concentrations were the main factors influencing the yield, particulate density and particle size; they were also involved in the only significant interactions between two different variables. None of the studied variables had a major effect on the moisture content, while the interaction between nanocapsules and silica in the feed was of first interest and determinant for both the qualitative and quantitative responses. The particle morphology depended on the feed formulation but was unaffected by the process conditions. This study demonstrated that drying nanocapsules with silica as an auxiliary agent in a spray-drying process yields dried particles of micron size. The optimization of the process and formulation variables resulted in a considerable improvement of product yield while minimizing the moisture content.
Small Interactive Image Processing System (SMIPS) system description
NASA Technical Reports Server (NTRS)
Moik, J. G.
1973-01-01
The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as an interactive graphic device. The input language, in the form of character strings or attentions from keys and light pen, is interpreted and invokes built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given, and the characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.
Comparative Effects of Antihistamines on Aircrew Mission Effectiveness under Sustained Operations
1992-06-01
measures consist mainly of process measures. Process measures are measures of activities used to accomplish the mission and produce the final results...They include task completion times and response variability, and information processing rates as they relate to unique task assignment. Performance...contains process measures that assess the individual contributions of hardware/software and human components to overall system performance. Measures
The ASAC Flight Segment and Network Cost Models
NASA Technical Reports Server (NTRS)
Kaplan, Bruce J.; Lee, David A.; Retina, Nusrat; Wingrove, Earl R., III; Malone, Brett; Hall, Stephen G.; Houser, Scott A.
1997-01-01
To assist NASA in identifying research areas with the greatest potential for improving the air transportation system, two models were developed as part of its Aviation System Analysis Capability (ASAC). The ASAC Flight Segment Cost Model (FSCM) is used to predict aircraft trajectories, resource consumption, and variable operating costs for one or more flight segments. The Network Cost Model can either summarize the costs for a network of flight segments processed by the FSCM or be used to independently estimate the variable operating costs of flying a fleet of equipment, given the number of departures and average flight stage lengths.
Study and Analysis of The Robot-Operated Material Processing Systems (ROMPS)
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.
1996-01-01
This report presents the progress of a research grant funded by NASA for work performed from 1 Oct. 1994 to 30 Sep. 1995. The report deals with the development, and the investigation of the potential use, of software for data processing for the Robot Operated Material Processing System (ROMPS). It reports on the progress of data processing of calibration samples processed by ROMPS in space and on Earth. First, data were retrieved using the I/O software and manually processed using Microsoft Excel. The data retrieval and processing were then automated using a program written in C that reads the telemetry data and produces plots of the time responses of sample temperatures and other desired variables. LabVIEW was also employed to automatically retrieve and process the telemetry data.
Renard, P; Van Breusegem, V; Nguyen, M T; Naveau, H; Nyns, E J
1991-10-20
An adaptive control algorithm has been implemented on a biomethanation process to maintain propionate concentration, a key variable, at a given low value by steering the dilution rate. It was thereby expected to ensure the stability of the process during startup and during steady-state running with acceptable performance. The methane pilot reactor was operated in the completely mixed, once-through mode and computer-controlled for 161 days. The results provided real-life validation of the adaptive control algorithm and documented the expected stability and acceptable performance.
Development of a heavy duty portable variable power supply (HPVPS)
NASA Astrophysics Data System (ADS)
Musa, Ahmad Zulfadli Bin; Lung, Chong Man; Abidin, Wan'Amirah Basyarah Binti Zainol
2017-08-01
This paper covers the innovation of a Heavy Duty Portable Variable Power Supply (HPVPS) in Jabatan Kejuruteraan Elektrik (JKE), Politeknik Mukah, Sarawak (PMU). The project consists of a variable power supply whose output can be varied from 1.2 V to 11.6 V, a pure sine wave inverter to convert DC to AC for operating low-power home appliances, Li-ion rechargeable batteries to store the electrical energy, and an additional feature that can be used to jump-start a car battery. The main objective of the project is to let users operate electronic devices anywhere during lab activities, even when no mains electricity is available. Most of the regulated power supplies in the JKE labs are 9-10 years old and require costly periodic maintenance, and the number of usable units is not enough to support a whole class during lab activities. As a result, the P&P (teaching and learning) process faces major problems in running lab activities smoothly. With the development of this portable variable power supply, the P&P process is more efficient and better supported.
Operational excellence (six sigma) philosophy: Application to software quality assurance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lackner, M.
1997-11-01
This report contains viewgraphs on the operational excellence philosophy of six sigma applied to software quality assurance. The report outlines the following: the goal of six sigma; six sigma tools; manufacturing vs. administrative processes; software quality assurance document inspections; mapping the software quality assurance requirements document; failure mode effects analysis for the requirements document; measuring the right response variables; and questions.
40 CFR 63.121 - Storage vessel provisions-alternative means of emission limitation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer Operations, and... account for other emission variables such as temperature and barometric pressure, or (2) An engineering...
40 CFR 63.121 - Storage vessel provisions-alternative means of emission limitation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer Operations, and... account for other emission variables such as temperature and barometric pressure, or (2) An engineering...
40 CFR 63.121 - Storage vessel provisions-alternative means of emission limitation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer Operations, and... account for other emission variables such as temperature and barometric pressure, or (2) An engineering...
40 CFR 63.121 - Storage vessel provisions-alternative means of emission limitation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer Operations, and... account for other emission variables such as temperature and barometric pressure, or (2) An engineering...
40 CFR 63.121 - Storage vessel provisions-alternative means of emission limitation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer Operations, and... account for other emission variables such as temperature and barometric pressure, or (2) An engineering...
Research in Stochastic Processes.
1983-10-01
increases. A more detailed investigation for the exceedances themselves (rather than just the cluster centers) was undertaken, together with J. Hüsler and...J. Hüsler and M.R. Leadbetter, Compound Poisson limit theorems for high level exceedances by stationary sequences, Center for Stochastic Processes...stability by a random linear operator. C.D. Hardin, General (asymmetric) stable variables and processes. T. Hsing, J. Hüsler and M.R. Leadbetter, Compound
M-DAS: System for multispectral data analysis. [in Saginaw Bay, Michigan
NASA Technical Reports Server (NTRS)
Johnson, R. H.
1975-01-01
M-DAS is a ground data processing system designed for analysis of multispectral data. M-DAS operates on multispectral data from LANDSAT, S-192, M2S and other sources in CCT form. Interactive training by operator-investigators using a variable cursor on a color display was used to derive optimum processing coefficients and data on cluster separability. An advanced multivariate normal-maximum likelihood processing algorithm was used to produce output in various formats: color-coded film images, geometrically corrected map overlays, moving displays of scene sections, coverage tabulations and categorized CCTs. The analysis procedure for M-DAS involves three phases: (1) screening and training, (2) analysis of training data to compute performance predictions and processing coefficients, and (3) processing of multichannel input data into categorized results. Typical M-DAS applications involve iteration between each of these phases. A series of photographs of the M-DAS display are used to illustrate M-DAS operation.
Data Processing Aspects of MEDLARS
Austin, Charles J.
1964-01-01
The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files. PMID:14119287
Using wavelets to decompose the time frequency effects of monetary policy
NASA Astrophysics Data System (ADS)
Aguiar-Conraria, Luís; Azevedo, Nuno; Soares, Maria Joana
2008-05-01
Central banks have different objectives in the short and long run. Governments operate simultaneously at different timescales. Many economic processes are the result of the actions of several agents, who have different term objectives. Therefore, a macroeconomic time series is a combination of components operating on different frequencies. Several questions about economic time series are connected to the understanding of the behavior of key variables at different frequencies over time, but this type of information is difficult to uncover using pure time-domain or pure frequency-domain methods. To our knowledge, for the first time in an economic setup, we use cross-wavelet tools to show that the relation between monetary policy variables and macroeconomic variables has changed and evolved with time. These changes are not homogeneous across the different frequencies.
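The frequency-decomposition idea can be illustrated with a minimal Morlet continuous wavelet transform on a synthetic series carrying a short and a long cycle. This is a generic CWT sketch, not the authors' cross-wavelet toolbox, and the series and parameters are invented.

```python
# Sketch: minimal Morlet CWT; power concentrates at scales matching the
# periodic components of the input series.
import numpy as np

def morlet_cwt(x, scales, w0=6.0, dt=1.0):
    """Continuous wavelet transform of x with a Morlet mother wavelet."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + dt, dt)            # wavelet support
        psi = np.pi**-0.25 * np.exp(1j * w0 * t / s - (t / s)**2 / 2)
        psi *= dt / np.sqrt(s)                           # L2-style normalisation
        out[i] = np.convolve(x, np.conj(psi)[::-1], mode="same")
    return out

n = 2048
t = np.arange(n)
x = np.sin(2 * np.pi * t / 32) + np.sin(2 * np.pi * t / 128)  # short + long cycle
scales = np.arange(4, 160, 2)
W = morlet_cwt(x, scales)
power = np.abs(W).mean(axis=1)
print("dominant scale:", scales[power.argmax()])  # lands near the longer period
```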
Static and Dynamic Aeroelastic Tailoring With Variable Camber Control
NASA Technical Reports Server (NTRS)
Stanford, Bret K.
2016-01-01
This paper examines the use of a Variable Camber Continuous Trailing Edge Flap (VCCTEF) system for aeroservoelastic optimization of a transport wingbox. The quasisteady and unsteady motions of the flap system are utilized as design variables, along with patch-level structural variables, towards minimizing wingbox weight via maneuver load alleviation and active flutter suppression. The resulting system is, in general, very successful at removing structural weight in a feasible manner. Limitations to this success are imposed by including load cases where the VCCTEF system is not active (open-loop) in the optimization process, and also by including actuator operating cost constraints.
An adaptive ARX model to estimate the RUL of aluminum plates based on its crack growth
NASA Astrophysics Data System (ADS)
Barraza-Barraza, Diana; Tercero-Gómez, Víctor G.; Beruvides, Mario G.; Limón-Robles, Jorge
2017-01-01
A wide variety of condition-based maintenance (CBM) techniques deal with the problem of predicting the time to an asset fault. Most statistical approaches rely on historical failure data that might not be available in several practical situations. To address this issue, practitioners might require self-starting approaches that consider only the available knowledge about the current degradation process and the asset operating context to update the prognostic model. Some authors use autoregressive (AR) models for this purpose, which are adequate when the asset operating context is constant; however, if it is variable, the accuracy of the models can be affected. In this paper, three autoregressive models with exogenous variables (ARX) were constructed, and their capability to estimate the remaining useful life (RUL) of a process was evaluated on the aluminum crack growth problem. An existing stochastic model of aluminum crack growth was implemented and used to assess the RUL estimation performance of the proposed ARX models through extensive Monte Carlo simulations. Point and interval estimations were made based only on individual history, behavior, operating conditions and failure thresholds. Both analytic and bootstrapping techniques were used in the estimation process. Finally, by including recursive parameter estimation and a forgetting factor, the ARX methodology adapts to changing operating conditions and maintains its focus on the current degradation level of the asset.
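The adaptive ingredient named at the end, recursive parameter estimation with a forgetting factor, can be sketched as follows. The ARX(1,1) form, the numbers, and the failure threshold are illustrative assumptions, not the paper's model.

```python
# Sketch: recursive least squares (RLS) with forgetting factor lam for an
# ARX(1,1) degradation model x_k = a*x_{k-1} + b*u_{k-1}, then a naive RUL
# estimate by iterating the fitted model to a hypothetical failure threshold.
import numpy as np

lam = 0.98                       # forgetting factor (<1 discounts old data)
theta = np.zeros(2)              # parameter estimates [a, b]
P = np.eye(2) * 1e3              # parameter covariance

rng = np.random.default_rng(2)
x, u = 1.0, 0.5
for k in range(300):
    u_new = 0.5 + 0.1 * rng.standard_normal()                   # exogenous load
    x_new = 1.01 * x + 0.05 * u + 0.01 * rng.standard_normal()  # "true" growth
    phi = np.array([x, u])                                      # regressor
    e = x_new - phi @ theta                                     # prediction error
    K = P @ phi / (lam + phi @ P @ phi)                         # RLS gain
    theta += K * e
    P = (P - np.outer(K, phi @ P)) / lam                        # covariance update
    x, u = x_new, u_new

print("estimated [a, b]:", theta.round(3))   # roughly recovers [1.01, 0.05]

threshold, xk, rul = 100.0, x, 0             # hypothetical failure threshold
while xk < threshold and rul < 10_000:
    xk = theta[0] * xk + theta[1] * 0.5      # propagate with nominal input
    rul += 1
print("RUL estimate (steps):", rul)
```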
Nájera, S; Gil-Martínez, M; Zambrano, J A
2015-01-01
The aim of this paper is to establish and quantify different operational goals and control strategies in autothermal thermophilic aerobic digestion (ATAD). This technology appears as an alternative to conventional sludge digestion systems. During the batch-mode reaction, high temperatures promote sludge stabilization and pasteurization. The digester temperature is usually the only online, robust, measurable variable. The average temperature can be regulated by manipulating both the air injection and the sludge retention time. An improved performance of diverse biochemical variables can be achieved through proper manipulation of these inputs. However, a better quality of treated sludge usually implies major operating costs or a lower production rate. Thus, quality, production and cost indices are defined to quantify the outcomes of the treatment. Based on these, tradeoff control strategies are proposed and illustrated through some examples. This paper's results are relevant to guide plant operators, to design automatic control systems and to compare or evaluate the control performance on ATAD systems.
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Ratcliff, Roger; McKoon, Gail
2012-01-01
We tested two explanations for why the slope of the z-transformed receiver operating characteristic (zROC) is less than 1 in recognition memory: the unequal-variance account (target evidence is more variable than lure evidence) and the dual-process account (responding reflects both a continuous familiarity process and a threshold recollection…
Ito, Vanessa Mayumi; Batistella, César Benedito; Maciel, Maria Regina Wolf; Maciel Filho, Rubens
2007-04-01
Soybean oil deodorized distillate is a product derived from the refining process and is rich in high value-added products. The recovery of these unsaponifiable fractions is of great commercial interest because, in many cases, the "valuable products" have vitamin activity, such as tocopherols (vitamin E), or anticarcinogenic properties, such as sterols. Molecular distillation has great potential for concentrating tocopherols, as it uses very low temperatures (owing to the high vacuum) and short operating times for separation, and it does not use solvents. It can therefore be used to separate and purify thermosensitive material such as vitamins. In this work, the molecular distillation process was applied to tocopherol concentration, and response surface methodology was used to optimize free fatty acid (FFA) elimination and tocopherol concentration in the residue and distillate streams, the two products of the molecular distiller. The independent variables studied were feed flow rate (F) and evaporator temperature (T), since previous experience indicates these are the most important process variables. The experimental range was 4-12 mL/min for F and 130-200 degrees C for T. Feed flow rate and evaporator temperature proved to be important operating variables in FFA elimination. To decrease the loss of FFA in the residue stream, the operating range should be shifted towards higher evaporator temperature and lower feed flow rate; the D/F ratio increases with increasing evaporator temperature and decreasing feed flow rate. A high concentration of tocopherols was obtained in the residue stream at low feed flow rate and high evaporator temperature. These conclusions were drawn from experimental results based on experimental design.
Karichappan, Thirugnanasambandham; Venkatachalam, Sivakumar; Jeganathan, Prakash Maran
2014-01-10
Discharge of grey wastewater into the ecological system has a negative impact on receiving water bodies. In the present study, the electrocoagulation (EC) process was investigated for treating grey wastewater under different operating conditions, namely initial pH (4-8), current density (10-30 mA/cm2), electrode distance (4-6 cm) and electrolysis time (5-25 min), using a stainless steel (SS) anode in batch mode. A four-factor, five-level Box-Behnken response surface design (BBD) was employed to optimize and investigate the effect of the process variables on the responses: total solids (TS), chemical oxygen demand (COD) and fecal coliform (FC) removal. The process variables showed significant effects on the electrocoagulation treatment process. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed to describe the electrocoagulation process statistically. The optimal operating conditions were found to be an initial pH of 7, a current density of 20 mA/cm2, an electrode distance of 5 cm and an electrolysis time of 20 min. These results indicate that the EC process can be scaled up to treat grey wastewater with high removal efficiencies of TS, COD and FC.
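A hedged sketch of the response-surface step: a full second-order polynomial (intercept, linear, quadratic and two-way interaction terms) is fitted by least squares to coded factor levels. The design points and responses below are synthetic; a real BBD uses a specific run table rather than random coded points.

```python
# Sketch: second-order RSM polynomial fit on coded (-1, 0, +1) factor levels
# for pH, current density, electrode distance, electrolysis time.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
X = rng.choice([-1.0, 0.0, 1.0], size=(29, 4))     # stand-in for BBD runs
y = (80 + 5*X[:, 1] + 3*X[:, 3] - 4*X[:, 0]**2
     + 2*X[:, 1]*X[:, 3] + rng.normal(0, 1, 29))   # synthetic % COD removal

def design_matrix(X):
    cols = [np.ones(len(X))]
    cols += [X[:, j] for j in range(X.shape[1])]                     # linear
    cols += [X[:, j]**2 for j in range(X.shape[1])]                  # quadratic
    cols += [X[:, i]*X[:, j] for i, j in combinations(range(4), 2)]  # 2-way
    return np.column_stack(cols)

A = design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept, linear, quadratic, interaction coefficients:")
print(beta.round(2))
```

Setting the gradient of the fitted polynomial to zero (subject to the coded bounds) is what yields optimal settings such as the pH 7 / 20 mA/cm2 point reported above.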
NASA Astrophysics Data System (ADS)
Chung, T. W.; Chen, C. K.; Hsu, S. H.
2017-11-01
Protein concentration using filter membranes has a significant energy-saving advantage over traditional drying processes. However, fouling over a large membrane area and frequent membrane cleaning increase the energy consumption and operating cost of a membrane-based protein concentration process. In this study, membrane filtration for protein concentration was conducted and compared with recent protein concentration technology, and the operating factors of the process were analyzed. The separation mechanism of membrane filtration rests on the size difference between the membrane pores and the particles of the filtered material. Darcy's law was applied to discuss the interaction of flux, TMP (transmembrane pressure) and resistance. The effects of membrane pore size, pH value and TMP on the steady-state flux (Jst) and protein rejection (R) were studied. It was observed that Jst increases with decreasing membrane pore size, Jst increases with increasing TMP, and R increases with decreasing solution pH value. Compared to the other variables, the pH value is the most significant variable for the separation of protein from water.
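The Darcy's-law relation invoked here is commonly written with resistances in series. The decomposition below into membrane, cake and fouling resistances is the standard textbook form and is our assumption, since the abstract does not spell out the resistance terms:

```latex
J = \frac{\Delta P_{TM}}{\mu \, (R_m + R_c + R_f)}
```

where J is the permeate flux, \Delta P_{TM} the transmembrane pressure, \mu the permeate viscosity, and R_m, R_c, R_f the membrane, cake and fouling resistances. The reported trends (Jst rising with TMP, falling as total resistance grows) follow directly from this form.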
Separation of plastics: The importance of kinetics knowledge in the evaluation of froth flotation.
Censori, Matteo; La Marca, Floriana; Carvalho, M Teresa
2016-08-01
Froth flotation is a promising technique to separate polymers of similar density. The present paper shows the need for performing kinetic tests to evaluate and optimize the process. In the experimental study, batch flotation tests were performed on samples of ABS and PS, with the floated product collected at increasing flotation times. Two variables were selected for modification: the concentration of the depressor (tannic acid) and the airflow rate. The former is associated with the chemistry of the process and the latter with the transport of particles. It was shown that, like mineral flotation, plastics flotation can be adequately modeled as a first-order rate process. The results of the kinetic tests showed that the kinetic parameters change with the operating conditions. When the depressing action is weak and the airflow rate is low, the kinetics are fast. Otherwise, the kinetics are slow and a variable percentage of the plastics never floats. Concomitantly, the time at which the maximum difference in the recovery of the plastics in the floated product is attained changes with the operating conditions. The prediction of flotation results, process evaluation and comparisons should therefore be done with the process kinetics in mind. Copyright © 2016 Elsevier Ltd. All rights reserved.
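The first-order rate model referred to above is usually written with an ultimate recovery term, which also accommodates the fraction of plastics that never floats; the notation below is ours, not the paper's:

```latex
R(t) = R_{\infty}\left(1 - e^{-kt}\right)
```

where R(t) is the cumulative recovery at flotation time t, k is the first-order rate constant, and an ultimate recovery R_\infty below 100% captures the non-floatable fraction observed under strong depression and high airflow. Fitting k and R_\infty to the timed recovery data is the kinetic test the authors argue for.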
Process concept of retorting of Julia Creek oil shale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitnai, O.
1984-06-01
A process is proposed for the above-ground retorting of the Julia Creek oil shale in Queensland. The oil shale characteristics, process description, chemical reactions of the oil shale components, and the effects of process variables and operating conditions on process performance are discussed. The process contains a fluidized bed combustor which performs both as a combustor of the spent shale and as a heat carrier generator for the pyrolysis step. 12 references, 5 figures, 5 tables.
Computer simulation of a single pilot flying a modern high-performance helicopter
NASA Technical Reports Server (NTRS)
Zipf, Mark E.; Vogt, William G.; Mickle, Marlin H.; Hoelzeman, Ronald G.; Kai, Fei; Mihaloew, James R.
1988-01-01
Presented is a computer simulation of a human response pilot model able to execute operational flight maneuvers and vehicle stabilization of a modern high-performance helicopter. Low-order, single-variable, human response mechanisms, integrated to form a multivariable pilot structure, provide a comprehensive operational control over the vehicle. Evaluations of the integrated pilot were performed by direct insertion into a nonlinear, total-force simulation environment provided by NASA Lewis. Comparisons between the integrated pilot structure and single-variable pilot mechanisms are presented. Static and dynamically alterable configurations of the pilot structure are introduced to simulate pilot activities during vehicle maneuvers. These configurations, in conjunction with higher level, decision-making processes, are considered for use where guidance and navigational procedures, operational mode transfers, and resource sharing are required.
NASA Astrophysics Data System (ADS)
Lucifredi, A.; Mazzieri, C.; Rossi, M.
2000-05-01
Since the operational conditions of a hydroelectric unit can vary within a wide range, the monitoring system must be able to distinguish between variations of the monitored variable caused by changes in operating conditions and those due to the onset and progression of failures and malfunctions. The paper aims to identify the best technique to adopt for the monitoring system. Three different methods have been implemented and compared. Two of them use statistical techniques: the first, linear multiple regression, expresses the monitored variable as a linear function of the process parameters (independent variables), while the second, the dynamic kriging technique, is a modified multiple linear regression representing the monitored variable as a linear combination of the process variables in such a way as to minimize the variance of the estimation error. The third is based on neural networks. Tests have shown that the monitoring system based on the kriging technique is not affected by some problems common to the other two models: the requirement of a large amount of data for tuning (both for training the neural network and for defining the optimum plane for the multiple regression), not only in the system start-up phase but also after a trivial maintenance operation involving the substitution of machinery components with a direct impact on the observed variable; and the need for different models to describe satisfactorily the different operating ranges of the plant. The monitoring system based on the kriging statistical technique overcomes these difficulties: it does not require a large amount of data to be tuned and is immediately operational (given two points, the third can be immediately estimated), and the model follows the system without adapting itself to it. The results of the experimentation performed indicate that a model based on a neural network or on linear multiple regression is not optimal, and that a different approach is necessary to reduce the amount of work during the learning phase, using, when available, all the information stored during the initial phase of the plant to build the reference baseline and elaborating, where appropriate, the available raw information. A mixed approach using the kriging statistical technique together with neural network techniques could optimise the result.
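A kriging-style baseline can be sketched with Gaussian process regression, which, like kriging, returns both a prediction and an uncertainty from few points. This is a generic stand-in (synthetic data, scikit-learn's GPR rather than the authors' dynamic kriging) illustrating the monitoring idea of alarming only on deviations the operating condition cannot explain.

```python
# Sketch: condition-dependent baseline for a monitored variable, with a
# prediction band used to separate process changes from incipient faults.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(40, 2))            # e.g. load, head (normalised)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, 40)  # vibration

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

x_new = np.array([[0.4, 0.7]])                 # current operating condition
mu, sd = gpr.predict(x_new, return_std=True)
print(f"expected value {mu[0]:.3f} +/- {2 * sd[0]:.3f}")
# Raise an alarm only if the measured value falls outside the band, i.e. the
# deviation cannot be explained by the operating condition alone.
```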
Radiation Belt Storm Probes: Resolving Fundamental Physics with Practical Consequences
NASA Technical Reports Server (NTRS)
Ukhorskiy, Aleksandr Y.; Mauk, Barry H.; Fox, Nicola J.; Sibeck, David G.; Grebowsky, Joseph M.
2011-01-01
The fundamental processes that energize, transport, and cause the loss of charged particles operate throughout the universe at locations as diverse as magnetized planets, the solar wind, our Sun, and other stars. The same processes operate within our immediate environment, the Earth's radiation belts. The Radiation Belt Storm Probes (RBSP) mission will provide coordinated two-spacecraft observations to obtain understanding of these fundamental processes controlling the dynamic variability of the near-Earth radiation environment. In this paper we discuss some of the profound mysteries of the radiation belt physics that will be addressed by RBSP and briefly describe the mission and its goals.
Reconfigurable pipelined processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saccardi, R.J.
1989-09-19
This patent describes a reconfigurable pipelined processor for processing data. It comprises: a plurality of memory devices for storing bits of data; a plurality of arithmetic units for performing arithmetic functions with the data; cross bar means for connecting the memory devices with the arithmetic units for transferring data therebetween; at least one counter connected with the cross bar means for providing a source of addresses to the memory devices; at least one variable tick delay device connected with each of the memory devices and arithmetic units; and means for providing control bits to the variable tick delay device for variably controlling the input and output operations thereof to selectively delay the memory devices and arithmetic units to align the data for processing in a selected sequence.
Application of State Analysis and Goal-based Operations to a MER Mission Scenario
NASA Technical Reports Server (NTRS)
Morris, John Richard; Ingham, Michel D.; Mishkin, Andrew H.; Rasmussen, Robert D.; Starbird, Thomas W.
2006-01-01
State Analysis is a model-based systems engineering methodology employing a rigorous discovery process which articulates operations concepts and operability needs as an integrated part of system design. The process produces requirements on system and software design in the form of explicit models which describe the system behavior in terms of state variables and the relationships among them. By applying State Analysis to an actual MER flight mission scenario, this study addresses the specific real world challenges of complex space operations and explores technologies that can be brought to bear on future missions. The paper first describes the tools currently used on a daily basis for MER operations planning and provides an in-depth description of the planning process, in the context of a Martian day's worth of rover engineering activities, resource modeling, flight rules, science observations, and more. It then describes how State Analysis allows for the specification of a corresponding goal-based sequence that accomplishes the same objectives, with several important additional benefits.
Selecting an oxygen plant for a copper smelter modernization
NASA Astrophysics Data System (ADS)
Larson, Kenneth H.; Hutchison, Robert L.
1994-10-01
The selection of an oxygen plant for the Cyprus Miami smelter modernization project began with a good definition of the use requirements and the smelter process variables that can affect oxygen demand. To achieve a reliable supply of oxygen with a reasonable amount of capital, critical equipment items were reviewed and reliability was added through the use of installed spares, the purchase of insurance spare parts, or the installation of equipment designed for 50 percent of production capacity, such that the plant could operate with one unit while the other unit is being maintained. The operating range of the plant was selected to cover variability in smelter oxygen demand, recognizing that the broader operating range sacrificed about two to three percent in plant power consumption. Careful consideration of the plant "design point" was important to both the capital and operating costs of the plant, and a design point was specified that allowed a broad range of operation for maximum flexibility.
A Database Approach for Predicting and Monitoring Baked Anode Properties
NASA Astrophysics Data System (ADS)
Lauzon-Gauthier, Julien; Duchesne, Carl; Tessier, Jayson
2012-11-01
The baked anode quality control strategy currently used by most carbon plants based on testing anode core samples in the laboratory is inadequate for facing increased raw material variability. The low core sampling rate limited by lab capacity and the common practice of reporting averaged properties based on some anode population mask a significant amount of individual anode variability. In addition, lab results are typically available a few weeks after production and the anodes are often already set in the reduction cells preventing early remedial actions when necessary. A database approach is proposed in this work to develop a soft-sensor for predicting individual baked anode properties at the end of baking cycle. A large historical database including raw material properties, process operating parameters and anode core data was collected from a modern Alcoa plant. A multivariate latent variable PLS regression method was used for analyzing the large database and building the soft-sensor model. It is shown that the general low frequency trends in most anode physical and mechanical properties driven by raw material changes are very well captured by the model. Improvements in the data infrastructure (instrumentation, sampling frequency and location) will be necessary for predicting higher frequency variations in individual baked anode properties. This paper also demonstrates how multivariate latent variable models can be interpreted against process knowledge and used for real-time process monitoring of carbon plants, and detection of faults and abnormal operation.
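A minimal sketch of such a latent-variable soft-sensor, assuming synthetic data: the input block stands in for raw-material properties and process settings, and the property names are illustrative stand-ins for the anode core measurements.

```python
# Sketch: PLS regression soft-sensor mapping process/raw-material data (X)
# to several baked-anode properties (Y), with cross-validated predictions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 20))        # coke/pitch properties, process settings
B = rng.normal(size=(20, 3))
Y = X @ B + rng.normal(0, 2.0, size=(300, 3))   # e.g. density, strength, resistivity

pls = PLSRegression(n_components=4)   # latent-variable count chosen by CV in practice
Y_hat = cross_val_predict(pls, X, Y, cv=10)
for j, name in enumerate(["density", "strength", "resistivity"]):
    r = np.corrcoef(Y[:, j], Y_hat[:, j])[0, 1]
    print(f"{name}: CV correlation = {r:.2f}")
```

The same latent scores can be charted over time for multivariate monitoring, which is how such models double as fault-detection tools.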
Influence of operating conditions on the air gasification of dry refinery sludge in updraft gasifier
NASA Astrophysics Data System (ADS)
Ahmed, R.; Sinnathambi, C. M.
2013-06-01
In the present work, details of the equilibrium modeling of dry refinery sludge (DRS) gasification in an updraft gasifier are presented using the ASPEN PLUS simulator. Because of the lack of information in the open literature on refinery sludge gasification in updraft gasifiers, an evaluation of the optimum gasification conditions is presented in this paper. For this purpose, a Taguchi orthogonal array design from statistical software is applied to find the optimum conditions for DRS gasification. The goal is to identify the most significant process variables in DRS gasification; the variables, including oxidation zone temperature, equivalence ratio and operating pressure, were simulated and examined. Attention was focused on the effect of the operating conditions on the gas composition of H2 and CO (desirable) and CO2 (undesirable) in terms of mass fraction. From our results and findings it can be concluded that the syngas (H2 and CO) yield, in terms of mass fraction, favors a high oxidation zone temperature and atmospheric pressure, while the CO2 acid gas is favored at a high equivalence ratio and air flow rate, tending towards complete combustion.
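The Taguchi analysis can be sketched as a main-effects calculation on an L9 orthogonal array. The factor names follow the abstract, but the response values below are invented for illustration.

```python
# Sketch: main-effect (range) analysis on an L9(3^3) orthogonal array for
# three 3-level factors; the factor with the widest range of level means
# is taken as the most significant.
import numpy as np

# Each row is a run; entries are factor levels 0, 1, 2. Every pair of
# columns contains all nine level combinations exactly once (orthogonality).
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])
y = np.array([0.18, 0.22, 0.25, 0.24, 0.28, 0.20,
              0.30, 0.23, 0.26])                 # H2+CO mass fraction (made up)

factors = ["T_oxidation", "equivalence_ratio", "pressure"]
for f in range(3):
    means = [y[L9[:, f] == lev].mean() for lev in range(3)]
    print(f"{factors[f]}: level means {np.round(means, 3)}, "
          f"range {max(means) - min(means):.3f}")
```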
Bench-Scale Process for Low-Cost Carbon Dioxide (CO2) Capture Using a Phase-Changing Absorbent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westendorf, Tiffany; Caraher, Joel; Chen, Wei
2015-03-31
The objective of this project is to design and build a bench-scale process for a novel phase-changing aminosilicone-based CO2-capture solvent. The project will establish scalability and technical and economic feasibility of using a phase-changing CO2-capture absorbent for post-combustion capture of CO2 from coal-fired power plants with 90% capture efficiency and 95% CO2 purity at a cost of $40/tonne of CO2 captured by 2025 and a cost of <$10/tonne of CO2 captured by 2035. In the first budget period of this project, the bench-scale phase-changing CO2 capture process was designed using data and operating experience generated under a previous project (ARPA-E project DE-AR0000084). Sizing and specification of all major unit operations was completed, including detailed process and instrumentation diagrams. The system was designed to operate over a wide range of operating conditions to allow for exploration of the effect of process variables on CO2 capture performance.
Behavioral Variability, Learning Processes, and Creativity
1990-09-01
Nursery Schools. - 9-10 y.o. subjects, at the concrete operative stage and coming from Primary Schools. - 14-15 y.o. subjects, at the formal thought...stage and coming from General Secondary Schools (no Technical School subject has been considered). - Adults, students at the University. Cognitive...classifications combine in a single situation the operations of seriation and of classification, as approached in the classical Piaget's procedures
2016-12-02
Quantum Computing, University of Waterloo, Waterloo ON, N2L 3G1, Canada (Dated: December 1, 2016). Continuous variable (CV) quantum key distribution (QKD...Networking with QUantum operationally-Secure Technology for Maritime Deployment (CONQUEST). Contract Period of Performance: 2 September 2016 – 1 September... Raytheon BBN Technologies, Kathryn Carson, Program Manager, Quantum Information Processing
Multidisciplinary design of a rocket-based combined cycle SSTO launch vehicle using Taguchi methods
NASA Technical Reports Server (NTRS)
Olds, John R.; Walberg, Gerald D.
1993-01-01
Results are presented from the optimization process of a winged-cone configuration SSTO launch vehicle that employs a rocket-based ejector/ramjet/scramjet/rocket operational mode variable-cycle engine. The Taguchi multidisciplinary parametric-design method was used to evaluate the effects of simultaneously changing a total of eight design variables, rather than changing them one at a time as in conventional tradeoff studies. A combination of design variables was in this way identified which yields very attractive vehicle dry and gross weights.
ORES - Objective Referenced Evaluation in Science.
ERIC Educational Resources Information Center
Shaw, Terry
Science process skills considered important in making decisions and solving problems include: observing, classifying, measuring, using numbers, using space/time relationships, communicating, predicting, inferring, manipulating variables, making operational definitions, forming hypotheses, interpreting data, and experimenting. This 60-item test,…
Continuous variables logic via coupled automata using a DNAzyme cascade with feedback.
Lilienthal, S; Klein, M; Orbach, R; Willner, I; Remacle, F; Levine, R D
2017-03-01
The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process, while the logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series.
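The AND-as-bimolecular-process correspondence and the third-order power series can be checked numerically. The toy integration below chains three A+B -> C steps with inputs held in excess and recovers cubic early-time growth of the final output; it is a generic kinetics sketch, not the authors' DNAzyme model.

```python
# Sketch: three cascaded bimolecular AND gates, forward-Euler integration.
# With inputs in excess, c1 ~ t, c2 ~ t^2/2, c3 ~ t^3/6 at early times.
import numpy as np

k = 1.0
a = np.array([1.0, 1.0, 1.0])    # gate inputs held in excess (approximation)
c = np.zeros(3)                  # products of gates 1..3
dt, T = 1e-3, 0.1
ts, out = [], []
for step in range(int(T / dt)):
    rate1 = k * a[0] * a[1]      # gate 1: AND(a0, a1), a bimolecular step
    rate2 = k * c[0] * a[2]      # gate 2: AND(c1, a2)
    rate3 = k * c[1] * a[0]      # gate 3: AND(c2, a0)
    c += dt * np.array([rate1, rate2, rate3])
    ts.append((step + 1) * dt)
    out.append(c[2])

t, o = np.array(ts), np.array(out)
slope = np.polyfit(np.log(t[10:]), np.log(o[10:]), 1)[0]
print(f"early-time log-log slope of the output: {slope:.2f} (cubic, third order)")
```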
Kronos Observatory Operations Challenges in a Lean Environment
NASA Astrophysics Data System (ADS)
Koratkar, Anuradha; Peterson, Bradley M.; Polidan, Ronald S.
2003-02-01
Kronos is a multiwavelength observatory designed to map the accretion disks and environments of supermassive black holes in various settings using the natural intrinsic variability of accretion-driven sources. Kronos is envisaged as a Medium Explorer mission to the NASA Office of Space Science under the Structure and Evolution of the Universe theme. We will achieve the Kronos science objectives by developing cost-effective techniques for obtaining and assimilating data from the research spacecraft and for its subsequent treatment on the ground. The science operations assumptions for the mission are: (1) the need for flexible scheduling due to the variable nature of the targets, (2) large data volumes but minimal ground station contact, and (3) a very small operations staff. Our first assumption implies that we will have to consider an effective strategy to dynamically reprioritize the observing schedule to maximize science data acquisition. The flexibility we seek greatly increases the science return of the mission, because variability events can be properly captured. Our second assumption implies that we will have to develop basic on-board analysis strategies to determine which data get downloaded. The small size of the operations staff implies that we need to "automate" as many routine processes of science operations as possible. In this paper we discuss the various solutions that we are considering to optimize our operations and maximize the science returns of the observatory.
CONFIG: Qualitative simulation tool for analyzing behavior of engineering devices
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.; Harris, Richard A.
1987-01-01
To design failure management expert systems, engineers mentally analyze the effects of failures and procedures as they propagate through device configurations. CONFIG is a generic device modeling tool for use in discrete event simulation, to support such analyses. CONFIG permits graphical modeling of device configurations and qualitative specification of local operating modes of device components. Computation requirements are reduced by focussing the level of component description on operating modes and failure modes, and specifying qualitative ranges of variables relative to mode transition boundaries. Simulation processing occurs only when modes change or variables cross qualitative boundaries. Device models are built graphically, using components from libraries. Components are connected at ports by graphical relations that define data flow. The core of a component model is its state transition diagram, which specifies modes of operation and transitions among them.
Vendramel, Simone; Dezotti, Marcia; Sant'Anna, Geraldo L
2011-01-01
Nitrification of wastewaters from chemical industries can pose challenges due to the presence of inhibitory compounds. Some wastewaters, besides their organic complexity, present variable levels of salt concentration. In order to investigate the effect of salt (NaCl) content on the nitrification of a conventionally biologically treated industrial wastewater, a bench-scale moving-bed biofilm reactor was operated in sequencing batch mode. The wastewater, with a chloride content of 0.05 g l(-1), was supplemented with NaCl up to 12 g Cl(-) l(-1). The reactor operating cycle was: filling (5 min), aeration (12 or 24 h), settling (5 min) and drawing (5 min). Each experimental run was conducted for 3 to 6 months to address problems related to the inherent wastewater variability and process stabilization. A PLC system assured automatic operation and control of the pertinent process variables. Data obtained from selected batch experiments were fitted by a kinetic model that considered ammonia, nitrite and nitrate variations. The average performance results indicated that nitrification efficiency was not influenced by chloride content in the range of 0.05 to 6 g Cl(-) l(-1) and remained around 90%. When the chloride content was 12 g Cl(-) l(-1), a significant drop in nitrification efficiency was observed, even when operating with a reaction period of 24 h. A negative effect of the wastewater organic matter content on nitrification efficiency was also observed, probably caused by the growth of heterotrophs to the detriment of autotrophs and by nitrification inhibition by residual chemicals.
Cardiac surgery productivity and throughput improvements.
Lehtonen, Juha-Matti; Kujala, Jaakko; Kouri, Juhani; Hippeläinen, Mikko
2007-01-01
The high variability in cardiac surgery length is one of the main challenges for staff managing productivity. This study aims to evaluate the impact of six interventions on open-heart surgery operating theatre productivity. A discrete operating theatre event simulation model, with empirical operation time input data from 2603 patients, is used to evaluate the effect of these process interventions on surgery output and overtime work. A linear regression model was used to obtain operation time forecasts for surgery scheduling; it can also be used to explain operation time. A forecasting model based on linear regression of variables available before surgery explains 46 per cent of the variance in operating time. The main factors influencing operation length were the type of operation, redoing the operation and the head surgeon. Reducing the changeover time between surgeries, by inducing anaesthesia outside the operating theatre and by reducing slack time at the end of the day after a second surgery, has the strongest effect on surgery output and productivity. A more accurate operation time forecast did not have any effect on output, although the improved forecast did decrease overtime work. A reduction in the operation time itself is not studied in this article. However, the forecasting model can also be applied to discover which factors are most significant in explaining variation in the length of open-heart surgery. The challenge of scheduling two open-heart surgeries in one day can be partly resolved by increasing the length of the day, decreasing the time between two surgeries, or improving patient scheduling procedures so that two short surgeries can be paired. A linear regression model is created in the paper to increase the accuracy of operation time forecasting and to identify the factors that have the most influence on operation time. A simulation model is used to analyse the impact of improved surgical length forecasting and five selected process interventions on productivity in cardiac surgery.
Wang, Hongguang
2018-01-01
Annual power load forecasting is not only the premise of formulating reasonable macro power planning, but also an important guarantee for the safe and economic operation of a power system. Given the characteristics of annual power load forecasting, the GM(1,1) grey model is widely applied. Introducing a buffer operator into GM(1,1) to pre-process the historical annual power load data is one approach to improving forecasting accuracy. To solve the problem of the non-adjustable action intensity of the traditional weakening buffer operator, a variable-weight weakening buffer operator (VWWBO) and background value optimization (BVO) are used to dynamically pre-process the historical annual power load data, and a VWWBO-BVO-based GM(1,1) is proposed. To find the optimal values of the variable-weight buffer coefficient and the background value weight generating coefficient of the proposed model, grey relational analysis (GRA) and an improved gravitational search algorithm (IGSA) are integrated into a GRA-IGSA integration algorithm, which aims to maximize the grey relational grade between the simulated and actual value sequences. Through the adjustable action intensity of the buffer operator, the proposed model optimized by the GRA-IGSA integration algorithm obtains better forecasting accuracy, as demonstrated by the case studies, and can provide an optimized solution for annual power load forecasting. PMID:29768450
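For reference, the base GM(1,1) model that the proposed operators modify can be sketched as below. The load series is illustrative, and the VWWBO and background-value optimisation steps are omitted; this shows only the model they act on.

```python
# Sketch: plain GM(1,1). Accumulate the series (1-AGO), fit the grey
# differential equation x0(k) + a*z1(k) = b by least squares, then restore
# forecasts by inverse accumulation (IAGO).
import numpy as np

x0 = np.array([2.8, 3.1, 3.3, 3.7, 4.0, 4.4])   # annual load (illustrative)
x1 = np.cumsum(x0)                               # 1-AGO sequence
z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values

B = np.column_stack([-z1, np.ones(len(z1))])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # develop / grey input coeffs

def x1_hat(k):
    return (x0[0] - b / a) * np.exp(-a * k) + b / a

def forecast(k):                                 # k = 0 is the first year
    return x0[0] if k == 0 else x1_hat(k) - x1_hat(k - 1)

print("fit:", [round(forecast(k), 2) for k in range(len(x0))])
print("next year:", round(forecast(len(x0)), 2))
```

A weakening buffer operator would transform x0 before this fit; the paper's contribution is making the strength of that transformation tunable and optimising it jointly with the background-value weight.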
AOIPS 3 user's guide. Volume 2: Program descriptions
NASA Technical Reports Server (NTRS)
Schotz, Steve S.; Piper, Thomas S.; Negri, Andrew J.
1990-01-01
The Atmospheric and Oceanographic Information Processing System (AOIPS) 3 is the version of the AOIPS software as of April 1989. The AOIPS software was developed jointly by the Goddard Space Flight Center and General Sciences Corporation. A detailed description of every AOIPS program is presented; it is intended to serve as a reference for such items as program functionality, program operational instructions, and input/output variable descriptions. Program descriptions are derived from the on-line help information. Each program description is divided into two sections. The functional description section describes the purpose of the program and contains any pertinent operational information. The program description section lists the program variables as they appear on-line and describes them in detail.
Devices and Systems for Nonlinear Optical Information Processing
1988-11-01
in the VLSI literature [7, 8, 9], in which basic physical principles have been invoked to both understand current VLSI performance and to project...the first time, that in fact accounts for a very wide range of observed but previously unexplained phenomena [Appendix 4; AFOSR Jour. Publ. 7, AFOSR...the variable grating mode liquid crystal device A. R. Tanguay, Jr. Abstract. The physical principles of operation of the variable grating mode C. S. Wu
Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard
2017-01-01
The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.
Applying lessons from commercial aviation safety and operations to resuscitation.
Ornato, Joseph P; Peberdy, Mary Ann
2014-02-01
Both commercial aviation and resuscitation are complex activities in which team members must respond to unexpected emergencies in a consistent, high quality manner. Lives are at stake in both activities and the two disciplines have similar leadership structures, standard setting processes, training methods, and operational tools. Commercial aviation crews operate with remarkable consistency and safety, while resuscitation team performance and outcomes are highly variable. This commentary provides the perspective of two physician-pilots showing how commercial aviation training, operations, and safety principles can be adapted to resuscitation team training and performance. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Operations planning simulation: Model study
NASA Technical Reports Server (NTRS)
1974-01-01
The use of simulation modeling for the identification of system sensitivities to internal and external forces and variables is discussed. The technique provides a means of exploring alternate system procedures and processes, so that these alternatives may be considered on a mutually comparative basis, permitting the selection of a mode or modes of operation which have potential advantages to the system user and the operator. These advantages are measures of system efficiency: (1) the ability to meet specific schedules for operations, mission, or mission readiness requirements or performance standards; and (2) the ability to accomplish the objectives within cost-effective limits.
NASA Astrophysics Data System (ADS)
García-Díaz, J. Carlos
2009-11-01
Fault detection and diagnosis is an important problem in process engineering. Process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is an important problem in continuous hot dip galvanizing, and the increasingly stringent quality requirements of the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures, and bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two sets: a training set of 25 conforming coils and a second set of 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications, the dependent variable is binary. The results show that logistic generalized linear models provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
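As a rough illustration of the modeling approach described above, the sketch below fits a logistic regression to synthetic data; the variable layout, data, and labels are invented for illustration and are not the study's results.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 48                                    # coils, mirroring the data set size above
X = rng.normal(size=(n, 6))               # strip velocity, four bath temperatures, bath level (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = conforming (synthetic label)

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:3])[:, 1])   # estimated probability that a coil is conforming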
Song, Ji-Yeon; Oh, Donghoon; Lee, Chang-Ha
2015-07-17
The effects of a malfunctional column on the performance of a simulated moving bed (SMB) process were studied experimentally and theoretically. The experimental results of conventional four-zone SMB (2-2-2-2 configuration) and FeedCol operation (2-2-2-2 configuration with one feed column) with one malfunctional column were compared with simulation results of the corresponding SMB processes with a normal column configuration. The malfunctional column in SMB processes significantly deteriorated raffinate purity. However, the extract purity was equivalent or slightly improved compared with the corresponding normal SMB operation because the complete separation zone of the malfunctional column moved to a lower flow rate range in zones II and III. With the malfunctional column configuration, FeedCol operation gave better experimental performance (up to 7%) than conventional SMB operation because controlling product purity with FeedCol operation was more flexible through the use of two additional operating variables, injection time and injection length. Thus, compared with conventional SMB separation, extract with equivalent or slightly better purity could be produced from FeedCol operation even with a malfunctional column, while minimizing the decrease in raffinate purity (less than 2%). Copyright © 2015 Elsevier B.V. All rights reserved.
2014-01-01
Background: Discharge of grey wastewater into ecological systems has a negative impact on receiving water bodies. Methods: In the present study, the electrocoagulation (EC) process was investigated for treating grey wastewater under different operating conditions, such as initial pH (4–8), current density (10–30 mA/cm2), electrode distance (4–6 cm) and electrolysis time (5–25 min), using a stainless steel (SS) anode in batch mode. A four-factor, five-level Box-Behnken response surface design (BBD) was employed to optimize and investigate the effect of the process variables on the responses: total solids (TS), chemical oxygen demand (COD) and fecal coliform (FC) removal. Results: The process variables showed a significant effect on the electrocoagulation treatment process. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed to study the electrocoagulation process statistically. The optimal operating conditions were found to be an initial pH of 7, a current density of 20 mA/cm2, an electrode distance of 5 cm and an electrolysis time of 20 min. Conclusion: These results indicate that the EC process can be scaled up to treat grey wastewater with high removal efficiencies for TS, COD and FC. PMID:24410752
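For intuition, a second-order (quadratic) response-surface model of the kind referenced above can be fit by ordinary least squares; the sketch below uses two factors and synthetic data, so the ranges and coefficients are assumptions, not the study's.

import numpy as np

rng = np.random.default_rng(1)
pH = rng.uniform(4, 8, 30)
cd = rng.uniform(10, 30, 30)                       # current density, mA/cm2
y = 80 - 2 * (pH - 7)**2 - 0.05 * (cd - 20)**2 + rng.normal(0, 1, 30)  # synthetic % removal

# Design matrix for y = b0 + b1*pH + b2*cd + b3*pH^2 + b4*cd^2 + b5*pH*cd
A = np.column_stack([np.ones_like(pH), pH, cd, pH**2, cd**2, pH * cd])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)                                        # fitted second-order coefficients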
Valorisation of waste tyre by pyrolysis in a moving bed reactor.
Aylón, E; Fernández-Colino, A; Murillo, R; Navarro, M V; García, T; Mastral, A M
2010-07-01
The aim of this work is to assess the behaviour of a moving bed reactor, based on a screw transporter design, in waste tyre pyrolysis under several experimental conditions. Waste tyres represent a significant problem in developed countries, and it is necessary to develop new technology that can easily process large amounts of this potential raw material. In this work, the influence of the main pyrolysis process variables (temperature, solid residence time, mass flow rate and inert gas flow) has been studied by a thorough analysis of product yields and properties. It has been found that, regardless of the process operational parameters, total waste tyre devolatilisation is achieved, producing a pyrolytic carbon black with a volatile matter content under 5 wt.%. In addition, it has been shown that, in the range studied, the most influential process variables are temperature and solid mass flow rate, mainly because both variables modify the gas residence time inside the reactor. The modification of these variables also affects the chemical properties of the products, which is mainly associated with the different cracking reactions of the primary pyrolysis products. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Hospital cost structure in the USA: what's behind the costs? A business case.
Chandra, Charu; Kumar, Sameer; Ghildayal, Neha S
2011-01-01
Hospital costs in the USA are a large part of the national GDP. Medical billing and supplies processes are significant and growing contributors to hospital operations costs in the USA. This article aims to identify cost drivers associated with these processes and to suggest improvements to reduce hospital costs. A Monte Carlo simulation model built with @Risk software facilitates cost analysis and captures the variability associated with the medical billing process (administrative) and the medical supplies process (variable). The model produces estimated savings for implementing new processes. Significant waste exists across the entire medical supply process that needs to be eliminated; implementing the improved process has the potential to save US hospitals several billion dollars annually. The other analysis in this study relates to hospital billing processes; increased spending on hospital billing processes is not entirely due to hospital inefficiency. The study lacks concrete data for accurately measuring cost savings, but there is clearly room for improvement in the two US healthcare processes. This article only looks at two specific costs, associated with the medical supply and medical billing processes, respectively. This study raises awareness of escalating US hospital expenditures. Cost categories, namely fixed, variable and administrative, are presented to identify the greatest areas for improvement. The study will be valuable to US Congress policy makers and US healthcare industry decision makers. The medical billing process is part of a hospital's administrative costs, and hospital supplies management is part of its variable costs; these are the two major cost drivers of US hospitals' expenditures examined and analyzed here.
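A Monte Carlo cost comparison of the sort described above can be sketched in a few lines; the distributions and parameters below are invented for illustration (the study's own @Risk inputs are not reproduced here).

import numpy as np

rng = np.random.default_rng(42)
trials = 100_000
cost_current = rng.triangular(80, 100, 140, trials)    # per-claim billing cost, current process (assumed)
cost_improved = rng.triangular(60, 75, 100, trials)    # per-claim billing cost, improved process (assumed)
savings = cost_current - cost_improved
print(f"mean saving per claim: {savings.mean():.2f}")
print("5th-95th percentile:", np.percentile(savings, [5, 95]))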
Defining process design space for monoclonal antibody cell culture.
Abu-Absi, Susan Fugett; Yang, LiYing; Thompson, Patrick; Jiang, Canping; Kandula, Sunitha; Schilling, Bernhard; Shukla, Abhinav A
2010-08-15
The concept of design space has been taking root as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. During mapping of the process design space, the multidimensional combination of operational variables is studied to quantify the impact on process performance in terms of productivity and product quality. An efficient methodology to map the design space for a monoclonal antibody cell culture process is described. A failure modes and effects analysis (FMEA) was used as the basis for the process characterization exercise. This was followed by an integrated study of the inoculum stage of the process which includes progressive shake flask and seed bioreactor steps. The operating conditions for the seed bioreactor were studied in an integrated fashion with the production bioreactor using a two stage design of experiments (DOE) methodology to enable optimization of operating conditions. A two level Resolution IV design was followed by a central composite design (CCD). These experiments enabled identification of the edge of failure and classification of the operational parameters as non-key, key or critical. In addition, the models generated from the data provide further insight into balancing productivity of the cell culture process with product quality considerations. Finally, process and product-related impurity clearance was evaluated by studies linking the upstream process with downstream purification. Production bioreactor parameters that directly influence antibody charge variants and glycosylation in CHO systems were identified.
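As a toy illustration of the staged DOE workflow described above (a screening design plus center points, ahead of a central composite design), the snippet below generates coded factor settings; the factor names are hypothetical, and a real characterization study would typically use a fractional (e.g., Resolution IV) design rather than the full factorial shown.

from itertools import product

factors = ["pH", "temperature", "DO", "seed_density"]     # hypothetical bioreactor factors
two_level = list(product([-1, +1], repeat=len(factors)))  # full 2^4 factorial in coded units
center_points = [(0,) * len(factors)] * 3                 # replicated center points
for run in two_level + center_points:
    print(dict(zip(factors, run)))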
Karamitsos, Theodoros D; Hudsmith, Lucy E; Selvanayagam, Joseph B; Neubauer, Stefan; Francis, Jane M
2007-01-01
Accurate and reproducible measurement of left ventricular (LV) mass and function is a significant strength of Cardiovascular Magnetic Resonance (CMR). The reproducibility and accuracy of these measurements are usually reported between experienced operators. However, an increasing number of inexperienced operators are now training in CMR and are involved in post-processing analysis. The aim of the study was to assess the interobserver variability of the manual planimetry of LV contours among two experienced and six inexperienced operators before and after a two-month training period. Ten healthy volunteers (5 men, mean age 34+/-14 years) comprised the study population. LV volumes, mass, and ejection fraction were manually evaluated using Argus software (Siemens Medical Solutions, Erlangen, Germany) for each subject, once by the two experienced and twice by the six inexperienced operators. The mean values of the experienced operators were considered the reference values. Agreement between operators was evaluated by means of Bland-Altman analysis. Training involved standardized data acquisition, simulated off-line analysis and mentoring. The trainee operators demonstrated improvement in the measurement of all parameters relative to the experienced operators. The mean ejection fraction variability improved from 7.2% before training to 3.7% after training (p=0.03). The parameter in which the trainees showed the least improvement was LV mass (from 7.7% to 6.7% after training). Basal slice selection and contour definition were the main sources of error. An intensive two-month training period significantly improved the accuracy of LV functional measurements. Adequate training of new CMR operators is of paramount importance to maintaining the accuracy and high reproducibility of CMR in LV function analysis.
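The Bland-Altman analysis used above amounts to a bias and limits-of-agreement computation; a minimal sketch with synthetic numbers (not the study's data):

import numpy as np

rng = np.random.default_rng(7)
reference = rng.normal(60, 5, 10)                # e.g., ejection fraction (%) from experienced operators
trainee = reference + rng.normal(1.0, 2.0, 10)   # a trainee's measurements (synthetic)

diff = trainee - reference
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                    # 95% limits of agreement
print(f"bias = {bias:.2f}, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")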
HYDROLOGIC EVALUATION OF LANDFILL PERFORMANCE (HELP) MODEL - USER'S GUIDE FOR VERSION 3
This report documents the solution methods and process descriptions used in the Version 3 of the HELP model. Program documentation including program options, system and operating requirements, file structures, program structure and variable descriptions are provided in a separat...
Scientific Inquiry: A Model for Online Searching.
ERIC Educational Resources Information Center
Harter, Stephen P.
1984-01-01
Explores scientific inquiry as philosophical and behavioral model for online search specialist and information retrieval process. Nature of scientific research is described and online analogs to research concepts of variable, hypothesis formulation and testing, operational definition, validity, reliability, assumption, and cyclical nature of…
Digital sun sensor multi-spot operation.
Rufino, Giancarlo; Grassi, Michele
2012-11-28
The operation and testing of a multi-spot digital sun sensor for precise sun-line determination are described. The image-forming system consists of an opaque mask with multiple pinhole apertures producing multiple, simultaneous, spot-like images of the sun on the focal plane. Sun-line precision can be improved by averaging multiple simultaneous measurements. Nevertheless, operating the sensor over a wide field of view requires acquiring and processing images in which the number of sun spots and the related intensity levels vary widely. To this end, a reliable and robust image acquisition procedure based on a variable shutter time has been adopted, along with a calibration function that also exploits knowledge of the sun-spot array size. The main focus of the present paper is the experimental validation of the wide-field-of-view operation of the sensor, using a sensor prototype and a laboratory test facility. Results demonstrate that high measurement precision can be maintained even at large off-boresight angles.
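The precision gain from multi-spot averaging follows the usual 1/sqrt(N) law for independent measurements, as this small simulation illustrates (the single-spot noise figure is an assumption, not the sensor's specification):

import numpy as np

rng = np.random.default_rng(3)
sigma_single = 0.01                     # deg, assumed single-spot centroid noise
for n_spots in (1, 4, 16, 64):
    est = rng.normal(0.0, sigma_single, (100_000, n_spots)).mean(axis=1)
    print(n_spots, est.std())           # shrinks roughly as sigma_single / sqrt(n_spots)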
Auzoult, Laurent; Gangloff, Bernard
2018-04-20
In this study, we analyse the impact of organizational culture and introduce a new variable, the integration of safety, which relates to the modalities by which safety is implemented and adopted in the work process, either through the activity or by the operator. One hundred and eighty employees replied to a questionnaire measuring the organizational climate, the safety climate and the integration of safety. We expected that implementation centred on the activity or on the operator would mediate the relationship between organizational culture and the safety climate. The results support our assumptions. A regression analysis highlights the positive impact on the safety climate of organizational values of the 'rule' and 'support' type, as well as of integration by the operator and by the activity. Moreover, integration mediates the relationship between these variables. The results suggest taking organizational culture into account and introducing different implementation modalities to improve the safety climate.
The numerical modelling and process simulation for the fault diagnosis of rotary kiln incinerator.
Roh, S D; Kim, S W; Cho, W S
2001-10-01
Numerical modelling and process simulation for the fault diagnosis of a rotary kiln incinerator were accomplished. In the numerical modelling, two models were applied: a combustion chamber model, including the mass and energy balance equations for the two combustion chambers, and a 3D thermal model. The combustion chamber model predicts the temperature within the kiln, the flue gas composition and flux, and the heat of combustion. Using the combustion chamber model and the 3D thermal model, production rules for the process simulation can be obtained through analysis of the interrelations between control and operation variables. The process simulation of the kiln is operated with the production rules for automatic operation. The process simulation aims to provide fundamental solutions to problems in the incineration process by introducing an online expert control system that brings integrity to process control and management. Knowledge-based expert control systems use symbolic logic and heuristic rules to find solutions for various types of problems. The system was implemented as a hybrid intelligent expert control system by connecting it with the process control systems, giving it the capability of process diagnosis, analysis and control.
Liu, Peide; Li, Dengfeng
2017-01-01
The Muirhead mean (MM) is a well-known aggregation operator that can consider interrelationships among any number of arguments, assigned by a variable vector. It is also a universal operator, since it contains other general operators as special cases for particular parameter values. However, the MM can only process crisp numbers. Inspired by the MM's advantages, the aim of this paper is to extend the MM to process intuitionistic fuzzy numbers (IFNs) and then to solve multi-attribute group decision making (MAGDM) problems. Firstly, we develop some intuitionistic fuzzy Muirhead mean (IFMM) operators by extending the MM to intuitionistic fuzzy information. Then, we prove some properties and discuss some special cases with respect to the parameter vector. Moreover, we present two new methods to deal with MAGDM problems with intuitionistic fuzzy information based on the proposed MM operators. Finally, we verify the validity and reliability of our methods through an application example, and analyze their advantages by comparison with other existing methods.
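For reference, the ordinary (crisp) Muirhead mean that the paper generalizes is MM^P(a_1,...,a_n) = ((1/n!) * sum over all permutations s of prod_j a_{s(j)}^{p_j})^(1/(p_1+...+p_n)). A direct sketch:

from itertools import permutations
from math import factorial, prod

def muirhead_mean(a, p):
    # Average the product a_{s(1)}^{p_1} * ... * a_{s(n)}^{p_n} over all
    # permutations s, then take the 1/sum(p) root.
    n = len(a)
    total = sum(prod(x ** pj for x, pj in zip(perm, p)) for perm in permutations(a))
    return (total / factorial(n)) ** (1.0 / sum(p))

print(muirhead_mean([2.0, 4.0, 8.0], [1, 1, 1]))  # geometric mean: 4.0
print(muirhead_mean([2.0, 4.0, 8.0], [1, 0, 0]))  # arithmetic mean: ~4.67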
Saini, Parmesh K; Marks, Harry M; Dreyfuss, Moshe S; Evans, Peter; Cook, L Victor; Dessai, Uday
2011-08-01
Measuring commonly occurring, nonpathogenic organisms on poultry products may be used for designing statistical process control systems that could result in reductions of pathogen levels. The extent of pathogen level reduction that could be obtained from actions resulting from monitoring these measurements over time depends upon the degree of understanding cause-effect relationships between processing variables, selected output variables, and pathogens. For such measurements to be effective for controlling or improving processing to some capability level within the statistical process control context, sufficiently frequent measurements would be needed to help identify processing deficiencies. Ultimately the correct balance of sampling and resources is determined by those characteristics of deficient processing that are important to identify. We recommend strategies that emphasize flexibility, depending upon sampling objectives. Coupling the measurement of levels of indicator organisms with practical emerging technologies and suitable on-site platforms that decrease the time between sample collections and interpreting results would enhance monitoring process control.
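In the statistical process control framing above, monitored indicator counts would typically be tracked on a control chart; a minimal individuals-chart sketch with synthetic counts (constants and data are illustrative):

import numpy as np

rng = np.random.default_rng(11)
logs = rng.normal(3.0, 0.2, 50)          # log10 CFU per sample (synthetic)
mr = np.abs(np.diff(logs))               # moving ranges
sigma_hat = mr.mean() / 1.128            # d2 constant for subgroups of size 2
ucl = logs.mean() + 3 * sigma_hat
lcl = logs.mean() - 3 * sigma_hat
print("out-of-control samples:", np.where((logs > ucl) | (logs < lcl))[0])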
Analysis of the temperature of the hot tool in the cut of woven fabric using infrared images
NASA Astrophysics Data System (ADS)
Borelli, Joao E.; Verderio, Leonardo A.; Gonzaga, Adilson; Ruffino, Rosalvo T.
2001-03-01
Textile manufacturing occupies a prominent place in the national economy. Owing to its importance, research has been conducted on the development of new materials, equipment and methods used in the production process. Fabric cutting is a basic early stage in the process of making clothes and other articles. In the hot cutting of fabric, one of the most important variables in controlling the process is the contact temperature between the tool and the fabric. This work presents a technique for measuring that temperature based on the processing of infrared images. To this end, a system was developed composed of an infrared camera, a frame-grabber PC board and software that analyzes the point temperature in the cut area, enabling the operator to achieve the necessary control of the other variables involved in the process.
Expert system for testing industrial processes and determining sensor status
Gross, K.C.; Singer, R.M.
1998-06-02
A method and system are disclosed for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 24 figs.
Expert system for testing industrial processes and determining sensor status
Gross, Kenneth C.; Singer, Ralph M.
1998-01-01
A method and system for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.
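A rough numerical sketch of the signal chain described in these patents: take the difference of two redundant sensor signals, remove its dominant Fourier modes to whiten the residual, then apply a sequential probability ratio test (SPRT). All thresholds and noise parameters below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(5)
t = np.arange(2048)
s1 = np.sin(2 * np.pi * t / 128) + rng.normal(0, 0.1, t.size)   # sensor 1
s2 = np.sin(2 * np.pi * t / 128) + rng.normal(0, 0.1, t.size)   # redundant sensor 2

diff = s1 - s2                                   # difference function
spec = np.fft.rfft(diff)
composite = np.zeros_like(spec)
keep = np.argsort(np.abs(spec))[-8:]             # dominant Fourier modes
composite[keep] = spec[keep]
residual = diff - np.fft.irfft(composite, n=diff.size)

# SPRT for a mean shift m versus zero mean in Gaussian noise
var, m, llr = residual.var(), 0.05, 0.0
A, B = np.log(99), np.log(1 / 99)                # ~1% error-rate thresholds
for x in residual:
    llr += (m * x - m * m / 2) / var
    if llr >= A or llr <= B:                     # decide degraded / normal
        break
print("final log-likelihood ratio:", llr)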
Farhadi, Sajjad; Aminzadeh, Behnoush; Torabian, Ali; Khatibikamal, Vahid; Alizadeh Fard, Mohammad
2012-06-15
This work compares the electrocoagulation (EC), photoelectrocoagulation, peroxi-electrocoagulation and peroxi-photoelectrocoagulation processes for the removal of chemical oxygen demand (COD) from pharmaceutical wastewater. The effects of operational parameters such as initial pH, current density, applied voltage, amount of hydrogen peroxide and electrolysis time on COD removal efficiency were investigated, and the optimum operating range for each of these variables was experimentally determined. In the electrocoagulation process, the optimum values of pH and voltage were determined to be 7 and 40 V, respectively. The desired pH and hydrogen peroxide concentration in the Fenton-based processes were found to be 3 and 300 mg/L, respectively. COD, pH, electrical conductivity, temperature and total dissolved solids (TDS) were monitored on-line. Results indicated that, within the optimum operating range for each process, COD removal efficiency decreased in the order peroxi-electrocoagulation > peroxi-photoelectrocoagulation > photoelectrocoagulation > electrocoagulation. Finally, a kinetic study was carried out using the linear pseudo-second-order model, and the results showed that the pseudo-second-order equation provided the best correlation for the COD removal rate. Copyright © 2012 Elsevier B.V. All rights reserved.
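The linear pseudo-second-order fit mentioned above follows t/q = 1/(k*qe^2) + t/qe, so a straight-line fit of t/q against t recovers the parameters; the data points below are synthetic stand-ins.

import numpy as np

t = np.array([5.0, 10.0, 15.0, 20.0, 25.0])    # min
q = np.array([40.0, 62.0, 75.0, 83.0, 88.0])   # COD removed at time t (illustrative)

slope, intercept = np.polyfit(t, t / q, 1)
qe = 1.0 / slope                               # equilibrium removal capacity
k = slope**2 / intercept                       # pseudo-second-order rate constant
print(f"qe = {qe:.1f}, k = {k:.4f}")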
Versteeg, Roelof J; Few, Douglas A; Kinoshita, Robert A; Johnson, Doug; Linda, Ondrej
2015-02-24
Methods, computer readable media, and apparatuses provide robotic explosive hazard detection. A robot intelligence kernel (RIK) includes a dynamic autonomy structure with two or more autonomy levels between operator intervention and robot initiative. A mine sensor and processing module (ESPM) operating separately from the RIK perceives environmental variables indicative of a mine using subsurface perceptors. The ESPM processes mine information to determine the likelihood of the presence of a mine. A robot can autonomously modify its behavior responsive to an indication of a detected mine. The behavior is modified between detection of mines, detailed scanning and characterization of the mine, developing mine indication parameters, and resuming detection. Real-time messages are passed between the RIK and the ESPM. A combination of ESPM-bound messages and RIK-bound messages causes the robot platform to switch between modes including a calibration mode, the mine detection mode, and the mine characterization mode.
Versteeg, Roelof J.; Few, Douglas A.; Kinoshita, Robert A.; Johnson, Douglas; Linda, Ondrej
2015-12-15
Methods, computer readable media, and apparatuses provide robotic explosive hazard detection. A robot intelligence kernel (RIK) includes a dynamic autonomy structure with two or more autonomy levels between operator intervention and robot initiative. A mine sensor and processing module (ESPM) operating separately from the RIK perceives environmental variables indicative of a mine using subsurface perceptors. The ESPM processes mine information to determine the likelihood of the presence of a mine. A robot can autonomously modify its behavior responsive to an indication of a detected mine. The behavior is modified between detection of mines, detailed scanning and characterization of the mine, developing mine indication parameters, and resuming detection. Real-time messages are passed between the RIK and the ESPM. A combination of ESPM-bound messages and RIK-bound messages causes the robot platform to switch between modes including a calibration mode, the mine detection mode, and the mine characterization mode.
Control Design for an Advanced Geared Turbofan Engine
NASA Technical Reports Server (NTRS)
Chapman, Jeffryes W.; Litt, Jonathan S.
2017-01-01
This paper describes the design process for the control system of an advanced geared turbofan engine. This process is applied to a simulation that is representative of a 30,000 pound-force thrust class concept engine with two main spools, ultra-high bypass ratio, and a variable area fan nozzle. Control system requirements constrain the non-linear engine model as it operates throughout its flight envelope of sea level to 40,000 feet and from 0 to 0.8 Mach. The purpose of this paper is to review the engine control design process for an advanced turbofan engine configuration. The control architecture selected for this project was developed from literature and reflects a configuration that utilizes a proportional integral controller with sets of limiters that enable the engine to operate safely throughout its flight envelope. Simulation results show the overall system meets performance requirements without exceeding operational limits.
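The core of the architecture described above is a proportional-integral loop wrapped in limiters; the toy loop below shows the pattern with a crude first-order plant (the gains, limits, and plant are illustrative, not the engine model's values).

import numpy as np

kp, ki, dt = 0.8, 0.4, 0.02
setpoint, speed, integ = 1.0, 0.0, 0.0
u_min, u_max = 0.0, 1.2                     # assumed actuator (fuel-flow) limits

for _ in range(500):
    err = setpoint - speed
    integ += err * dt
    u = kp * err + ki * integ
    u_limited = np.clip(u, u_min, u_max)    # protection limiter
    if u != u_limited:
        integ -= err * dt                   # simple anti-windup when saturated
    speed += dt * (u_limited - 0.5 * speed) # crude first-order plant response
print(f"final speed: {speed:.3f}")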
Operational Planning for Multiple Heterogeneous Unmanned Aerial Vehicles in Three Dimensions
2009-06-01
human input in the planning process. Two solution methods are presented: (1) a mixed-integer program, and (2) an algorithm that utilizes a metaheuristic to generate composite variables for a linear program, called the Composite Operations Planning... that represent a path and an associated type of UAV. The reformulation is incorporated into an algorithm that uses a metaheuristic to generate the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flory, John Andrew; Padilla, Denise D.; Gauthier, John H.
Upcoming weapon programs require an aggressive increase in Application Specific Integrated Circuit (ASIC) production at Sandia National Laboratories (SNL). SNL has developed unique modeling and optimization tools that have been instrumental in improving ASIC production productivity and efficiency, identifying optimal operational and tactical execution plans under resource constraints, and providing confidence in successful mission execution. With ten products and unprecedented levels of demand, a single set of shared resources, highly variable processes, and the need for external supplier task synchronization, scheduling is an integral part of successful manufacturing. The scheduler uses an iterative multi-objective genetic algorithm and a multi-dimensional performance evaluator. Schedule feasibility is assessed using a discrete event simulation (DES) that incorporates operational uncertainty, variability, and resource availability. The tools provide rapid scenario assessments and responses to variances in the operational environment, and have been used to inform major equipment investments and workforce planning decisions in multiple SNL facilities.
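A toy version of such a genetic-algorithm scheduling core, using a permutation chromosome and total tardiness as the (negated) fitness; the job data and GA settings are invented for illustration and bear no relation to SNL's tool.

import random

random.seed(0)
jobs = [4, 2, 7, 3, 5, 1]                 # processing times (illustrative)
due = [6, 4, 20, 9, 15, 3]                # due dates (illustrative)

def fitness(perm):
    # Negative total tardiness of jobs processed in the given order.
    t = tard = 0
    for j in perm:
        t += jobs[j]
        tard += max(0, t - due[j])
    return -tard

pop = [random.sample(range(len(jobs)), len(jobs)) for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)   # elitist ranking
    child = pop[0][:]                     # mutate the best by a swap
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    pop[-1] = child                       # replace the worst individual
best = max(pop, key=fitness)
print(best, -fitness(best))               # best order and its total tardiness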
Computer modeling of thermoelectric generator performance
NASA Technical Reports Server (NTRS)
Chmielewski, A. B.; Shields, V.
1982-01-01
Features of the DEGRA 2 computer code for simulating the operation of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified, and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating; the cold junction can be adjusted for solar radiation; and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data within 0.3% of test data. The model has been used successfully with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
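A back-of-envelope version of the couple-level physics listed above (Seebeck voltage, Peltier and conduction heat at the hot junction, Joule loss) fits in a few lines; the material properties are placeholder values, not DEGRA 2 inputs.

S = 200e-6          # Seebeck coefficient, V/K (assumed)
R = 0.01            # couple electrical resistance, ohm (assumed)
K = 0.05            # couple thermal conductance, W/K (assumed)
Th, Tc = 800.0, 400.0                       # hot and cold junction temperatures, K

V_oc = S * (Th - Tc)                        # open-circuit Seebeck voltage
I = V_oc / (2 * R)                          # current with a matched load (R_load = R)
P = I**2 * R                                # electrical power delivered to the load
Q_hot = S * Th * I + K * (Th - Tc) - 0.5 * I**2 * R   # heat drawn at the hot junction
print(P, P / Q_hot)                         # power (W) and conversion efficiency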
Statistical process control applied to mechanized peanut sowing as a function of soil texture.
Zerbato, Cristiano; Furlani, Carlos Eduardo Angeli; Ormond, Antonio Tassio Santana; Gírio, Lucas Augusto da Silva; Carneiro, Franciele Morlin; da Silva, Rouverson Pereira
2017-01-01
The successful establishment of agricultural crops depends on sowing quality, machinery performance, soil type and conditions, among other factors. This study evaluates the operational quality of mechanized peanut sowing in three soil types (sand, silt, and clay) with variable moisture contents. The experiment was conducted in three locations in the state of São Paulo, Brazil. The track-sampling scheme was used for 80 sampling locations of each soil type. Descriptive statistics and statistical process control (SPC) were used to evaluate the quality indicators of mechanized peanut sowing. The variables had normal distributions and were stable from the viewpoint of SPC. The best performance for peanut sowing density, normal spacing, and the initial seedling growing stand was found for clayey soil followed by sandy soil and then silty soil. Sandy or clayey soils displayed similar results regarding sowing depth, which was deeper than in the silty soil. Overall, the texture and the moisture of clayey soil provided the best operational performance for mechanized peanut sowing.
Pagés-Díaz, Jhosané; Pereda-Reyes, Ileana; Sanz, Jose Luis; Lundin, Magnus; Taherzadeh, Mohammad J; Horváth, Ilona Sárvári
2018-02-01
Consecutive feeding was applied to investigate the response of the microbial biomass to a second addition of substrates in terms of biodegradation, using batch tests as a promising alternative for predicting the behavior of the process. Anaerobic digestion (AD) of slaughterhouse waste (SB) and its co-digestion with manure (M), various crops (VC), and municipal solid waste were evaluated. The results were then correlated with previous findings obtained by the authors for similar mixtures in batch and semi-continuous operation modes. AD of the SB alone failed, showing total inhibition after the second feeding. Co-digestion of SB+M showed a significant improvement in all of the response variables investigated after the second feeding, while co-digestion of SB+VC resulted in a decline in all of these response variables. Similar patterns were previously detected in both the batch and the semi-continuous modes. Copyright © 2017. Published by Elsevier B.V.
Statistical process control applied to mechanized peanut sowing as a function of soil texture
Furlani, Carlos Eduardo Angeli; da Silva, Rouverson Pereira
2017-01-01
The successful establishment of agricultural crops depends on sowing quality, machinery performance, soil type and conditions, among other factors. This study evaluates the operational quality of mechanized peanut sowing in three soil types (sand, silt, and clay) with variable moisture contents. The experiment was conducted in three locations in the state of São Paulo, Brazil. The track-sampling scheme was used for 80 sampling locations of each soil type. Descriptive statistics and statistical process control (SPC) were used to evaluate the quality indicators of mechanized peanut sowing. The variables had normal distributions and were stable from the viewpoint of SPC. The best performance for peanut sowing density, normal spacing, and the initial seedling growing stand was found for clayey soil followed by sandy soil and then silty soil. Sandy or clayey soils displayed similar results regarding sowing depth, which was deeper than in the silty soil. Overall, the texture and the moisture of clayey soil provided the best operational performance for mechanized peanut sowing. PMID:28742095
CHAMP (Camera, Handlens, and Microscope Probe)
NASA Technical Reports Server (NTRS)
Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image-filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
Uncertainty relation for the discrete Fourier transform.
Massar, Serge; Spindel, Philippe
2008-05-16
We derive an uncertainty relation for two unitary operators which obey a commutation relation of the form UV = e^(i*phi) VU. Its most important application is to constrain how much a quantum state can be localized simultaneously in two mutually unbiased bases related by a discrete Fourier transform. It provides an uncertainty relation which smoothly interpolates between the well-known cases of the Pauli operators in two dimensions and the continuous variables position and momentum. This work also provides an uncertainty relation for modular variables, and could find applications in signal processing. In the finite-dimensional case the minimum uncertainty states, discrete analogues of coherent and squeezed states, are minimum energy solutions of Harper's equation, a discrete version of the harmonic oscillator equation.
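The standard finite-dimensional example of such a pair is the generalized Pauli clock and shift matrices, which satisfy the stated relation with phi = 2*pi/d; a quick numerical check:

import numpy as np

d = 5                                     # Hilbert-space dimension
omega = np.exp(2j * np.pi / d)            # e^(i*phi) with phi = 2*pi/d

U = np.diag(omega ** np.arange(d))        # clock operator
V = np.roll(np.eye(d), 1, axis=0)         # shift operator: V|j> = |j+1 mod d>

print(np.allclose(U @ V, omega * (V @ U)))  # True: UV = e^(i*phi) VU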
Evolution of catalytic RNA in the laboratory
NASA Technical Reports Server (NTRS)
Joyce, Gerald F.
1992-01-01
We are interested in the biochemistry of existing RNA enzymes and in the development of RNA enzymes with novel catalytic function. The focal point of our research program has been the design and operation of a laboratory system for the controlled evolution of catalytic RNA. This system serves as a working model of RNA-based life and can be used to explore the catalytic potential of RNA. Evolution requires the integration of three chemical processes: amplification, mutation, and selection. Amplification results in additional copies of the genetic material. Mutation operates at the level of genotype to introduce variability, this variability in turn being expressed as a range of phenotypes. Selection operates at the level of phenotype to reduce variability by excluding those individuals that do not conform to the prevailing fitness criteria. These three processes must be linked so that only the selected individuals are amplified, subject to mutational error, to produce a progeny distribution of mutant individuals. We devised techniques for the amplification, mutation, and selection of catalytic RNA, all of which can be performed rapidly in vitro within a single reaction vessel. We integrated these techniques in such a way that they can be performed iteratively and routinely. This allowed us to conduct evolution experiments in response to artificially-imposed selection constraints. Our objective was to develop novel RNA enzymes by altering the selection constraints in a controlled manner. In this way we were able to expand the catalytic repertoire of RNA. Our long-range objective is to develop an RNA enzyme with RNA replicase activity. If such an enzyme had the ability to produce additional copies of itself, then RNA evolution would operate autonomously and the origin of life would have been realized in the laboratory.
Integrating Solar PV in Utility System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, A.; Botterud, A.; Wu, J.
2013-10-31
This study develops a systematic framework for estimating the increase in operating costs due to uncertainty and variability in renewable resources, uses the framework to quantify the integration costs associated with sub-hourly solar power variability and uncertainty, and shows how changes in system operations may affect these costs. Toward this end, we present a statistical method for estimating the required balancing reserves to maintain system reliability along with a model for commitment and dispatch of the portfolio of thermal and renewable resources at different stages of system operations. We estimate the costs of sub-hourly solar variability, short-term forecast errors, and day-ahead (DA) forecast errors as the difference in production costs between a case with “realistic” PV (i.e., sub-hourly solar variability and uncertainty are fully included in the modeling) and a case with “well behaved” PV (i.e., PV is assumed to have no sub-hourly variability and can be perfectly forecasted). In addition, we highlight current practices that allow utilities to compensate for the issues encountered at the sub-hourly time frame with increased levels of PV penetration. In this analysis we use the analytical framework to simulate utility operations with increasing deployment of PV in a case study of Arizona Public Service Company (APS), a utility in the southwestern United States. In our analysis, we focus on three processes that are important in understanding the management of PV variability and uncertainty in power system operations. First, we represent the decisions made the day before the operating day through a DA commitment model that relies on imperfect DA forecasts of load and wind as well as PV generation. Second, we represent the decisions made by schedulers in the operating day through hour-ahead (HA) scheduling. Peaking units can be committed or decommitted in the HA schedules and online units can be redispatched using forecasts that are improved relative to DA forecasts, but still imperfect. Finally, we represent decisions within the operating hour by schedulers and transmission system operators as real-time (RT) balancing. We simulate the DA and HA scheduling processes with a detailed unit-commitment (UC) and economic dispatch (ED) optimization model. This model creates a least-cost dispatch and commitment plan for the conventional generating units using forecasts and reserve requirements as inputs. We consider only the generation units and load of the utility in this analysis; we do not consider opportunities to trade power with neighboring utilities. We also do not consider provision of reserves from renewables or from demand-side options. We estimate dynamic reserve requirements in order to meet reliability requirements in the RT operations, considering the uncertainty and variability in load, solar PV, and wind resources. Balancing reserve requirements are based on the 2.5th and 97.5th percentile of 1-min deviations from the HA schedule in a previous year. We then simulate RT deployment of balancing reserves using a separate minute-by-minute simulation of deviations from the HA schedules in the operating year. In the simulations we assume that balancing reserves can be fully deployed in 10 min. The minute-by-minute deviations account for HA forecasting errors and the actual variability of the load, wind, and solar generation.
Using these minute-by-minute deviations and deployment of balancing reserves, we evaluate the impact of PV on system reliability through the calculation of the standard reliability metric called Control Performance Standard 2 (CPS2). Broadly speaking, the CPS2 score measures the percentage of 10-min periods in which a balancing area is able to balance supply and demand within a specific threshold. Compliance with the North American Electric Reliability Corporation (NERC) reliability standards requires that the CPS2 score exceed 90% (i.e., the balancing area must maintain adequate balance for 90% of the 10-min periods). The combination of representing DA forecast errors in the DA commitments, using 1-min PV data to simulate RT balancing, and estimating reliability performance through the CPS2 metric, all factors that are important to operating systems with increasing amounts of PV, makes this study unique in its scope.
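The reserve-sizing rule and reliability metric described above reduce to simple percentile and windowed-mean computations; the sketch below uses synthetic deviations, and the threshold L10 is an arbitrary stand-in for the balancing area's actual bound.

import numpy as np

rng = np.random.default_rng(9)
dev = rng.normal(0, 20, 60 * 24 * 365)      # synthetic 1-min deviations from HA schedule, MW
res_down, res_up = np.percentile(dev, [2.5, 97.5])
print(f"balancing reserve range: {res_down:.1f} to {res_up:.1f} MW")

# CPS2-style score: share of 10-minute windows whose mean imbalance stays
# within the threshold L10 (compliance requires > 90%).
L10 = 15.0                                  # MW, assumed threshold
windows = dev[: dev.size - dev.size % 10].reshape(-1, 10).mean(axis=1)
cps2 = 100.0 * (np.abs(windows) <= L10).mean()
print(f"CPS2 ~ {cps2:.1f}%")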
Mercury Deposition Network Site Operator Training for the System Blank and Blind Audit Programs
Wetherbee, Gregory A.; Lehmann, Christopher M.B.
2008-01-01
The U.S. Geological Survey operates the external quality assurance project for the National Atmospheric Deposition Program/Mercury Deposition Network. The project includes the system blank and blind audit programs for assessment of total mercury concentration data quality for wet-deposition samples. This presentation was prepared to train new site operators and to refresh experienced site operators to successfully process and submit system blank and blind audit samples for chemical analysis. Analytical results are used to estimate chemical stability and contamination levels of National Atmospheric Deposition Program/Mercury Deposition Network samples and to evaluate laboratory variability and bias.
Assay optimisation and technology transfer for multi-site immuno-monitoring in vaccine trials
Harris, Stephanie A.; Satti, Iman; Bryan, Donna; Walker, K. Barry; Dockrell, Hazel M.; McShane, Helen; Ho, Mei Mei
2017-01-01
Cellular immunological assays are important tools for monitoring responses to T-cell-inducing vaccine candidates. As these bioassays are often technically complex and require considerable experience, careful technology transfer between laboratories is critical if high-quality, reproducible data that allow comparison between sites are to be generated. The aim of this study, funded by the European Union Framework Programme 7 TRANSVAC project, was to optimise Standard Operating Procedures and the technology transfer process to maximise the reproducibility of three bioassays for interferon-gamma responses: enzyme-linked immunosorbent assay (ELISA), ex-vivo enzyme-linked immunospot and intracellular cytokine staining. We found that the initial variability in results generated across three different laboratories was reduced by a combination of Standard Operating Procedure harmonisation and side-by-side training sessions in which assay operators performed each assay in the presence of an assay ‘lead’ operator. Mean inter-site coefficients of variation decreased after this training when compared with pre-training values, most notably for the ELISA assay. There was a trend toward increased inter-site variability at lower response magnitudes for the ELISA and intracellular cytokine staining assays. In conclusion, we recommend that on-site operator training is an essential component of the assay technology transfer process; combined with harmonised Standard Operating Procedures, it will improve the quality, reproducibility and comparability of data produced across different laboratories. These data may be helpful in ongoing discussions of the potential risks and benefits of centralised immunological assay strategies for large clinical trials versus decentralised units. PMID:29020010
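The inter-site coefficient of variation tracked above is simply the between-laboratory standard deviation scaled by the mean; a one-sample sketch with made-up values:

import numpy as np

site_results = np.array([412.0, 388.0, 455.0])   # e.g., IFN-gamma spot counts from three labs (synthetic)
cv = 100.0 * site_results.std(ddof=1) / site_results.mean()
print(f"inter-site CV = {cv:.1f}%")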
Students’ Mathematical Literacy in Solving PISA Problems Based on Keirsey Personality Theory
NASA Astrophysics Data System (ADS)
Masriyah; Firmansyah, M. H.
2018-01-01
This is descriptive qualitative research. Its purpose is to describe students’ mathematical literacy in solving PISA problems on space and shape content, based on Keirsey personality theory. The subjects were four eighth-grade junior high school students with guardian, artisan, rational and idealist personalities, respectively. Data were collected through tests and interviews; the Keirsey personality test, the PISA test and the interviews were analysed. The mathematical literacy profile of each subject is described as follows. In formulating, the guardian subject identified the relevant mathematical aspects (the formula for the area of a rectangle and the side lengths) and the significant variables (the terms/conditions in the problem and the formula from previously encountered questions), and translated them into mathematical language as measurements and arithmetic operations. In employing, he devised and implemented strategies exploiting ease of calculation via the area-subtraction principle; he asserted the truth of the result, but with a partly incorrect justification, and did not use or switch between different representations. In interpreting, he stated the result as the area of the house floor and judged its reasonableness by estimating the measurements. In formulating, the artisan subject identified the mathematical aspects (the plane figure and side lengths) and the significant variables (solution procedures for both the everyday problem and previously encountered questions), and translated them into mathematical language as measurements, variables and arithmetic operations, with symbolic representation. In employing, he devised and implemented strategies comparing two designs; he asserted the truth of the result without justification, and used symbolic representation only. In interpreting, he expressed the result as the floor area of the house and judged its reasonableness by estimating the measurements. In formulating, the rational subject identified the mathematical aspects (scale and side lengths) and the significant variables (the solution strategy from previously encountered questions), and translated them into mathematical language as measurements, variables and arithmetic operations, with symbolic and graphical representation. In employing, he devised and implemented strategies that formed an additional plane figure under the area-subtraction principle; he asserted the truth of the result on the basis of the calculation process, and used and switched between symbolic and graphical representations. In interpreting, he stated the result as the area of the house including the terrace and walls and judged its reasonableness by estimating the measurements. In formulating, the idealist subject identified the mathematical aspects (side lengths) and the significant variables (the terms/conditions in the problem), and translated them into mathematical language as measurements, variables and arithmetic operations, with symbolic and graphical representation. In employing, he devised and implemented strategies using trial and error and two designs in the process of finding solutions; he asserted the truth of the result on the basis of using two solution designs, and used and switched between symbolic and graphical representations. In interpreting, he stated the result as the floor area of the house and judged its reasonableness by estimating the measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaya Shankar Tumuluru
2011-08-01
Effect of process variables on the quality attributes of briquettes from wheat, oat, canola and barley straw
Jaya Shankar Tumuluru, L. G. Tabil, Y. Song, K. L. Iroba and V. Meda
Biomass is a renewable energy source and an environmentally friendly substitute for fossil fuels such as coal and petroleum products. A major limitation of biomass for successful energy applications is its low bulk density, which makes it difficult and costly to transport and handle. To overcome this limitation, biomass has to be densified. The commonly used technologies for densification of biomass are pelletization and briquetting. Briquetting offers many advantages, as it can densify larger biomass particle sizes at higher moisture contents. Briquetting is influenced by a number of feedstock and process variables, such as moisture content and particle size distribution, and operating variables such as temperature and densification pressure. In the present study, experiments were designed and conducted based on a Box-Behnken design to produce briquettes from wheat, oat, canola and barley straws. A laboratory-scale hydraulic briquette press was used. The experimental process variables and their levels were three pressure levels (7.5, 10, 12.5 MPa), three temperature levels (90, 110, 130 °C), three moisture content levels (9, 12, 15% w.b.) and three particle size levels (19.1, 25.04, 31.75 mm). The quality variables studied include moisture content, initial density, final briquette density after two weeks of storage, size distribution index and durability. The raw biomass was initially chopped and size-reduced using a hammer mill. The ground biomass was conditioned to different moisture contents and then densified in the laboratory hydraulic press. For each treatment combination, ten briquettes were manufactured with a residence time of about 30 s after the compression pressure setpoint was reached. After compression, the initial dimensions and the final dimensions after 2 weeks of storage in a controlled environment were measured for all samples. Durability, dimensional stability and moisture content tests were conducted after two weeks of storage of the briquettes produced. Initial results indicated that moisture content played a significant role in briquette durability, stability and density. Low straw moisture content (7-12%) gave more durable briquettes. Briquette density increased with increasing pressure, depending on the moisture content. The axial expansion was more significant than the lateral expansion, which in some cases tended to be nil depending on the material and operating variables. Further data analysis is in progress to assess the significance of the process variables based on ANOVA. Regression models were developed to predict changes in briquette quality with respect to the process variables under study. Keywords: herbaceous biomass, densification, briquettes, density, durability, dimensional stability, ANOVA and regression equations
Control Design for an Advanced Geared Turbofan Engine
NASA Technical Reports Server (NTRS)
Chapman, Jeffryes W.; Litt, Jonathan S.
2017-01-01
This paper describes the design process for the control system of an advanced geared turbofan engine. This process is applied to a simulation that is representative of a 30,000 lbf thrust class concept engine with two main spools, ultra-high bypass ratio, and a variable area fan nozzle. Control system requirements constrain the non-linear engine model as it operates throughout its flight envelope of sea level to 40,000 ft and from 0 to 0.8 Mach. The control architecture selected for this project was developed from literature and reflects a configuration that utilizes a proportional integral controller integrated with sets of limiters that enable the engine to operate safely throughout its flight envelope. Simulation results show the overall system meets performance requirements without exceeding system operational limits.
NASA Technical Reports Server (NTRS)
Trefny, Charles J (Inventor); Dippold, Vance F (Inventor)
2013-01-01
A new dual-mode ramjet combustor used for operation over a wide flight Mach number range is described. The subsonic combustion mode is usable down to lower flight Mach numbers than in current dual-mode scramjets. The high-speed mode is characterized by supersonic combustion in a free-jet that traverses the subsonic combustion chamber to a variable nozzle throat. Although a variable combustor exit aperture is required, the need for fuel staging to accommodate the combustion process is eliminated. Local heating from shock-boundary-layer interactions on combustor walls is also eliminated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pochan, M.J.; Massey, M.J.
1979-02-01
This report discusses the results of actual raw product gas sampling efforts and includes: the rationale for raw product gas sampling efforts; the design and operation of the CMU gas sampling train; the development and analysis of a sampling train data base; and conclusions and future applications of the results. The results of sampling activities at the CO2-Acceptor and Hygas pilot plants proved that: the CMU gas sampling train is a valid instrument for characterization of environmental parameters in coal gasification gas-phase process streams; depending on the particular process configuration, the CMU gas sampling train can reduce gasifier effluent characterization activity to a single location in the raw product gas line; and, in contrast to the slower operation of the EPA SASS Train, CMU's gas sampling train can collect representative effluent data at a rapid rate (approx. 2 points per hour), consistent with the rate of change of process variables, and thus function as a tool for process-engineering-oriented analysis of environmental characteristics.
Analysis And Control System For Automated Welding
NASA Technical Reports Server (NTRS)
Powell, Bradley W.; Burroughs, Ivan A.; Kennedy, Larry Z.; Rodgers, Michael H.; Goode, K. Wayne
1994-01-01
Automated variable-polarity plasma arc (VPPA) welding apparatus operates under electronic supervision by welding analysis and control system. System performs all major monitoring and controlling functions. It acquires, analyzes, and displays weld-quality data in real time and adjusts process parameters accordingly. Also records pertinent data for use in post-weld analysis and documentation of quality. System includes optoelectronic sensors and data processors that provide feedback control of welding process.
Guvenc, Senem Yazici; Okut, Yusuf; Ozak, Mert; Haktanir, Birsu; Bilgili, Mehmet Sinan
2017-02-01
In this study, the process parameters for chemical oxygen demand (COD) and turbidity removal from metal working industry (MWI) wastewater were optimized by electrocoagulation (EC) using aluminum, iron and steel electrodes. The effects of the process variables on COD and turbidity were investigated by developing a mathematical model using the central composite design method, one of the response surface methodologies. Variance analysis was conducted to identify the interactions between process variables and model responses and the optimum conditions for COD and turbidity removal. Second-order regression models were developed via the Statgraphics Centurion XVI.I software program to predict COD and turbidity removal efficiencies. Under the optimum conditions, the removal efficiencies obtained with aluminum electrodes were 76.72% for COD and 99.97% for turbidity; with iron electrodes, 76.55% for COD and 99.9% for turbidity; and with steel electrodes, 65.75% for COD and 99.25% for turbidity. Operational costs at optimum conditions were found to be 4.83, 1.91 and 2.91 €/m3 for aluminum, iron and steel electrodes, respectively. The iron electrode was found to be the most suitable for MWI wastewater treatment in terms of operational cost and treatment efficiency.
An Application of Six Sigma to Reduce Supplier Quality Cost
NASA Astrophysics Data System (ADS)
Gaikwad, Lokpriya Mohanrao; Teli, Shivagond Nagappa; Majali, Vijay Shashikant; Bhushi, Umesh Mahadevappa
2016-01-01
This article presents an application of Six Sigma to reduce supplier quality costs in the manufacturing industry. Although there is wide acceptance of Six Sigma in many organizations today, there is still a lack of in-depth case studies of it. For the present research, the case study methodology was used. The company decided to reduce quality costs and improve selected processes using Six Sigma methodologies. Given the scarcity of case studies dealing with Six Sigma, especially in individual manufacturing organizations, this article should also be of value to practitioners. The paper discusses quality and productivity improvement in a supplier enterprise through a case study. It deals with an application of the Six Sigma define-measure-analyze-improve-control methodology in an industry setting, providing a framework to identify, quantify and eliminate sources of variation in the operational process in question, to optimize the operating variables, and to improve and sustain performance (viz. process yield) with well-executed control plans. Six Sigma improved the performance (yield) of the critical operational process, leading to better utilization of resources, decreased variation and consistent quality of the process output.
Roy, Kevin; Undey, Cenk; Mistretta, Thomas; Naugle, Gregory; Sodhi, Manbir
2014-01-01
Multivariate statistical process monitoring (MSPM) is becoming increasingly utilized to further enhance process monitoring in the biopharmaceutical industry. MSPM can play a critical role when there are many measurements and these measurements are highly correlated, as is typical for many biopharmaceutical operations. Specifically, for processes such as cleaning-in-place (CIP) and steaming-in-place (SIP, also known as sterilization-in-place), control systems typically oversee the execution of the cycles, and verification of the outcome is based on offline assays. These offline assays add to delays and corrective actions may require additional setup times. Moreover, this conventional approach does not take interactive effects of process variables into account and cycle optimization opportunities as well as salient trends in the process may be missed. Therefore, more proactive and holistic online continued verification approaches are desirable. This article demonstrates the application of real-time MSPM to processes such as CIP and SIP with industrial examples. The proposed approach has significant potential for facilitating enhanced continuous verification, improved process understanding, abnormal situation detection, and predictive monitoring, as applied to CIP and SIP operations. © 2014 American Institute of Chemical Engineers.
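A common MSPM construction, offered here only as a hedged illustration of the general approach (the article's models may differ), builds a PCA model from historical good-operation data and scores new observations with Hotelling's T²:

```python
# Hedged sketch of PCA-based multivariate monitoring with simulated data:
# fit PCA on "normal" operation, then score new observations with T^2.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))            # historical good runs (rows) x variables
X = (X - X.mean(0)) / X.std(0)           # autoscale

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                    # retained principal components
P = Vt[:k].T                             # loadings
lam = (s[:k] ** 2) / (X.shape[0] - 1)    # variances of the retained PCs

def hotelling_t2(x_new):
    """T^2 statistic of a (pre-scaled) new observation in the PCA model."""
    t = x_new @ P                        # scores
    return float(np.sum(t**2 / lam))

x_ok = rng.normal(size=6)                # illustrative new observations
x_fault = x_ok + np.array([4.0, 0, 0, 0, 0, 0])   # simulated shift in variable 1
print("T2 normal:", round(hotelling_t2(x_ok), 2),
      " T2 faulty:", round(hotelling_t2(x_fault), 2))
```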
1987-06-01
shared variables. This will be discussed later. One procedure merits special attention. CheckAndCommit(m, gi): INTEGER is called by process Pi (i denotes the local process) to check that "valid" communications can take place between Pi, using guard gi, and Pm (m denotes the remote process). If so, P... local guard gi. By matching we mean gj contains an I/O operation with Pi. By compatible we mean gi and gj do not both contain input (output) commands
Advanced Ceramic Technology for Space Applications at NASA MSFC
NASA Technical Reports Server (NTRS)
Alim, Mohammad A.
2003-01-01
The ceramic processing technology using conventional methods is applied to the making of state-of-the-art ceramics known as smart ceramics, intelligent ceramics or electroceramics. The sol-gel and wet chemical processing routes are excluded from this investigation in view of the economic aspects and proportionate benefit of the resulting product. The use of ceramic ingredients in making coatings or devices employing a vacuum coating unit is also excluded. Based on the present information, it is anticipated that the conventional processing methods provide ceramics performing identically to those processed by the chemical routes. This is possible when the heating and cooling ramps, peak (sintering) temperature, soak-time (hold-time), etc., are treated as variable parameters. In addition, an optional calcination step prior to the sintering operation remains a vital variable parameter. These variable parameters constitute a sintering profile for obtaining a sintered product. It is also possible to obtain identical products from more than one sintering profile, owing to the calcination step in conjunction with the variables of the sintering profile. Overall, the state-of-the-art ceramic technology is evaluated for potential applications in thermal and electrical insulation coatings, microelectronics and integrated circuits, discrete and integrated devices, etc., in the space program.
Modeling the Spatial Dynamics of International Tuna Fleets
2016-01-01
We developed an iterative sequential random utility model to investigate the social and environmental determinants of the spatiotemporal decision process of tuna purse-seine fishery fishing effort in the eastern Pacific Ocean. Operations of the fishing gear mark checkpoints in a continuous complex decision-making process. Individual fisher behavior is modeled by identifying diversified choices over decision-space for an entire fishing trip, which allows inclusion of prior and current vessel locations and conditions among the explanatory variables. Among these factors are vessel capacity; departure and arrival port; duration of the fishing trip; daily and cumulative distance travelled, which provides a proxy for operation costs; expected revenue; oceanographic conditions; and tons of fish on board. The model uses a two-step decision process to capture the probability of a vessel choosing a specific fishing region for the first set and the probability of switching to (or staying in) a specific region to fish before returning to its landing port. The model provides a means to anticipate the success of marine resource management, and it can be used to evaluate fleet diversity in fisher behavior, the impact of climate variability, and the stability and resilience of complex coupled human and natural systems. PMID:27537545
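The two-step choice structure can be sketched with a simple multinomial-logit form; the utilities, coefficients and three-region setup below are invented for illustration, and the paper's iterative sequential random utility model is considerably richer:

```python
# Hedged sketch of a two-step discrete-choice structure: first-set region
# choice, then switch-or-stay transitions. All weights are hypothetical.
import numpy as np

def choice_probs(utilities):
    """Multinomial logit: stabilized softmax over region utilities."""
    u = np.asarray(utilities, dtype=float)
    e = np.exp(u - u.max())
    return e / e.sum()

# Step 1: P(region for the first set), with a toy utility combining
# expected revenue and a distance-based cost proxy.
revenue = np.array([1.2, 0.8, 1.0])
distance = np.array([0.5, 0.2, 0.9])
u_first = 1.5 * revenue - 2.0 * distance
p_first = choice_probs(u_first)

# Step 2: P(switch to / stay in a region), conditioned on the current
# region via a "stay" bonus on the diagonal.
stay_bonus = 0.7
u_next = np.tile(u_first, (3, 1)) + stay_bonus * np.eye(3)
p_next = np.apply_along_axis(choice_probs, 1, u_next)
print("first-set probs:", np.round(p_first, 3))
print("transition probs:\n", np.round(p_next, 3))
```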
NASA Technical Reports Server (NTRS)
Wind, Galina; DaSilva, Arlindo M.; Norris, Peter M.; Platnick, Steven E.
2013-01-01
In this paper we describe a general procedure for calculating equivalent sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The equivalent sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds and cloud/aerosol interactions, because they are very important to model development and improvement.
NASA Technical Reports Server (NTRS)
Wind, G.; DaSilva, A. M.; Norris, P. M.; Platnick, S.
2013-01-01
In this paper we describe a general procedure for calculating synthetic sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The simulated sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds because they are very important to model development and improvement.
Method of operating a thermoelectric generator
Reynolds, Michael G; Cowgill, Joshua D
2013-11-05
A method for operating a thermoelectric generator supplying a variable-load component includes commanding the variable-load component to operate at a first output and determining a first load current and a first load voltage to the variable-load component while operating at the commanded first output. The method also includes commanding the variable-load component to operate at a second output and determining a second load current and a second load voltage to the variable-load component while operating at the commanded second output. The method includes calculating a maximum power output of the thermoelectric generator from the determined first load current and voltage and the determined second load current and voltage, and commanding the variable-load component to operate at a third output. The commanded third output is configured to draw the calculated maximum power output from the thermoelectric generator.
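The two-point calculation is consistent with a Thévenin source model, V = Voc − R_int·I (an assumption here, since the abstract does not name the model). Two (current, voltage) pairs then fix Voc and R_int, and the matched-load maximum power is Voc²/(4·R_int):

```python
# Hedged sketch of the two-point maximum-power estimate, assuming the
# generator behaves as a Thevenin source V = Voc - R_int * I.

def teg_max_power(i1, v1, i2, v2):
    r_int = (v1 - v2) / (i2 - i1)      # internal resistance from the two points
    v_oc = v1 + r_int * i1             # open-circuit voltage
    return v_oc**2 / (4.0 * r_int)     # power at the matched operating point

# Example with hypothetical measurements at two commanded outputs:
p_max = teg_max_power(i1=2.0, v1=10.0, i2=4.0, v2=8.0)   # R_int = 1 ohm, Voc = 12 V
print(f"estimated maximum power: {p_max:.1f} W")          # -> 36.0 W
```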
Evaluation of an attributive measurement system in the automotive industry
NASA Astrophysics Data System (ADS)
Simion, C.
2016-08-01
Measurement System Analysis (MSA) is a critical component of any quality improvement process. MSA is defined as an experimental and mathematical method of determining how much the variation within the measurement process contributes to overall process variability, and it falls into two categories: attribute and variable. The most problematic measurement system issues come from measuring attribute data, which are usually the result of human judgment (visual inspection). Because attributive measurement systems are often used in manufacturing processes, their assessment is important to gain confidence in the inspection process, to see where the problems are in order to eliminate them, and to guide process improvement. It was the aim of this paper to address such an issue, presenting a case study made in a local company from the Sibiu region supplying products for the automotive industry, specifically the bag (a technical textile component, i.e. the fabric) for the airbag module. Because defects are inherent in every manufacturing process, and because in the field of airbag systems a minor defect can influence performance and lives depend on the safety feature, stringent visual inspection of defects in the bag material is required. The purpose of this attribute MSA was: to determine whether all inspectors use the same criteria to separate "pass" from "fail" product (i.e. the fabric); to assess company inspection standards against the customer's requirements; to determine how consistent inspectors are with themselves; to identify how inspectors conform to a "known master," which includes how often operators ship defective product and how often operators dispose of acceptable product; and to discover areas where training is required, procedures must be developed, and standards are not available. The results were analyzed using MINITAB software with its Attribute Agreement Analysis module. The conclusion was that the inspection process must be improved by operator training, developing visual aids/boundary samples, and establishing standards and set-up procedures.
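One statistic commonly reported by attribute agreement analysis is kappa, both between appraisers and against the known master. A minimal sketch of Cohen's kappa for one inspector versus the standard, with invented pass/fail calls:

```python
# Hedged sketch: Cohen's kappa between an inspector and a known master.
# Ratings are invented; MINITAB's module reports kappa per appraiser,
# appraiser-vs-standard, and more.
from collections import Counter

def cohens_kappa(rater, standard):
    n = len(rater)
    observed = sum(r == s for r, s in zip(rater, standard)) / n
    pr, ps = Counter(rater), Counter(standard)
    expected = sum(pr[c] * ps[c] for c in set(rater) | set(standard)) / n**2
    return (observed - expected) / (1 - expected)

inspector = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"]
master    = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]
print(f"kappa vs. master: {cohens_kappa(inspector, master):.2f}")   # -> 0.47
```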
Optimization of a thermal hydrolysis process for sludge pre-treatment.
Sapkaite, I; Barrado, E; Fdz-Polanco, F; Pérez-Elvira, S I
2017-05-01
At industrial scale, thermal hydrolysis (TH) is the most widely used process to enhance the biodegradability of the sludge produced in wastewater treatment plants. Through a statistically guided Box-Behnken experimental design, the present study analyses the effect of TH as a pre-treatment applied to activated sludge. The selected process variables were temperature (130-180 °C), time (5-50 min) and decompression mode (slow or steam-explosion effect), and the parameters evaluated were sludge solubilisation and methane production by anaerobic digestion. A quadratic polynomial model was generated to compare process performance across the 15 different combinations of operating conditions obtained by modifying the process variables evaluated. The statistical analysis showed that methane production and solubility were significantly affected by pre-treatment time and temperature. During high-intensity pre-treatment (high temperature and long times), solubility increased sharply while methane production exhibited the opposite behaviour, indicating the formation of some soluble but non-biodegradable materials. Therefore, solubilisation is not a reliable parameter for quantifying the efficiency of a thermal hydrolysis pre-treatment, since it is not directly related to methane production. Based on the optimization of the operational parameters, the estimated optimal thermal hydrolysis conditions to enhance sewage sludge digestion were: 140-170 °C heating temperature, 5-35 min residence time, and one sudden decompression. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dynamics of Postcombustion CO2 Capture Plants: Modeling, Validation, and Case Study
2017-01-01
The capture of CO2 from power plant flue gases provides an opportunity to mitigate emissions that are harmful to the global climate. While the process of CO2 capture using an aqueous amine solution is well-known from experience in other technical sectors (e.g., acid gas removal in the gas processing industry), its operation combined with a power plant still needs investigation because in this case, the interaction with power plants that are increasingly operated dynamically poses control challenges. This article presents the dynamic modeling of CO2 capture plants followed by a detailed validation using transient measurements recorded from the pilot plant operated at the Maasvlakte power station in the Netherlands. The model predictions are in good agreement with the experimental data related to the transient changes of the main process variables such as flow rate, CO2 concentrations, temperatures, and solvent loading. The validated model was used to study the effects of fast power plant transients on the capture plant operation. A relevant result of this work is that an integrated CO2 capture plant might enable more dynamic operation of retrofitted fossil fuel power plants because the large amount of steam needed by the capture process can be diverted rapidly to and from the power plant. PMID:28413256
Scientific Method K-6, Elementary Science Unit No. 3.
ERIC Educational Resources Information Center
Khouri, John W.
Contained in this unit are activities designed to develop science process skills for Grades K through 6. A chart shows how the activities for each grade relate to the operations of classifying, inferring, observing, predicting, interpreting data, estimating, measuring, using numbers, experimenting, and controlling variables. Each activity is…
Women's Schooling, Patterns of Fertility, and Child Survival.
ERIC Educational Resources Information Center
LeVine, Robert
1987-01-01
Expansion of women's schooling is associated with lower fertility and child mortality. This article provides demographic evidence and a framework for discovering how educational processes operate on maternal behavior. Findings from a study in Mexico focus on mother-infant interaction and social attitudes as important variables. Research needs are…
Environmental Uncertainty and Communication Network Complexity: A Cross-System, Cross-Cultural Test.
ERIC Educational Resources Information Center
Danowski, James
An infographic model is proposed to account for the operation of systems within their information environments. Infographics is a communication paradigm used to indicate the clustering of information processing variables in communication systems. Four propositions concerning environmental uncertainty and internal communication network complexity,…
Quantum anonymous voting with unweighted continuous-variable graph states
NASA Astrophysics Data System (ADS)
Guo, Ying; Feng, Yanyan; Zeng, Guihua
2016-08-01
Motivated by the revealing topological structures of the continuous-variable graph state (CVGS), we investigate the design of a quantum voting scheme, which has clear advantages over conventional ones in terms of efficiency and graphicness. Three phases are included, i.e., the preparing phase, the voting phase and the counting phase, together with three parties, i.e., the voters, the tallyman and the ballot agency. Two major voting operations are performed on the yielded CVGS in the voting process, namely the local rotation transformation and the displacement operation. The voting information is carried by the CVGS established beforehand, whose persistent entanglement is deployed to keep the privacy of votes and the anonymity of legal voters. For practical applications, two CVGS-based quantum ballots, i.e., comparative ballot and anonymous survey, are specially designed, followed by extended ballot schemes for binary-valued and multi-valued ballots under some constraints on the voting design. Security is ensured by the entanglement of the CVGS, the voting operations and the laws of quantum mechanics. The proposed schemes can be implemented using standard off-the-shelf components when compared to discrete-variable quantum voting schemes, owing to the characteristics of CV-based quantum cryptography.
Wetherbee, Gregory A.; Martin, RoseAnn
2017-02-06
The U.S. Geological Survey Branch of Quality Systems operates the Precipitation Chemistry Quality Assurance Project (PCQA) for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) and National Atmospheric Deposition Program/Mercury Deposition Network (NADP/MDN). Since 1978, various programs have been implemented by the PCQA to estimate data variability and bias contributed by changing protocols, equipment, and sample submission schemes within NADP networks. These programs independently measure the field and laboratory components that contribute to the overall variability of NADP wet-deposition chemistry and precipitation depth measurements. The PCQA evaluates the quality of analyte-specific chemical analyses from the two currently (2016) contracted NADP laboratories, the Central Analytical Laboratory and the Mercury Analytical Laboratory, by comparing laboratory performance among participating national and international laboratories. Sample contamination and stability are evaluated for NTN and MDN by using externally field-processed blank samples provided by the Branch of Quality Systems. A colocated sampler program evaluates the overall variability of NTN measurements and bias between dissimilar precipitation gages and sample collectors. This report documents historical PCQA operations and general procedures for each of the external quality-assurance programs from 2007 to 2016.
Largoni, Martina; Facco, Pierantonio; Bernini, Donatella; Bezzo, Fabrizio; Barolo, Massimiliano
2015-10-10
Monitoring batch bioreactors is a complex task, because several sources of variability can affect a running batch and impact the final product quality. Additionally, the product quality itself may not be measurable on line, but requires sampling and lab analysis taking several days to complete. In this study we show that, by using appropriate process analytical technology tools, the operation of an industrial batch bioreactor used in avian vaccine manufacturing can be effectively monitored as the batch progresses. Multivariate statistical models are built from historical databases of batches already completed, and they are used to enable the real-time identification of the variability sources, to reliably predict the final product quality, and to improve process understanding, paving the way to a reduction of final product rejections, as well as a reduction of the product cycle time. It is also shown that the product quality "builds up" mainly during the first half of a batch, suggesting on the one side that reducing the variability during this period is crucial, and on the other side that the batch length can possibly be shortened. Overall, the study demonstrates that, by using a Quality-by-Design approach centered on the appropriate use of mathematical modeling, quality can indeed be built "by design" into the final product, whereas end-point product testing can progressively lose importance in product manufacturing. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lievana, A.; Ladah, L. B.; Lavin, M. F.; Filonov, A. E.; Tapia, F. J.; Leichter, J.; Valencia Gasti, J. A.
2016-02-01
Physical transport processes operating within the coastal ocean of Baja California, Mexico, such as nonlinear internal waves, are diverse and variable and act on a variety of temporal and spatial scales. Understanding the influence of nonlinear internal waves, which are in part responsible for the exchange of water properties between coastal and offshore environments, on the structure of intertidal communities is important for the generation of working ecological models. The relationship between intertidal community structure and the supply of ecological subsidies associated with physical transport processes that operate on relatively short spatial and temporal scales, such as the internal tide, must be understood, because processes operating on distinct scales may respond uniquely as the climate changes. We designed an experiment to quantify recruitment and adult survivorship of Chthamalus sp., whose settlement was associated with internal wave activity in the nearby ocean, and found that the number of settlers was a robust predictor of the number of adults observed, indicating that post-settlement processes such as competition and predation are not likely to significantly affect the structure of the intertidal barnacle community resulting from internal-wave forced settlement.
Complexity associated with the optimisation of capability options in military operations
NASA Astrophysics Data System (ADS)
Pincombe, A.; Bender, A.; Allen, G.
2005-12-01
In the context of a military operation, even if the intended actions, the geographic location, and the capabilities of the opposition are known, there are still some critical uncertainties that could have a major impact on the effectiveness of a given set of capabilities. These uncertainties include unpredictable events and the response alternatives that are available to the command and control elements of the capability set. They greatly complicate any a priori mathematical description. In a forecasting approach, the most likely future might be chosen and a solution sought that is optimal for that case. With scenario analysis, futures are proposed on the basis of critical uncertainties and the option that is most robust is chosen. We use scenario analysis, but our approach is different in that we focus on the complexity and use the coupling between scenarios and options to create information on ideal options. The approach makes use of both soft and hard operations research methods, with subject matter expertise being used to define plausible responses to scenarios. In each scenario, uncertainty affects only a subset of the system-inherent variables and the variables that describe system-environment interactions. It is this scenario-specific reduction of variables that makes the problem mathematically tractable. The process we define is significantly different from existing scenario analysis processes, so we have named it adversarial scenario analysis. It can be used in conjunction with other methods, including recent improvements to the scenario analysis process. To illustrate the approach, we undertake a tactical-level scenario analysis for a logistics problem that is defined by a network, expected throughputs to end users, the transport capacity available, the infrastructure at the nodes and the capacities of roads, stocks, etc. The throughput capacity, i.e. the effectiveness, of the system relies on all of these variables and on the couplings between them. The system is initially in equilibrium for a given level of demand. However, different, and simpler, solutions emerge as the balance of couplings and the importance of variables change. The scenarios describe such changes in conditions. For each scenario it was possible to define measures that describe the differences between options. As with agent-based distillations, the solution is essentially qualitative and exploratory, bringing awareness of possible future difficulties and of the capabilities that are necessary if we are to deal successfully with those difficulties.
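As a hedged illustration of scoring one option in one scenario: if the logistics network is reduced to node-to-node capacities, its throughput to end users can be computed as a maximum flow (a simplification the paper does not necessarily make). An Edmonds-Karp sketch on an invented four-node network:

```python
# Hedged sketch: network throughput as maximum flow (Edmonds-Karp).
# Nodes and capacities are invented for illustration.
from collections import deque

def max_flow(capacity, source, sink):
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual network
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return total                       # no augmenting path remains
        # Find the bottleneck along the path, then augment
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Nodes: 0 = depot, 1-2 = transshipment points, 3 = end users (loads/day).
cap = [[0, 10, 8, 0],
       [0, 0, 3, 7],
       [0, 0, 0, 6],
       [0, 0, 0, 0]]
print("throughput capacity:", max_flow(cap, 0, 3), "loads/day")   # -> 13
```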
Wood, Tamara M.; Fuhrer, Gregory J.; Morace, Jennifer L.
1996-01-01
Based on the analysis of data that they have been collecting for several years, the Klamath Tribes recently recommended that the Bureau of Reclamation (Reclamation) modify the operating plan for the dam to make the minimum lake levels for the June-August period more closely resemble pre-dam conditions (Jacob Kann, written commun., 1995). The U.S. Geological Survey (USGS) was asked to analyze the available data for the lake and to assess whether the evidence exists to conclude that year-to-year differences in certain lake water-quality variables are related to year-to-year differences in lake level. The results of the analysis will be used as scientific input in the process of developing an operating plan for the Link River Dam.
Phase change water processing for Space Station
NASA Technical Reports Server (NTRS)
Zdankiewicz, E. M.; Price, D. F.
1985-01-01
The use of a vapor compression distillation subsystem (VCDS) for water recovery on the Space Station is analyzed. The self-contained automated system can process waste water at a rate of 32.6 kg/day and requires only 115 W of electric power. The improvements in the mechanical components of VCDS are studied. The operation of VCDS in the normal mode is examined. The VCDS preprototype is evaluated based on water quality, water production rate, and specific energy. The relation between water production rate and fluids pump speed is investigated; it is concluded that a variable speed fluids pump will optimize water production. Components development and testing currently being conducted are described. The properties and operation of the proposed phase change water processing system for the Space Station, based on vapor compression distillation, are examined.
NASA Astrophysics Data System (ADS)
Kelber, C.; Marke, S.; Trommler, U.; Rupprecht, C.; Weis, S.
2017-03-01
Thermal spraying processes are becoming increasingly important in high-technology areas, such as automotive engineering and medical technology. The method offers the advantage of a local layer application with different materials and high deposition rates. Challenges in the application of thermal spraying result from the complex interaction of different influencing variables, which can be attributed to the properties of different materials, operating equipment supply, electrical parameters, flow mechanics, plasma physics and automation. In addition, spraying systems are subject to constant wear. Due to the process specification and the high demands on the produced coatings, innovative quality assurance tools are necessary. A central aspect, which has not yet been considered, is the data management in relation to the present measured variables, in particular the spraying system, the handling system, working safety devices and additional measuring sensors. Both the recording of all process-characterizing variables, their linking and evaluation as well as the use of the data for the active process control presuppose a novel, innovative control system (hardware and software) that was to be developed within the scope of the research project. In addition, new measurement methods and sensors are to be developed and qualified in order to improve the process reliability of thermal spraying.
Statistical methods of estimating mining costs
Long, K.R.
2011-01-01
Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
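The first step, re-estimating Taylor's Rule, amounts to fitting capacity = a·tonnage^b, which becomes linear after taking logarithms. A minimal sketch with invented deposit data (the classic exponent is near 0.75):

```python
# Hedged sketch of a Taylor's-Rule-type fit, capacity = a * tonnage^b,
# by least squares on log-transformed data. Data are invented placeholders.
import numpy as np

tonnage = np.array([1e6, 5e6, 2e7, 8e7, 3e8])          # ore tonnage (t)
capacity = np.array([900, 3000, 8500, 24000, 65000])   # mill rate (t/d)

b, log_a = np.polyfit(np.log(tonnage), np.log(capacity), 1)
a = np.exp(log_a)
print(f"capacity ~ {a:.2f} * tonnage^{b:.2f}")
```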
Features of electric drive sucker rod pumps for oil production
NASA Astrophysics Data System (ADS)
Gizatullin, F. A.; Khakimyanov, M. I.; Khusainov, F. F.
2018-01-01
This article concerns the operating modes of electric drives for downhole sucker rod pumps. Downhole oil production processes are very energy intensive. Oil fields contain many wells, and many of them operate in inefficient modes with significant additional losses. The authors propose technical solutions to improve the energy performance of pump unit drives: counterweight balancing, reducing electric motor power, replacing induction motors with permanent magnet motors, replacing balancer drives with chain drives, and using variable frequency drives.
1996-12-01
This includes an exemption from publishing the opportunity in the Commerce Business Daily (CBD) and elimination of the requirement to hold the... of assigned programs. In discharging this responsibility, the 990 coordinates his efforts with other ASN(RDMA) offices. b. TEFlO..M-VAL OPERATIONS... Communications, Computers and Information Systems; CA Civil Affairs; CAIV Cost as an Independent Variable; CBD Commerce Business Daily; CBPL Capabilities
Transmitter experiment package for the communications technology satellite
NASA Technical Reports Server (NTRS)
Farber, B.; Goldin, D. S.; Marcus, B.; Mock, P.
1977-01-01
The operating requirements, system design characteristics, high voltage packaging considerations, nonstandard components development, and test results for the transmitter experiment package (TEP) are described. The TEP is used for broadcasting power transmission from the Communications Technology Satellite. The TEP consists of a 12 GHz, 200-watt output stage tube (OST), a high voltage processing system that converts the unregulated spacecraft solar array power to the regulated voltages required for OST operation, and a variable conductance heat pipe system that is used to cool the OST body.
Multibeam sonar backscatter data processing
NASA Astrophysics Data System (ADS)
Schimel, Alexandre C. G.; Beaudoin, Jonathan; Parnum, Iain M.; Le Bas, Tim; Schmidt, Val; Keith, Gordon; Ierodiaconou, Daniel
2018-06-01
Multibeam sonar systems now routinely record seafloor backscatter data, which are processed into backscatter mosaics and angular responses, both of which can assist in identifying seafloor types and morphology. Those data products are obtained from the multibeam sonar raw data files through a sequence of data processing stages that follows a basic plan, but the implementation of which varies greatly between sonar systems and software. In this article, we provide a comprehensive review of this backscatter data processing chain, with a focus on the variability in the possible implementation of each processing stage. Our objective for undertaking this task is twofold: (1) to provide an overview of backscatter data processing for the consideration of the general user and (2) to provide suggestions to multibeam sonar manufacturers, software providers and the operators of these systems and software for eventually reducing the lack of control, uncertainty and variability associated with current data processing implementations and the resulting backscatter data products. One such suggestion is the adoption of a nomenclature for increasingly refined levels of processing, akin to the nomenclature adopted for satellite remote-sensing data deliverables.
Lilienthal, S.; Klein, M.; Orbach, R.; Willner, I.; Remacle, F.
2017-01-01
The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series. PMID:28507669
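The AND-as-bimolecular-step mapping can be seen in a toy mass-action integration: product accumulates only when both inputs are present. The rate constant, concentrations and step sizes below are invented:

```python
# Hedged toy rendering of the AND-gate mapping: for A + B -> C, the
# bimolecular rate k*[A]*[B] is nonzero only when both inputs are present.
# Simple Euler integration of the mass-action kinetics.
def and_gate(a0, b0, k=1.0, dt=0.01, steps=1000):
    a, b, c = a0, b0, 0.0
    for _ in range(steps):
        rate = k * a * b       # bimolecular (AND-like) rate law
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return c

for a0, b0 in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    print(f"[A]0={a0}, [B]0={b0} -> [C]={and_gate(a0, b0):.3f}")
# Only the (1, 1) input produces appreciable product, mimicking AND.
```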
Autonomous Control of Space Nuclear Reactors
NASA Technical Reports Server (NTRS)
Merk, John
2013-01-01
Nuclear reactors to support future robotic and manned missions impose new and innovative technological requirements for their control and protection instrumentation. Long-duration surface missions necessitate reliable autonomous operation, and manned missions impose added requirements for failsafe reactor protection. There is a need for an advanced instrumentation and control system for space-nuclear reactors that addresses both aspects of autonomous operation and safety. The Reactor Instrumentation and Control System (RICS) consists of two functionally independent systems: the Reactor Protection System (RPS) and the Supervision and Control System (SCS). Through these two systems, the RICS supervises and controls a nuclear reactor during normal operational states; it also monitors the operation of the reactor and, upon sensing a system anomaly, automatically takes the appropriate actions to prevent an unsafe or potentially unsafe condition from occurring. The RPS encompasses all electrical and mechanical devices and circuitry, from sensors to actuation device output terminals. The SCS contains a comprehensive data-acquisition system, consisting of primary measurement elements, transmitters, and conditioning modules, to measure different groups of variables continuously. These reactor control variables can be categorized into two groups: those directly related to the behavior of the core (known as nuclear variables) and those related to secondary systems (known as process variables). Reliable closed-loop reactor control is achieved by processing the acquired variables and actuating the appropriate device drivers to maintain the reactor in a safe operating state. The SCS must prevent a deviation from the reactor nominal conditions by managing limitation functions in order to avoid RPS actions. The RICS has four identical redundancies that comply with physical separation, electrical isolation, and functional independence. This architecture complies with the safety requirements of a nuclear reactor and provides high availability to the host system. The RICS is intended to interface with a host computer (the computer of the spacecraft where the reactor is mounted). The RICS leverages the safety features inherent in Earth-based reactors and also integrates the wide range neutron detector (WRND). A neutron detector provides the input that allows the RICS to do its job. The RICS is based on proven technology currently in use at a nuclear research facility. In its most basic form, the RICS is a ruggedized, compact data-acquisition and control system that could be adapted to support a wide variety of harsh environments. As such, the RICS could be a useful instrument outside the scope of a nuclear reactor, including military applications where failsafe data acquisition and control is required with stringent size, weight, and power constraints.
Mitchell, D A; von Meien, O F
2000-04-20
Zymotis bioreactors for solid-state fermentation (SSF) are packed-bed bioreactors with internal cooling plates. This design has the potential to overcome the problem of heat removal, which is one of the main challenges in SSF. In ordinary packed-bed bioreactors, which lack internal plates, large axial temperature gradients arise, leading to poor microbial growth at the end of the bed near the air outlet. The Zymotis design is suitable for SSF processes in which the substrate bed must be kept static, but little is known about how to design and operate Zymotis bioreactors. We use a two-dimensional heat transfer model, describing the growth of Aspergillus niger on a starchy substrate, to provide guidelines for the optimum design and operation of Zymotis bioreactors. As for ordinary packed beds, the superficial velocity of the process air is a key variable. However, the Zymotis design introduces other important variables, namely the spacing between the internal cooling plates and the temperature of the cooling water. High productivities can be achieved at large scale, but only if small spacings between the cooling plates are used and the cooling water temperature is varied during the fermentation in response to bed temperatures. Copyright 2000 John Wiley & Sons, Inc.
Latysh, Natalie E.; Wetherbee, Gregory A.
2007-01-01
The U.S. Geological Survey (USGS) Branch of Quality Systems operates external quality assurance programs for the National Atmospheric Deposition Program/Mercury Deposition Network (NADP/MDN). Beginning in 2004, three programs have been implemented: the system blank program, the interlaboratory comparison program, and the blind audit program. Each program was designed to measure error contributed by specific components in the data-collection process. The system blank program assesses contamination that may result from sampling equipment, field exposure, and routine handling and processing of the wet-deposition samples. The interlaboratory comparison program evaluates bias and precision of analytical results produced by the Mercury Analytical Laboratory (HAL) for the NADP/MDN, operated by Frontier GeoSciences, Inc. The HAL's performance is compared with the performance of five other laboratories. The blind audit program assesses bias and variability of MDN data produced by the HAL using solutions disguised as environmental samples to ascertain true laboratory performance. This report documents the implementation of quality assurance procedures for the NADP/MDN and the operating procedures for each of the external quality assurance programs conducted by the USGS. The USGS quality assurance information provides a measure of confidence to NADP/MDN data users that measurement variability is distinguished from environmental signals.
NASA Astrophysics Data System (ADS)
Su, Yung-Chao; Wu, Shin-Tza
2017-09-01
We study theoretically the teleportation of a controlled-phase (cz) gate through measurement-based quantum-information processing for continuous-variable systems. We examine the degree of entanglement in the output modes of the teleported cz-gate for two classes of resource states: the canonical cluster states that are constructed via direct implementations of two-mode squeezing operations and the linear-optical version of cluster states which are built from linear-optical networks of beam splitters and phase shifters. In order to reduce the excess noise arising from finite-squeezed resource states, teleportation through resource states with different multirail designs will be considered and the enhancement of entanglement in the teleported cz gates will be analyzed. For multirail cluster with an arbitrary number of rails, we obtain analytical expressions for the entanglement in the output modes and analyze in detail the results for both classes of resource states. At the same time, we also show that for uniformly squeezed clusters the multirail noise reduction can be optimized when the excess noise is allocated uniformly to the rails. To facilitate the analysis, we develop a trick with manipulations of quadrature operators that can reveal rather efficiently the measurement sequence and corrective operations needed for the measurement-based gate teleportation, which will also be explained in detail.
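For reference, the continuous-variable cz gate being teleported acts on the mode quadratures in the standard textbook form below (with ħ = 1 and gate strength g); the paper's own conventions may differ:

```latex
\mathrm{CZ}(g) = e^{\, i g \hat{q}_1 \hat{q}_2}, \qquad
\hat{q}_j \mapsto \hat{q}_j \ (j = 1, 2), \qquad
\hat{p}_1 \mapsto \hat{p}_1 + g\,\hat{q}_2, \qquad
\hat{p}_2 \mapsto \hat{p}_2 + g\,\hat{q}_1 .
```

With finite squeezing, these transformations acquire additive excess-noise terms, which is what the multirail constructions are designed to suppress.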
Kaplan, Ulas; Tivnan, Terrence
2014-01-01
Intrapersonal variability and multiplicity in the complexity of moral motivation were examined from Dynamic Systems and Self-Determination Theory perspectives. L. Kohlberg's (1969) stages of moral development are reconceptualized as soft-assembled and dynamically transformable process structures of motivation that may operate simultaneously within a person in different degrees. Moral motivation is conceptualized as the real-time process of self-organization of cognitive and emotional dynamics out of which moral judgment and action emerge. A detailed inquiry into intrapersonal variation in moral motivation is carried out based on the differential operation of multiple motivational structures. A total of 74 high school students and 97 college students participated in the study by completing a new questionnaire involving 3 different hypothetical moral judgments. As hypothesized, findings revealed significant multiplicity in the within-person operation of developmental stage structures, and intrapersonal variability in the degrees to which stages were used. Developmental patterns were found in terms of different distributions of multiple stages between the high school and college samples, as well as the association between age and overall motivation scores. Differential relations of specific emotions to moral motivation were revealed, confirming the value of differentiating multiple emotions. Implications of the present theoretical perspective and the findings for understanding the complexity of moral judgment and motivation are discussed.
Low-sensitivity, frequency-selective amplifier circuits for hybrid and bipolar fabrication.
NASA Technical Reports Server (NTRS)
Pi, C.; Dunn, W. R., Jr.
1972-01-01
A network is described which is suitable for realizing a low-sensitivity high-Q second-order frequency-selective amplifier for high-frequency operation. Circuits are obtained from this network which are well suited for realizing monolithic integrated circuits and which do not require any process steps more critical than those used for conventional monolithic operational and video amplifiers. A single chip version using compatible thin-film techniques for the frequency determination elements is then feasible. Center frequency and bandwidth can be set independently by trimming two resistors. The frequency selective circuits have a low sensitivity to the process variables, and the sensitivity of the center frequency and bandwidth to changes in temperature is very low.
Silveira, Jefferson E; Cardoso, Tais O; Barreto-Rodrigues, Marcio; Zazo, Juan A; Casas, José A
2018-05-01
This work assesses the role of the operational conditions in the electro-activation of persulfate (PS) using a sacrificial iron electrode as a continuous low-cost Fe²⁺ source. An aqueous phenol solution (100 mg L⁻¹) was selected as model effluent. The studied variables include current density (1-10 mA cm⁻²), persulfate concentration (0.7-2.85 g L⁻¹), temperature (30-90 °C) and the solution conductivity (2.7-20.7 mS cm⁻¹), using Na₂SO₄ and NaCl as supporting electrolytes. A mineralization degree of around 80% with Na₂SO₄ and 92% in the presence of NaCl was achieved at 30 °C using 2.15 g L⁻¹ PS at the lowest current density tested (1 mA cm⁻²). Besides PS concentration, temperature was the main variable affecting the process. In the range of 30-70 °C it showed a positive effect, achieving TOC conversion above 95% (using Na₂SO₄ under the previous conditions), along with a significant increase in iron sludge, which adversely affects the economy of the process. A lumped and simplified kinetic model based on persulfate consumption and TOC mineralization is suggested. The activation energy obtained for the TOC decay was 29 kJ mol⁻¹. An estimated operating cost of US$ 3.00 per m³ was obtained, demonstrating the economic feasibility of this process.
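Given the reported activation energy, the Arrhenius equation indicates how strongly the TOC-decay rate constant responds over the 30-70 °C window; the calculation below uses only Ea = 29 kJ mol⁻¹ from the abstract, everything else being standard:

```python
# Hedged sketch: rate-constant ratio between two temperatures from the
# Arrhenius equation, using the activation energy reported in the abstract.
import math

EA = 29e3     # activation energy, J/mol (from the abstract)
R = 8.314     # gas constant, J/(mol K)

def rate_ratio(t1_c, t2_c):
    """k(T2)/k(T1), temperatures given in Celsius."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp(-EA / R * (1.0 / t2 - 1.0 / t1))

print(f"k(70 C)/k(30 C) = {rate_ratio(30, 70):.1f}")   # roughly 3.8-fold faster
```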
Application of Advanced Process Control techniques to a pusher type reheating furnace
NASA Astrophysics Data System (ADS)
Zanoli, S. M.; Pepe, C.; Barboni, L.
2015-11-01
In this paper an Advanced Process Control system aimed at controlling and optimizing a pusher-type reheating furnace located in an Italian steel plant is proposed. The designed controller replaced the previous control system, based on PID controllers manually operated by process operators. A two-layer Model Predictive Control architecture has been adopted that, exploiting a chemical, physical and economic model of the process, overcomes the limitations of plant operators' mental model and knowledge. In addition, an ad hoc decoupling strategy has been implemented, allowing the selection of the manipulated variables to be used for the control of each single process variable. Finally, in order to improve the system's flexibility and resilience, the controller has been equipped with a supervision module. A profitable trade-off between conflicting specifications, e.g. safety, quality and production constraints, energy saving and pollution impact, has been guaranteed. Simulation tests and real plant results demonstrated the soundness and reliability of the proposed system.
Surface laser marking optimization using an experimental design approach
NASA Astrophysics Data System (ADS)
Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.
2017-04-01
Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, τ_pulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using Design of Experiment (DOE) methods: the Taguchi methodology and a response surface methodology (RSM). A design is first created using the MINITAB program, and then the laser marking process is performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed, and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.
Vehicle energy conservation indicating device and process for use
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crump, J.M.
A vehicle energy conservation indicating device comprises an integrated instrument cluster functioning basically as a nomographic computing mechanism. The odometer distance-traveled indicator computing mechanism is linked with the fuel indicating gauge mechanism such that a three-variable equation computing mechanism is obtained. The three variables are distance traveled, quantity of fuel consumed, and distance traveled per unit of fuel consumed. Energy conservation is achieved by operating the vehicle under such performance conditions as to produce the highest possible value for distance traveled per unit of fuel consumed. The instrument panel cluster focuses the operator's attention on conserving energy and provides continuous stimulation to do so. Furthermore, the vehicle energy conservation indicating device can be adapted for recording these performance variables on a tape-type printout. The speedometer advises the vehicle operator when he is obeying or breaking the speed laws, which are enforced and monitored by the police with specific punishment prescribed for violations of the law. At this time there is no comparable procedure for enforcing vehicle energy conservation. Thus, this direct readout of distance traveled per unit of energy will moderate operation in a manner analogous to subliminal advertising. This device becomes the focal point of the instrument panel along with the speedometer, thereby providing constant motivation to obey both the speed and energy conservation laws.
NOAA's Satellite Climate Data Records: The Research to Operations Process and Current State
NASA Astrophysics Data System (ADS)
Privette, J. L.; Bates, J. J.; Kearns, E. J.; NOAA's Climate Data Record Program
2011-12-01
In support of NOAA's mandate to provide climate products and services to the Nation, the National Climatic Data Center initiated the satellite Climate Data Record (CDR) Program. The Program develops and sustains climate information products derived from satellite data that NOAA has collected over the past 30+ years. These are the longest sets of continuous global measurements in existence. Data from other satellite programs, including those of NASA, the Department of Defense, and foreign space agencies, are also used. NOAA is now applying advanced analysis techniques to these historic data. This process is unraveling underlying climate trend and variability information and returning new value from the data. However, the transition of complex data processing chains, voluminous data products and documentation into a systematic, configuration-controlled context involves many challenges. In this presentation, we focus on the Program's process for research-to-operations transition and the evolving systems designed to ensure transparency, security, economy and authoritative value. The Program has adopted a two-phase process defined by an Initial Operational Capability (IOC) and a Full Operational Capability (FOC). The principles and procedures for IOC are described, as well as the process for moving CDRs from IOC to FOC. Finally, we describe the state of the CDRs in all phases of the Program, with an emphasis on the seven community-developed CDRs transitioned to NOAA in 2011. Details on CDR access and distribution will be provided.
Kim, Dongcheol; Rhee, Sehun
2002-01-01
CO₂ welding is a complex process. Weld quality depends on arc stability and on minimizing the effects of disturbances or changes in the operating conditions that commonly occur during the welding process. In order to minimize these effects, a controller can be used. In this study, a fuzzy controller was used to stabilize the arc during CO₂ welding. The input variable of the controller was the Mita index, which quantitatively estimates the arc stability that is influenced by many welding process parameters. Because the welding process is complex, a mathematical model of the Mita index was difficult to derive. Therefore, the parameter settings of the fuzzy controller were determined by performing actual control experiments without a mathematical model of the controlled process. As a solution, the Taguchi method was used to determine the optimal control parameter settings of the fuzzy controller, making the control performance robust and insensitive to changes in the operating conditions.
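A miniature of the general fuzzy-control idea (not the authors' controller: the membership ranges, rules and outputs below are invented), with one stability-index input and a weighted-average defuzzification:

```python
# Hedged sketch of a one-input fuzzy controller: fuzzify a stability index
# with triangular membership sets, apply three rules, defuzzify by a
# weighted average. All numbers are hypothetical.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(stability_index):
    mu_low = tri(stability_index, -0.5, 0.0, 0.5)   # "stability is low"
    mu_ok = tri(stability_index, 0.2, 0.5, 0.8)     # "stability is acceptable"
    mu_high = tri(stability_index, 0.5, 1.0, 1.5)   # "stability is high"
    # Rules: low stability -> raise the voltage trim; high -> lower it.
    rules = [(mu_low, +2.0), (mu_ok, 0.0), (mu_high, -2.0)]
    weight = sum(mu for mu, _ in rules)
    return sum(mu * out for mu, out in rules) / weight if weight else 0.0

for idx in [0.1, 0.5, 0.9]:
    print(f"index={idx:.1f} -> voltage trim {fuzzy_correction(idx):+.2f} V")
```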
Eusebi, Anna Laura; Massi, Alessandro; Sablone, Emiliano; Santinelli, Martina; Battistoni, Paolo
2012-01-01
The treatment of industrial liquid wastes involves a wide range of technologies and must contend with the high variability of the influent physical-chemical characteristics. Under these conditions, achieving satisfactory biological unit efficiency can be complicated. An alternate-cycle (AC) process with aerobic and anoxic phases fed in a continuous way was evaluated as an operative solution to optimize the performance of the biological reactor in a platform for the treatment of industrial liquid wastes. The process application produced a stable-quality effluent with an average concentration of 25 mg TN L⁻¹, in accordance with the legal limits. The use of discharged wastewaters as rapid carbon sources to support the anoxic phase of the alternate cycle achieves a 95% reduction of TN without impact on the total operating costs. The evaluation of the micro-pollutants' behaviour highlighted a bio-adsorption phenomenon in the first reactor. The implementation of the process yielded energy savings of 31% during period 1 and 19% for periods 2, 3 and 4.
NASA Technical Reports Server (NTRS)
1981-01-01
Liquid diffusion masks and liquid applied dopants to replace the CVD Silox masking and gaseous diffusion operations specified for forming junctions in the Westinghouse baseline process sequence for producing solar cells from dendritic web silicon were investigated. The baseline diffusion masking and drive processes were compared with those involving direct liquid applications to the dendritic web silicon strips. Attempts were made to control the number of variables by subjecting dendritic web strips cut from a single web crystal to both types of operations. Data generated reinforced earlier conclusions that efficiency levels at least as high as those achieved with the baseline back junction formation process can be achieved using liquid diffusion masks and liquid dopants. The deliveries of dendritic web sheet material and solar cells specified by the current contract were made as scheduled.
Ajala, E O; Aberuagba, F; Olaniyan, A M; Onifade, K R
2016-01-01
Shea butter (SB) was extracted from its kernel using n-hexane as solvent in an optimization study. The aims were to determine the optimal operating variables giving the optimum yield of SB and to study the effect of the solvent on the physico-chemical properties and chemical composition of SB extracted using n-hexane. A Box-Behnken response surface methodology (RSM) was used for the optimization study, while statistical analysis using ANOVA was used to test the significance of the variables in the process. The variables considered were: sample weight (g), solvent volume (ml) and extraction time (min). The physico-chemical properties of the extracted SB were determined using standard methods, and the chemical composition by Fourier Transform Infrared Spectroscopy (FTIR). The results of the RSM analysis showed that the three variables investigated have a significant effect (p < 0.05) on the % yield of SB, with R² = 0.8989, indicating a good fit of the second-order model. Based on this model, the optimal operating variables for the extraction process were established as: sample weight of 30.04 g, solvent volume of 346.04 ml and extraction time of 40 min, which gave a 66.90% yield of SB. Furthermore, the physico-chemical properties obtained for the shea butter extracted using the traditional method (SBT) showed that it is a more suitable raw material for food, biodiesel production, cosmetics, and medicinal and pharmaceutical purposes than shea butter extracted using the solvent extraction method (SBS). FTIR results obtained for the two samples were similar to those obtainable from other vegetable oils.
2006-12-01
the goal of achieving zero waste is impractical. Thus, the concept of Lean has to be slightly modified to adjust for the uncertainty and variability... personnel are qualified as Black or Green belts, this may become an issue for them down the road. 2. Criticism Two: The goal of Lean is to achieve "Zero Waste"; therefore, how can the military achieve Lean in such a vast area of uncertainty and variability? Under the environment that DoD operates in
USDA-ARS?s Scientific Manuscript database
At the heterogeneous landscape of the Little River Watershed (LRW) near Tifton, Georgia, USA, an in situ network of stations operated by the US Department of Agriculture-Agricultural Research Service (USDA-ARS-SEWRL) was established in 2003 for the long-term study of climatic and soil biophysical processes. To ...
Online Lexical Competition during Spoken Word Recognition and Word Learning in Children and Adults
ERIC Educational Resources Information Center
Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth
2013-01-01
Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children…
Annual Report for Contract Number N00014-88-K-0641 (Carnegie Mellon Univ)
1991-09-30
labor-intensive process of data conversion. The described mechanism might provide a course of action for coping with the Software Release Problem, i.e... the recovery algorithm will guarantee that the database is restored to a previously saved consistent state. All subsequent setq operations to a variable will change the binding...
Test Operations Procedure (TOP) 06-2-301 Wind Testing
2017-06-14
critical to ensure that the test item is exposed to the required wind speeds. This may be an iterative process as the fan blade pitch, fan speed... fan speed is the variable that is adjusted to reach the required velocities. Calibration runs with a range of fan speeds are performed and a
AMMONIA ABSORPTION/AMMONIUM BISULFATE REGENERATION PILOT PLANT FOR FLUE GAS DESULFURIZATION
The report gives results of a pilot-plant study of the ammonia absorption/ammonium bisulfate regeneration process for removing SO2 from the stack gas of coal-fired power plants. Data were developed on the effects of such operating variables on the absorption of SO2 by ammoniacal l...
Evolutionary algorithm for vehicle driving cycle generation.
Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott
2011-09-01
Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with elitist strategy drives the optimization process, which consists of minimizing the errors to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
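A minimal sketch of the microtrip-concatenation idea under stated assumptions: a synthetic microtrip library, a single target parameter (average speed), one-point crossover, gene-swap mutation, and roulette-wheel selection with elitism, as the abstract describes. None of the data or tuning values come from the paper.

```python
# Sketch of the microtrip-concatenation idea: evolve a sequence of
# microtrip indices so the assembled cycle matches target parameters.
# Microtrip library and target are synthetic, not the paper's data.
import random

random.seed(0)
LIBRARY = [{"avg_speed": random.uniform(5, 60),
            "duration": random.uniform(30, 300)} for _ in range(40)]
TARGET_AVG_SPEED = 25.0
N_GENES, POP, GENS = 12, 50, 100

def fitness(chrom):
    trips = [LIBRARY[i] for i in chrom]
    total_t = sum(t["duration"] for t in trips)
    avg = sum(t["avg_speed"] * t["duration"] for t in trips) / total_t
    return -abs(avg - TARGET_AVG_SPEED)        # minimize parameter error

def roulette(pop, fits):
    shifted = [f - min(fits) + 1e-9 for f in fits]
    return random.choices(pop, weights=shifted, k=1)[0]

pop = [[random.randrange(len(LIBRARY)) for _ in range(N_GENES)]
       for _ in range(POP)]
for _ in range(GENS):
    fits = [fitness(c) for c in pop]
    elite = max(pop, key=fitness)              # elitist strategy
    nxt = [elite[:]]
    while len(nxt) < POP:
        a, b = roulette(pop, fits), roulette(pop, fits)
        cut = random.randrange(1, N_GENES)     # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:              # mutation: swap one gene
            child[random.randrange(N_GENES)] = random.randrange(len(LIBRARY))
        nxt.append(child)
    pop = nxt
print("best |avg speed - target|:", -fitness(max(pop, key=fitness)))
```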
Orlandini, S; Pasquini, B; Caprini, C; Del Bubba, M; Squarcialupi, L; Colotta, V; Furlanetto, S
2016-09-30
A comprehensive strategy involving the use of a mixture-process variable (MPV) approach and Quality by Design principles has been applied in the development of a capillary electrophoresis method for the simultaneous determination of the anti-inflammatory drug diclofenac and its five related substances. The selected operative mode consisted of microemulsion electrokinetic chromatography with the addition of methyl-β-cyclodextrin. The critical process parameters included both the mixture components (MCs) of the microemulsion and the process variables (PVs). The MPV approach allowed the simultaneous investigation of the effects of MCs and PVs on the critical resolution between diclofenac and its 2-deschloro-2-bromo analogue and on analysis time. MPV experiments were used both in the screening phase and in the Response Surface Methodology, making it possible to draw MCs and PVs contour plots and to find important interactions between MCs and PVs. Robustness testing was carried out by MPV experiments and validation was performed following International Conference on Harmonisation guidelines. The method was applied to a real sample of diclofenac gastro-resistant tablets. Copyright © 2016 Elsevier B.V. All rights reserved.
Deygout, François; Auburtin, Guy
2015-03-01
Variability in occupational exposure levels to bitumen emissions has been observed during road paving operations. This is due to recurrent field factors impacting the level of exposure experienced by workers during paving. The present study was undertaken in order to quantify the impact of such factors. Pre-identified variables encountered in the field were monitored and recorded during paving surveys, which were conducted randomly and covered typical applications performed by road crews. Multivariate variance analysis and regressions were then used on the computerized field data. The statistical investigations were limited by the relatively small size of the study (36 data points). Nevertheless, the particular use of the step-wise regression tool enabled the quantification of the impact of several predictors despite the existing collinearity between variables. The two bitumen organic fractions (particulates and volatiles) are associated with different field factors. The process conditions (machinery used and delivery temperature) have a significant impact on the production of airborne particulates and explain up to 44% of the variability. This confirms the outcomes described by previous studies. The influence of the production factors is nevertheless limited, and should be complemented by studying worker-related factors such as work style and the mix of tasks. The residual volatile compounds, being part of the bituminous binder and released during paving operations, control the volatile emissions; 73% of the encountered field variability is explained by the composition of the bitumen batch. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Modeling of the adsorptive removal of arsenic(III) using plant biomass: a bioremedial approach
NASA Astrophysics Data System (ADS)
Roy, Palas; Dey, Uttiya; Chattoraj, Soumya; Mukhopadhyay, Debasis; Mondal, Naba Kumar
2017-06-01
In the present work, the possibility of using a non-conventional finely ground (250 μm) Azadirachta indica (neem) bark powder [AiBP] has been tested as a low-cost biosorbent for the removal of arsenic(III) from water. The removal of As(III) was studied by performing a series of biosorption experiments (batch and column). The biosorption behavior of As(III) for batch and column operations was examined in the concentration ranges of 50-500 µg L-1 and 500.0-2000.0 µg L-1, respectively. Under optimized batch conditions, the AiBP could remove up to 89.96% of As(III) in a water system. An artificial neural network (ANN) model developed from the batch experimental data sets provided reasonable predictive performance (R(2) = 0.961 and 0.954) for As(III) biosorption. In batch operation, the initial As(III) concentration had the most significant impact on the biosorption process. For column operation, a central composite design (CCD) was applied to investigate the influence on the breakthrough time, to optimize the As(III) biosorption process, and to evaluate the interacting effects of the different operating variables. The optimized result of the CCD revealed that the AiBP was an effective and economically feasible biosorbent with a maximum breakthrough time of 653.9 min, when the independent variables were maintained at 2.0 g AiBP dose, 2000.0 µg L-1 initial As(III) concentration, and 3.0 mL min-1 flow rate, at a maximum desirability value of 0.969.
Identifying causal linkages between environmental variables and African conflicts
NASA Astrophysics Data System (ADS)
Nguy-Robertson, A. L.; Dartevelle, S.
2017-12-01
Environmental variables that contribute to droughts, flooding, and other natural hazards are often identified as factors contributing to conflict; however, few studies attempt to quantify these causal linkages. Recent research has demonstrated that the environment operates within a dynamical system framework and the influence of variables can be identified from convergent cross mapping (CCM) between shadow manifolds. We propose to use CCM to identify causal linkages between environmental variables and incidences of conflict. This study utilizes time series data from Climate Forecast System ver. 2 and MODIS satellite sensors processed using Google Earth Engine to aggregate country and regional trends. These variables are then compared to Armed Conflict Location & Event Data Project observations at similar scales. Results provide relative rankings of variables and their linkage to conflict. Being able to identify which factors contributed more strongly to a conflict can allow policy makers to prepare solutions to mitigate future crises. Knowledge of the primary environmental factors can lead to the identification of other variables to examine in the causal network influencing conflict.
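A minimal sketch of convergent cross mapping, the technique named above, on toy coupled series (a logistic-map "driver" standing in for an environmental variable and a driven series standing in for a conflict index); the embedding choices and data are illustrative only, not the study's.

```python
# Minimal convergent cross mapping (CCM) sketch on synthetic data:
# reconstruct a shadow manifold of one series by time-delay embedding
# and test how well its neighbors map onto the other series.
import numpy as np

n = 500
x = np.zeros(n); y = np.zeros(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):                      # coupled logistic maps: x drives y
    x[t+1] = x[t] * (3.8 - 3.8 * x[t])
    y[t+1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])

def embed(s, E=3, tau=1):
    """Time-delay embedding -> shadow manifold points."""
    m = len(s) - (E - 1) * tau
    return np.column_stack([s[i*tau : i*tau + m] for i in range(E)])

def ccm_skill(source, target, E=3, tau=1):
    """Correlation between `target` and its cross-map estimate built
    from the shadow manifold of `source`."""
    M = embed(source, E, tau)
    t_aligned = target[(E - 1) * tau:]
    est = np.empty(len(M))
    for i in range(len(M)):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        nbrs = np.argsort(d)[:E + 1]        # E+1 nearest neighbors
        w = np.exp(-d[nbrs] / d[nbrs].min())
        est[i] = np.sum(w * t_aligned[nbrs]) / w.sum()
    return np.corrcoef(t_aligned, est)[0, 1]

# If x causally drives y, x is recoverable from y's shadow manifold:
print("skill recovering x from M_y:", ccm_skill(y, x))
print("skill recovering y from M_x:", ccm_skill(x, y))
```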
Multiple Scattering in Random Mechanical Systems and Diffusion Approximation
NASA Astrophysics Data System (ADS)
Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun
2013-10-01
This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h − I)/h converges for small h to a second-order elliptic differential operator ℒ on compactly supported functions and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator ℒ. Both P_h and ℒ are self-adjoint, (densely) defined on the space of square-integrable functions over the (lower) half-space, where η is a stationary measure. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator ℒ respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.
Particle Engineering in Pharmaceutical Solids Processing: Surface Energy Considerations
Williams, Daryl R.
2015-01-01
During the past 10 years particle engineering in the pharmaceutical industry has become a topic of increasing importance. Engineers and pharmacists need to understand and control a range of key unit manufacturing operations, such as milling, granulation, crystallisation, powder mixing and the production of dry powder inhaled drugs, which can be very challenging. It has now become very clear that in many of these particle processing operations, the surface energy of the starting, intermediate or final products is a key factor in understanding the processing operation and/or the final product performance. This review will consider the surface energy and surface energy heterogeneity of crystalline solids, methods for the measurement of surface energy, effects of milling on powder surface energy, adhesion and cohesion in powder mixtures, crystal habits and surface energy, surface energy and powder granulation processes, performance of DPI systems and finally crystallisation conditions and surface energy. This review concludes that the importance of surface energy as a significant factor in understanding the performance of many particulate pharmaceutical products and processes has now been clearly established. It is nevertheless still a work in progress, both in terms of the development of methods and of establishing the limits for when surface energy is the key variable of relevance. PMID:25876912
Variability in the skin exposure of machine operators exposed to cutting fluids.
Wassenius, O; Järvholm, B; Engström, T; Lillienberg, L; Meding, B
1998-04-01
This study describes a new technique for measuring skin exposure to cutting fluids and evaluates the variability of skin exposure among machine operators performing cyclic (repetitive) work. The technique is based on video recording and subsequent analysis of the video tape by means of computer-synchronized video equipment. The time intervals at which the machine operator's hand was exposed to fluid were registered, and the total wet time of the skin was calculated by assuming different evaporation times for the fluid. The exposure of 12 operators with different work methods was analyzed in 6 different workshops, which included a range of machine types, from highly automated metal cutting machines (i.e., actual cutting and chip-removal machines) requiring operator supervision to conventional metal cutting machines, where the operator was required to maneuver the machine and manually exchange products. The relative wet time varied between 0% and 100%. A significant association between short cycle time and high relative wet time was noted. However, there was no relationship between the degree of automatization of the metal cutting machines and wet time. The study shows that skin exposure to cutting fluids can vary considerably between machine operators involved in manufacturing processes using different types of metal cutting machines. The machine type was not associated with dermal wetness. The technique appears to give objective information about dermal wetness.
Autonomous Dome for a Robotic Telescope
NASA Astrophysics Data System (ADS)
Kumar, A.; Sengupta, A.; Ganesh, S.
2016-12-01
The Physical Research Laboratory operates a 50 cm robotic observatory at Mount Abu (Rajasthan, India). This Automated Telescope for Variability Studies (ATVS) makes use of the Remote Telescope System 2 (RTS2) for autonomous operations. The observatory uses a 3.5 m dome from Sirius Observatories. We have developed electronics using Arduino electronic circuit boards with home-grown logic and software to control the dome operations. We are in the process of completing the drivers to link our Arduino-based dome controller with RTS2. This document is a short description of the various phases of the development and their integration to achieve the required objective.
An examination of loads and responses of a wind turbine undergoing variable-speed operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, A.D.; Buhl, M.L. Jr.; Bir, G.S.
1996-11-01
The National Renewable Energy Laboratory has recently developed the ability to predict turbine loads and responses for machines undergoing variable-speed operation. The wind industry has debated the potential benefits of operating wind turbines at variable speeds for some time. Turbine system dynamic responses (structural response, resonance, and component interactions) are an important consideration for variable-speed operation of wind turbines. The authors have implemented simple, variable-speed control algorithms for both the FAST and ADAMS dynamics codes. The control algorithm is a simple one, allowing the turbine to track the optimum power coefficient (C_p). The objective of this paper is to show turbine loads and responses for a particular two-bladed, teetering-hub, downwind turbine undergoing variable-speed operation. The authors examined the response of the machine to various turbulent wind inflow conditions. In addition, they compare the structural responses under fixed-speed and variable-speed operation. For this paper, they restrict their comparisons to those wind-speed ranges for which limiting power by some additional control strategy (blade pitch or aileron control, for example) is not necessary. The objective here is to develop a basic understanding of the differences in loads and responses between the fixed-speed and variable-speed operation of this wind turbine configuration.
NASA Astrophysics Data System (ADS)
Ma, H.-Y.; Chuang, C. C.; Klein, S. A.; Lo, M.-H.; Zhang, Y.; Xie, S.; Zheng, X.; Ma, P.-L.; Zhang, Y.; Phillips, T. J.
2015-12-01
We present an improved procedure of generating initial conditions (ICs) for climate model hindcast experiments with specified sea surface temperature and sea ice. The motivation is to minimize errors in the ICs and lead to a better evaluation of atmospheric parameterizations' performance in the hindcast mode. We apply state variables (horizontal velocities, temperature, and specific humidity) from the operational analysis/reanalysis for the atmospheric initial states. Without a data assimilation system, we apply a two-step process to obtain other necessary variables to initialize both the atmospheric (e.g., aerosols and clouds) and land models (e.g., soil moisture). First, we nudge only the model horizontal velocities toward operational analysis/reanalysis values, given a 6 h relaxation time scale, to obtain all necessary variables. Compared to the original strategy in which horizontal velocities, temperature, and specific humidity are nudged, the revised approach produces a better representation of initial aerosols and cloud fields which are more consistent and closer to observations and model's preferred climatology. Second, we obtain land ICs from an off-line land model simulation forced with observed precipitation, winds, and surface fluxes. This approach produces more realistic soil moisture in the land ICs. With this refined procedure, the simulated precipitation, clouds, radiation, and surface air temperature over land are improved in the Day 2 mean hindcasts. Following this procedure, we propose a "Core" integration suite which provides an easily repeatable test allowing model developers to rapidly assess the impacts of various parameterization changes on the fidelity of modeled cloud-associated processes relative to observations.
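The first step, relaxing only the horizontal winds toward analysis with a 6 h time scale, amounts to adding a (target − state)/τ term to the wind tendencies. A toy sketch with placeholder fields and a made-up grid follows; nothing here reflects the paper's model code.

```python
# Sketch of the wind-only nudging step: relax the model's horizontal
# velocities toward analysis values with a 6-hour time scale, leaving
# temperature and humidity free to adjust. Fields are placeholders.
import numpy as np

TAU = 6 * 3600.0          # 6 h relaxation time scale, in seconds
DT = 1800.0               # model time step, in seconds

def nudge_winds(u, v, u_ana, v_ana):
    """One nudging update: du/dt includes (u_ana - u) / tau."""
    u = u + DT * (u_ana - u) / TAU
    v = v + DT * (v_ana - v) / TAU
    return u, v

# Toy 2-D wind fields standing in for model and analysis states
u = np.zeros((4, 4)); v = np.zeros((4, 4))
u_ana = np.full((4, 4), 10.0); v_ana = np.full((4, 4), -5.0)
for _ in range(48):       # one model day of half-hour steps
    u, v = nudge_winds(u, v, u_ana, v_ana)
print(u[0, 0], v[0, 0])   # winds drawn most of the way to analysis
```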
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
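A sketch of the underlying idea (not REPTool's actual API): draw Latin Hypercube samples of input-raster error, run the geospatial model once per realization, and summarize the per-cell spread of the output. The rasters, error magnitudes, and the placeholder model are invented.

```python
# Sketch of LHS-based raster error propagation: perturb input rasters
# with Latin Hypercube samples of their error, run the model per
# realization, and summarize per-cell output uncertainty.
import numpy as np
from scipy.stats import qmc, norm

rng = np.random.default_rng(0)
slope = rng.uniform(0, 30, size=(50, 50))     # toy input raster
rain = rng.uniform(200, 800, size=(50, 50))   # toy input raster

N = 200                                       # LHS realizations
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(N)                         # stratified [0,1)^2 samples
# Spatially invariant errors: slope sd = 1 degree, rain sd = 25 mm
eps = norm.ppf(u) * np.array([1.0, 25.0])

def model(s, r):
    """Placeholder geospatial model: a runoff index per cell."""
    return 0.02 * r + 0.5 * s

runs = np.stack([model(slope + e_s, rain + e_r) for e_s, e_r in eps])
mean, sd = runs.mean(axis=0), runs.std(axis=0)  # per-cell uncertainty
print("cell (0,0): mean %.2f, sd %.2f" % (mean[0, 0], sd[0, 0]))
```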
Bergerson, Joule A; Kofoworola, Oyeshola; Charpentier, Alex D; Sleep, Sylvia; Maclean, Heather L
2012-07-17
Life cycle greenhouse gas (GHG) emissions associated with the two major recovery and extraction processes currently utilized in Alberta's oil sands, surface mining and in situ, are quantified. Process modules are developed and integrated into a life cycle model, GHOST (GreenHouse gas emissions of current Oil Sands Technologies), developed in prior work. Recovery and extraction of bitumen through surface mining and in situ processes result in 3-9 and 9-16 g CO(2)eq/MJ bitumen, respectively; upgrading emissions are an additional 6-17 g CO(2)eq/MJ synthetic crude oil (SCO) (all results are on a HHV basis). Although a high degree of variability exists in well-to-wheel emissions due to differences in technologies employed, operating conditions, and product characteristics, the surface mining dilbit and the in situ SCO pathways have the lowest and highest emissions, 88 and 120 g CO(2)eq/MJ reformulated gasoline, respectively. Through the use of improved data obtained from operating oil sands projects, we present ranges of emissions that overlap with emissions in the literature for conventional crude oil. An increased focus is recommended in policy discussions on understanding interproject variability of emissions of both oil sands and conventional crudes, as this has not been adequately represented in previous studies.
Ko, Jordon; Su, Wen-Jun; Chien, I-Lung; Chang, Der-Ming; Chou, Sheng-Hsin; Zhan, Rui-Yu
2010-02-01
Rice straw, an agricultural waste from Asia's staple crop, was collected as feedstock to convert cellulose into ethanol through enzymatic hydrolysis followed by a fermentation process. When the two process steps are performed sequentially, the scheme is referred to as separate hydrolysis and fermentation (SHF). The steps can also be performed simultaneously, i.e., simultaneous saccharification and fermentation (SSF). In this research, the kinetic model parameters of the cellulose saccharification step using rice straw as feedstock are obtained from real experimental data of cellulase hydrolysis. Furthermore, this model can be combined with a fermentation model at high glucose and ethanol concentrations to form an SSF model. The fermentation model is based on the cybernetic approach from a paper in the literature, extended to include both glucose and ethanol inhibition terms so as to better approximate actual plants. Dynamic effects of the operating variables in the enzymatic hydrolysis and fermentation models are analyzed. The operation of the SSF process is compared to the SHF process. It is shown that the SSF process is better at reducing the processing time when the product (ethanol) concentration is high. Means of improving the productivity of the overall SSF process by properly using aeration during batch operation are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Ming; Deng, Yi
2015-02-06
El Niño-Southern Oscillation (ENSO) and Annular Modes (AMs) represent respectively the most important modes of low frequency variability in the tropical and extratropical circulations. The future projection of the ENSO and AM variability, however, remains highly uncertain with the state-of-the-art coupled general circulation models. A comprehensive understanding of the factors responsible for the inter-model discrepancies in projecting future changes in the ENSO and AM variability, in terms of multiple feedback processes involved, has yet to be achieved. The proposed research aims to identify sources of such uncertainty and establish a set of process-resolving quantitative evaluations of the existing predictions of the future ENSO and AM variability. The proposed process-resolving evaluations are based on a feedback analysis method formulated in Lu and Cai (2009), which is capable of partitioning 3D temperature anomalies/perturbations into components linked to 1) radiation-related thermodynamic processes such as cloud and water vapor feedbacks, 2) local dynamical processes including convection and turbulent/diffusive energy transfer and 3) non-local dynamical processes such as the horizontal energy transport in the oceans and atmosphere. Taking advantage of the high-resolution, multi-model ensemble products from the Coupled Model Intercomparison Project Phase 5 (CMIP5) soon to be available at the Lawrence Livermore National Lab, we will conduct a process-resolving decomposition of the global three-dimensional (3D) temperature (including SST) response to the ENSO and AM variability in the preindustrial, historical and future climate simulated by these models. Specific research tasks include 1) identifying the model-observation discrepancies in the global temperature response to ENSO and AM variability and attributing such discrepancies to specific feedback processes, 2) delineating the influence of anthropogenic radiative forcing on the key feedback processes operating on ENSO and AM variability and quantifying their relative contributions to the changes in the temperature anomalies associated with different phases of ENSO and AMs, and 3) investigating the linkages between model feedback processes that lead to inter-model differences in time-mean temperature projection and model feedback processes that cause inter-model differences in the simulated ENSO and AM temperature response. Through a thorough model-observation and inter-model comparison of the multiple energetic processes associated with ENSO and AM variability, the proposed research serves to identify key uncertainties in model representation of ENSO and AM variability, and investigate how the model uncertainty in predicting time-mean response is related to the uncertainty in predicting response of the low-frequency modes. The proposal is thus a direct response to the first topical area of the solicitation: Interaction of Climate Change and Low Frequency Modes of Natural Climate Variability. It ultimately supports the accomplishment of the BER climate science activity Long Term Measure (LTM): "Deliver improved scientific data and models about the potential response of the Earth's climate and terrestrial biosphere to increased greenhouse gas levels for policy makers to determine safe levels of greenhouse gases in the atmosphere."
Pharmaceutical quality by design: product and process development, understanding, and control.
Yu, Lawrence X
2008-04-01
The purpose of this paper is to discuss pharmaceutical Quality by Design (QbD) and describe how it can be used to ensure pharmaceutical quality. QbD is described and some of its elements identified. Process parameters and quality attributes were identified for each unit operation during the manufacture of solid oral dosage forms. The use of QbD was contrasted with the evaluation of product quality by testing alone. QbD is a systematic approach to pharmaceutical development. It means designing and developing formulations and manufacturing processes to ensure predefined product quality. Some of the QbD elements include: defining the target product quality profile; designing the product and manufacturing processes; identifying critical quality attributes, process parameters, and sources of variability; and controlling manufacturing processes to produce consistent quality over time. Using QbD, pharmaceutical quality is assured by understanding and controlling formulation and manufacturing variables. Product testing confirms the product quality. Implementation of QbD will enable transformation of the chemistry, manufacturing, and controls (CMC) review of abbreviated new drug applications (ANDAs) into a science-based pharmaceutical quality assessment.
Nonlinear and Digital Man-machine Control Systems Modeling
NASA Technical Reports Server (NTRS)
Mekel, R.
1972-01-01
An adaptive modeling technique is examined by which controllers can be synthesized to provide corrective dynamics to a human operator's mathematical model in closed-loop control systems. The technique utilizes a class of Liapunov functions formulated for this purpose, Liapunov's stability criterion, and a model-reference system configuration. The Liapunov function is formulated to possess variable characteristics to take into consideration the identification dynamics. The time derivative of the Liapunov function generates the identification and control laws for the mathematical model system. These laws permit the realization of a controller which updates the human operator's mathematical model parameters so that model and human operator produce the same response when subjected to the same stimulus. A very useful feature is the development of a digital computer program which is easily implemented and modified concurrently with experimentation. The program permits the modeling process to interact with the experimentation process in a mutually beneficial way.
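For readers unfamiliar with this style of synthesis, the standard shape of a Liapunov-based model-reference adaptation law is sketched below; the notation (reference-model matrix A_m, parameter error θ̃, adaptation gain Γ) is generic and not taken from the report.

```latex
% Generic Liapunov-based model-reference adaptation (illustrative
% notation, not the report's): e is the model-following error,
% theta-tilde the parameter error, Gamma > 0 the adaptation gain.
\[
  \dot{e} = A_m e + b\,\tilde{\theta}^{\mathsf T}\phi, \qquad
  V(e,\tilde{\theta}) = e^{\mathsf T} P e
    + \tilde{\theta}^{\mathsf T}\Gamma^{-1}\tilde{\theta},
\]
\[
  A_m^{\mathsf T} P + P A_m = -Q, \qquad
  \dot{\theta} = -\Gamma\,\phi\,e^{\mathsf T} P b
  \;\Longrightarrow\;
  \dot{V} = -e^{\mathsf T} Q e \le 0 .
\]
```

The point mirrors the abstract: choosing the parameter update law so that the indefinite terms in the derivative of V cancel is what "generates the identification and control laws."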
Energy efficiency technologies in cement and steel industry
NASA Astrophysics Data System (ADS)
Zanoli, Silvia Maria; Cocchioni, Francesco; Pepe, Crescenzo
2018-02-01
In this paper, Advanced Process Control strategies aimed at achieving and improving energy efficiency in the cement and steel industries are proposed. A flexible and smart control structure constituted by several functional modules and blocks has been developed. The designed control strategy is based on Model Predictive Control techniques, formulated on linear models. Two industrial control solutions have been developed, oriented to energy efficiency and process control improvement in cement industry clinker rotary kilns (clinker production phase) and in steel industry billet reheating furnaces. Tailored customization procedures for the design of ad hoc control systems have been executed, based on the specific needs and specifications of the analysed processes. The installation of the developed controllers on cement and steel plants produced significant benefits in terms of process control, allowing operation closer to the imposed operating limits. With respect to the previous control systems, based on local controllers and/or manual operator conduction, more profitable configurations of the crucial process variables have been obtained.
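As a minimal illustration of the linear-model MPC formulation mentioned above, the sketch below solves an unconstrained receding-horizon problem for a toy first-order process (for example, a zone temperature); the model coefficients, horizon, and penalty are arbitrary choices, not the plants'.

```python
# Minimal unconstrained linear MPC sketch on a toy first-order process;
# the model and tuning are illustrative only.
import numpy as np

a, b = 0.9, 0.1            # x[k+1] = a*x[k] + b*u[k], toy linear model
H, lam = 10, 0.01          # prediction horizon, input penalty
x, setpoint = 20.0, 100.0

for k in range(40):
    # Stack predictions over the horizon: x_pred = F*x + G @ u_seq
    F = np.array([a**(i + 1) for i in range(H)])
    G = np.zeros((H, H))
    for i in range(H):
        for j in range(i + 1):
            G[i, j] = a**(i - j) * b
    # Minimize ||x_pred - setpoint||^2 + lam*||u||^2 (ridge least squares)
    A = np.vstack([G, np.sqrt(lam) * np.eye(H)])
    rhs = np.concatenate([setpoint - F * x, np.zeros(H)])
    u_seq, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    x = a * x + b * u_seq[0]          # receding horizon: apply first move
print("temperature after 40 steps:", round(x, 2))
```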
Comparison between variable and constant rotor speed operation on WINDMEL-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sasamoto, Akira; Matsumiya, Hikaru; Kawamura, Shunji
1996-10-01
For the control of wind turbine rotor speed, variable-speed operation is believed to have advantages over constant-speed operation from the viewpoint of both aerodynamics and mechanics. However, there is no experimental study which shows the differences. In this report, the authors intend to clarify the differences in shaft torque by using experimental data from a new wind turbine system capable of both variable- and constant-speed operation. Observation of the experimental data shows that shaft torque under variable-speed operation is lower than under constant-speed operation.
Variability as a Subject Matter in a Science of Behavior: Reply to Commentaries
ERIC Educational Resources Information Center
Barba, Lourenco de Souza
2012-01-01
In his article, the author claimed that studies of operant variability that use a lag-"n" or threshold procedure and measure the obtained variability through the change in U value fail to provide direct evidence that variability is an operant dimension of behavior. To do so, he adopted Catania's (1973) concept of the operant, which takes the…
Córdova, Andrés; Astudillo, Carolina; Vera, Carlos; Guerrero, Cecilia; Illanes, Andrés
2016-04-10
The performance of an ultrafiltration membrane bioreactor for galacto-oligosaccharide (GOS) synthesis using high lactose concentrations (470 g/L) and β-galactosidase from Aspergillus oryzae was assessed. The processing variables tested were: transmembrane pressure (PT), crossflow velocity (CFV) and temperature. Results showed that the processing variables had a significant effect on the yield, the enzyme productivity and the flux, but not on the GOS concentration and reaction conversion obtained. As expected, the use of high turbulence improved mass transfer and reduced membrane fouling, but the use of very high crossflow velocities caused operational instability due to vortex formation and lactose precipitation. The use of a desirability function allowed optimal conditions to be determined, namely: PT (4.38 bar), CFV (7.35 m/s) and temperature (53.1 °C), optimizing flux and specific enzyme productivity simultaneously. Under these optimal processing conditions, shear stress and temperature did not affect the enzyme, but long-term operation was limited by flux decay. In comparison to a conventional batch system, at 12.5 h of processing time, the continuous GOS synthesis in the UF-MBR significantly increased the amount of processed substrate, and a 2.44-fold increase in the amount of GOS produced per unit mass of catalyst was obtained with respect to a conventional batch system. Furthermore, these results can be improved by far by tuning the membrane area/reaction volume ratio, showing that the use of a UF-MBR is an attractive alternative for GOS synthesis at very high lactose concentrations. Copyright © 2016 Elsevier B.V. All rights reserved.
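A sketch of the desirability-function step under stated assumptions: two invented response surfaces stand in for flux and specific enzyme productivity, each mapped to a 0-1 desirability and combined by geometric mean before optimizing over the three processing variables. None of the models or ranges come from the study.

```python
# Derringer-type desirability sketch: trade off two responses by
# mapping each to [0, 1] and maximizing their geometric mean.
# Response surfaces and ranges below are invented for illustration.
import numpy as np
from scipy.optimize import minimize

def d_max(y, lo, hi):
    """Desirability for a response to be maximized (0 at lo, 1 at hi)."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def flux(p, cfv, T):                 # placeholder response surface
    return 20 + 3*p + 2*cfv - 0.2*(T - 50)**2

def productivity(p, cfv, T):         # placeholder response surface
    return 5 + 0.5*cfv + 0.1*T - 0.3*(p - 4)**2

def neg_overall(v):
    p, cfv, T = v
    D = (d_max(flux(p, cfv, T), 10, 60) *
         d_max(productivity(p, cfv, T), 2, 15)) ** 0.5  # geometric mean
    return -D

res = minimize(neg_overall, x0=[3.0, 5.0, 50.0],
               bounds=[(1, 6), (2, 9), (40, 60)])
print("optimum (PT, CFV, T):", res.x.round(2), "D =", -res.fun)
```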
The Policy Formation Process: A Conceptual Framework for Analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Fuchs, E. F.
1972-01-01
A conceptual framework for analysis which is intended to assist both the policy analyst and the policy researcher in their empirical investigations into policy phenomena is developed. It is meant to facilitate understanding of the policy formation process by focusing attention on the basic forces shaping the main features of policy formation as a dynamic social-political-organizational process. The primary contribution of the framework lies in its capability to suggest useful ways of looking at policy formation reality. It provides the analyst and the researcher with a group of indicators which suggest where to look and what to look for when attempting to analyze and understand the mix of forces which energize, maintain, and direct the operation of strategic level policy systems. The framework also highlights interconnections, linkage, and relational patterns between and among important variables. The framework offers an integrated set of conceptual tools which facilitate understanding of and research on the complex and dynamic set of variables which interact in any major strategic level policy formation process.
Instrumentation, control, and automation for submerged anaerobic membrane bioreactors.
Robles, Ángel; Durán, Freddy; Ruano, María Victoria; Ribes, Josep; Rosado, Alfredo; Seco, Aurora; Ferrer, José
2015-01-01
A submerged anaerobic membrane bioreactor (AnMBR) demonstration plant with two commercial hollow-fibre ultrafiltration systems (PURON®, Koch Membrane Systems, PUR-PSH31) was designed and operated for urban wastewater treatment. An instrumentation, control, and automation (ICA) system was designed and implemented for proper process performance. Several single-input-single-output (SISO) feedback control loops based on conventional on-off and PID algorithms were implemented to control the following operating variables: flow-rates (influent, permeate, sludge recycling and wasting, and recycled biogas through both reactor and membrane tanks), sludge wasting volume, temperature, transmembrane pressure, and gas sparging. The proposed ICA for AnMBRs for urban wastewater treatment enables the optimization of this new technology to be achieved with a high level of process robustness towards disturbances.
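A minimal discrete PID of the kind used in those SISO loops, here regulating a toy transmembrane-pressure-like variable; the plant response and gains are invented for illustration, not taken from the demonstration plant.

```python
# Minimal discrete PID loop of the kind described above; the toy
# first-order plant and all gains are illustrative only.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
tmp = 0.10                          # toy process state (e.g., TMP in bar)
for _ in range(60):
    u = pid.step(setpoint=0.25, measurement=tmp)
    tmp += 0.05 * (u - 0.2 * tmp)   # toy first-order plant response
print("TMP after 60 s: %.3f bar" % tmp)
```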
Single-mode VCSEL operation via photocurrent feedback
NASA Astrophysics Data System (ADS)
Riyopoulos, Spilios
1999-04-01
On-axis channeling through the use of photoactive layers in VCSEL cavities is proposed to counteract hole burning and mode switching. The photoactive layers act as variable resistivity screens whose radial 'aperture' is controlled by the light itself. It is numerically demonstrated that absorption of a small fraction of the light intensity suffices for significant on-axis current peaking and single-mode operation at currents many times threshold, with minimum efficiency loss and optical mode distortion. Fabrication is implemented during the molecular beam epitaxy phase without wafer post-processing, as for oxide apertures.
2008-06-01
management structure employs free-market system principles and encourages business-like processes that are mission driven. Since no operating funds are... variable (Potvin, 2007). 2. Unit Cost Goal: NWCFs use the unit cost goal (UCG) for planning purposes. The UCG is an estimate of what a unit of product... mission 6. Will not interfere with depot performance. This section opens the depot to the private market.
Method of operating an oil shale kiln
Reeves, Adam A.
1978-05-23
Continuously determining the bulk density of raw and retorted oil shale, the specific gravity of the raw oil shale, and the richness of the raw oil shale provides accurate means to control the process variables of oil shale retorting, predict oil production, and determine mining strategy, and aids in controlling shale placement in the kiln for retorting.
SMS/GOES cell and battery data analysis report
NASA Technical Reports Server (NTRS)
Armantrout, J. D.
1977-01-01
The nickel-cadmium battery design developed for the Synchronous Meteorological Satellite (SMS) and Geostationary Operational Environmental Satellite (GOES) provided background and guidelines for future development, manufacture, and application of spacecraft batteries. SMS/GOES battery design, development, qualification testing, acceptance testing, and life testing/mission performance characteristics were evaluated for correlation with battery cell manufacturing process variables.
Evaluation of Improved Pushback Forecasts Derived from Airline Ground Operations Data
NASA Technical Reports Server (NTRS)
Carr, Francis; Theis, Georg; Feron, Eric; Clarke, John-Paul
2003-01-01
Accurate and timely predictions of airline pushbacks can potentially lead to improved performance of automated decision-support tools for airport surface traffic, thus reducing the variability and average duration of costly airline delays. One factor which affects the realization of these benefits is the level of uncertainty inherent in the turn processes. To characterize this inherent uncertainty, three techniques are developed for predicting time-to-go until pushback as a function of available ground-time; elapsed ground-time; and the status (not-started/in-progress/completed) of individual turn processes (cleaning, fueling, etc.). These techniques are tested against a large and detailed dataset covering approximately 10(exp 4) real-world turn operations obtained through collaboration with Deutsche Lufthansa AG. Even after the dataset is filtered to obtain a sample of turn operations with minimal uncertainty, the standard deviation of forecast error for all three techniques is lower-bounded away from zero, indicating that turn operations have a significant stochastic component. This lower-bound result shows that decision-support tools must be designed to incorporate robust mechanisms for coping with pushback demand stochasticity, rather than treating the pushback demand process as a known deterministic input.
Enzyme reactor design under thermal inactivation.
Illanes, Andrés; Wilson, Lorena
2003-01-01
Temperature is a very relevant variable for any bioprocess. Temperature optimization of bioreactor operation is a key aspect for process economics. This is especially true for enzyme-catalyzed processes, because enzymes are complex, unstable catalysts whose technological potential relies on their operational stability. Enzyme reactor design is presented with a special emphasis on the effect of thermal inactivation. Enzyme thermal inactivation is a very complex process from a mechanistic point of view. However, for the purpose of enzyme reactor design, it has been oversimplified frequently, considering one-stage first-order kinetics of inactivation and data gathered under nonreactive conditions that poorly represent the actual conditions within the reactor. More complex mechanisms are frequent, especially in the case of immobilized enzymes, and most important is the effect of catalytic modulators (substrates and products) on enzyme stability under operation conditions. This review focuses primarily on reactor design and operation under modulated thermal inactivation. It also presents a scheme for bioreactor temperature optimization, based on validated temperature-explicit functions for all the kinetic and inactivation parameters involved. More conventional enzyme reactor design is presented merely as a background for the purpose of highlighting the need for a deeper insight into enzyme inactivation for proper bioreactor design.
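To make the coupling between catalysis and thermal inactivation concrete, here is a toy batch-reactor model with Michaelis-Menten kinetics, first-order enzyme decay, and Arrhenius temperature dependence for both rate constants; all parameter values are invented, and the only point is that an intermediate temperature optimum emerges.

```python
# Toy batch reactor with Michaelis-Menten conversion and first-order
# thermal inactivation of the enzyme; all parameters are invented.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J/(mol K)

def arrhenius(k_ref, Ea, T, T_ref=323.15):
    """Rate constant at temperature T given its value at T_ref."""
    return k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

def batch_conversion(T):
    kcat = arrhenius(5e-3, 50e3, T)    # catalysis speeds up with T (toy)
    kd = arrhenius(1e-4, 150e3, T)     # inactivation speeds up faster (toy)
    Km = 5.0                           # mM

    def rhs(t, y):
        S, E = y                       # substrate (mM), active enzyme (rel.)
        v = kcat * E * S / (Km + S)    # Michaelis-Menten rate
        return [-v, -kd * E]           # substrate use, enzyme decay
    sol = solve_ivp(rhs, (0.0, 4 * 3600.0), [100.0, 1.0], rtol=1e-8)
    return 100.0 - sol.y[0, -1]        # mM substrate converted in 4 h

# Higher T speeds catalysis but kills the enzyme sooner: an optimum appears.
for T in (313.15, 323.15, 333.15):
    print("T = %.0f K -> %.1f mM converted" % (T, batch_conversion(T)))
```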
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
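The generic shape of that workflow (target variable, data partitioning, competing algorithms, held-out accuracy) might look like the following scikit-learn sketch on synthetic records; nothing here is specific to any livestock dataset.

```python
# Generic predictive-analytic workflow: pick a target, partition the
# data, train competing classifiers, compare held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Stand-in for operational records; the target could be, e.g., a
# health outcome per animal or pen.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                  ("forest", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)                            # build/refine on training split
    acc = accuracy_score(y_te, clf.predict(X_te))  # accuracy on naive data
    print(name, "held-out accuracy: %.3f" % acc)
```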
2013-06-01
Kobu, 2007) Gunasekaran and Kobu also presented six observations as they relate to these key performance indicators (KPI), as follows: 1... Internal business process (50% of the KPI) and customers (50% of the KPI) play a significant role in SC environments. This implies that internal business process PMs have a significant impact on operational performance. 2. The most widely used PM is financial performance (38% of the KPI).
Epitaxial gallium arsenide wafers
NASA Technical Reports Server (NTRS)
Black, J. F.; Robinson, L. B.
1971-01-01
The preparation of GaAs epitaxial layers by a vapor transport process using AsCl3, Ga and H2 was pursued to provide epitaxial wafers suitable for the fabrication of transferred electron oscillators and amplifiers operating in the subcritical region. Both n-n(+) structures and n(++)-n-n(+) sandwich structures were grown using n(+) (Si-doped) GaAs substrates. Process variables such as the input AsCl3 concentration, gallium temperature, and substrate temperature and temperature gradient, together with their effects on properties, are presented and discussed.
2012-08-01
It suggests that a smart use of some a-priori information about the operating environment, when processing the received signal and designing the... random variable with the same variance as the backscattering target amplitude α_T, and D(α_T, α_T^G) is the Kullback-Leibler divergence, see [65... MI. Proof: See Appendix 3.6.6. Thus, we can use the optimization procedure of Algorithm 4 to optimize the Mutual Information between the target
NASA Astrophysics Data System (ADS)
Daneshmend, L. K.; Pak, H. A.
1984-02-01
On-line monitoring of the cutting process in a CNC lathe is desirable to ensure unattended fault-free operation in an automated environment. The state of the cutting tool is one of the most important parameters characterising the cutting process. Direct monitoring of the cutting tool or workpiece is not feasible during machining. However, several variables related to the state of the tool can be measured on-line. A novel monitoring technique is presented which uses cutting torque as the variable for on-line monitoring. A classifier is designed on the basis of the empirical relationship between cutting torque and flank wear. The empirical model required by the on-line classifier is established during an automated training cycle using machine vision for off-line direct inspection of the tool.
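A stripped-down version of that scheme, under stated assumptions: calibrate an empirical linear torque-to-flank-wear model from an offline training cycle, then classify tool state online by comparing the estimated wear to a limit. All numbers, including the wear limit, are invented.

```python
# Minimal version of the idea: fit an empirical torque -> flank wear
# model during a training cycle, then classify tool state online from
# measured torque. All data and the wear limit are invented.
import numpy as np

# Offline training cycle: measured torque (N*m) vs. inspected wear (mm)
torque = np.array([4.1, 4.4, 4.9, 5.3, 5.8, 6.4, 7.1])
wear = np.array([0.05, 0.09, 0.14, 0.19, 0.25, 0.32, 0.40])
slope, intercept = np.polyfit(torque, wear, 1)   # empirical linear model

WEAR_LIMIT = 0.30                                # replace-tool threshold (mm)

def classify(torque_reading):
    """Online classifier: estimate wear from torque, compare to limit."""
    est = slope * torque_reading + intercept
    return ("worn" if est >= WEAR_LIMIT else "ok"), est

for tq in (5.0, 6.9):
    state, est = classify(tq)
    print("torque %.1f N*m -> est. wear %.2f mm -> %s" % (tq, est, state))
```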
Learning-based controller for biotechnology processing, and method of using
Johnson, John A.; Stoner, Daphne L.; Larsen, Eric D.; Miller, Karen S.; Tolle, Charles R.
2004-09-14
The present invention relates to process control where some of the controllable parameters are difficult or impossible to characterize. It relates to, but is not limited to, process control of such systems in biotechnology. Additionally, the present invention relates to process control in biotechnology-based minerals processing. In the inventive method, an application of the present invention manipulates a minerals bioprocess to find local extrema (maxima or minima) for selected output variables/process goals by using a learning-based controller for bioprocess oxidation of minerals during hydrometallurgical processing. The learning-based controller operates with or without human supervision and works to find process optima without previously defined optima, owing to the non-characterized nature of the process being manipulated.
A self-learning camera for the validation of highly variable and pseudorandom patterns
NASA Astrophysics Data System (ADS)
Kelley, Michael
2004-05-01
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
Andalib, Mehran; Taher, Edris; Donohue, Joseph; Ledwell, Sam; Andersen, Mikkel H; Sangrey, Karla
2018-01-01
The reliability and accuracy of in-situ ion selective electrode and ultraviolet (NOx) probes have been investigated at four different treatment plants with different operational conditions. This study shows that the mentioned probes tend to compromise their accuracy and trending stability at NOx concentrations below 1.0 mg N/L, which, if the signal is used as the measured variable for a PI feedback controller for denitrification (biological reduction of nitrate to nitrogen gas), would cause overfeeding of the external carbon source. In-situ Clark-type N2O sensors, recently introduced for industrial-scale use (Unisense Environment), could potentially open a new horizon in the automation of biological processes and particularly denitrification. To demonstrate the applicability of such probes for automation, two in-situ N2O probes were used in two treatment plants in parallel with NOx-N probes. The effects of operational conditions such as COD/N ratios and the correlation between NOx and N2O were investigated at those plants. N2O production at non-detect dissolved oxygen concentrations and pH of 7-7.2 was found to be a function of the influent nitrogen load, i.e., the ratio of COD/N in the influent. Finally, using an N2O probe as a proxy sensor for nitrate is proposed, serving as the measured variable in PI feedback automation of the denitrification process with a NOx set point of <1.2 mg N/L.
Reinforcement and Induction of Operant Variability
ERIC Educational Resources Information Center
Neuringer, Allen
2012-01-01
The target paper by Barba (2012) raises issues that were the focus of the author's first two publications on operant variability. The author will describe the main findings in those papers and then discuss Barba's specific arguments. Barba has argued against the operant nature of variability. (Contains 2 figures.)
Disentangling Global Warming, Multidecadal Variability, and El Niño in Pacific Temperatures
NASA Astrophysics Data System (ADS)
Wills, Robert C.; Schneider, Tapio; Wallace, John M.; Battisti, David S.; Hartmann, Dennis L.
2018-03-01
A key challenge in climate science is to separate observed temperature changes into components due to internal variability and responses to external forcing. Extended integrations of forced and unforced climate models are often used for this purpose. Here we demonstrate a novel method to separate modes of internal variability from global warming based on differences in time scale and spatial pattern, without relying on climate models. We identify uncorrelated components of Pacific sea surface temperature variability due to global warming, the Pacific Decadal Oscillation (PDO), and the El Niño-Southern Oscillation (ENSO). Our results give statistical representations of PDO and ENSO that are consistent with their being separate processes, operating on different time scales, but are otherwise consistent with canonical definitions. We isolate the multidecadal variability of the PDO and find that it is confined to midlatitudes; tropical sea surface temperatures and their teleconnections mix in higher-frequency variability. This implies that midlatitude PDO anomalies are more persistent than previously thought.
Apparatus and method for microwave processing of materials
Johnson, A.C.; Lauf, R.J.; Bible, D.W.; Markunas, R.J.
1996-05-28
Disclosed is a variable frequency microwave heating apparatus designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity for testing or other selected applications. The variable frequency heating apparatus is used in the method of the present invention to monitor the resonant processing frequency within the furnace cavity depending upon the material, including the state thereof, from which the workpiece is fabricated. The variable frequency microwave heating apparatus includes a microwave signal generator and a high-power microwave amplifier or a microwave voltage-controlled oscillator. A power supply is provided for operation of the high-power microwave oscillator or microwave amplifier. A directional coupler is provided for detecting the direction and amplitude of signals incident upon and reflected from the microwave cavity. A first power meter is provided for measuring the power delivered to the microwave furnace. A second power meter detects the magnitude of reflected power. Reflected power is dissipated in the reflected power load. 10 figs.
Key variables analysis of a novel continuous biodrying process for drying mixed sludge.
Navaee-Ardeh, Shahram; Bertrand, François; Stuart, Paul R
2010-05-01
A novel continuous biodrying process has been developed whose goal is to increase the dry solids content of the sludge to economic levels, rendering it suitable for safe and economic combustion in a biomass boiler. The sludge drying rates are enhanced by the metabolic bioheat produced in the matrix of mixed sludge. The goal of this study was to systematically analyze the continuous biodrying reactor. A variable analysis found that the outlet relative humidity profile was the key variable in the biodrying reactor. The influence of different outlet relative humidity profiles was then evaluated using a biodrying efficiency index. It was found that by maintaining the air outlet relative humidity profile at 85/85/96/96% in the four compartments of the reactor, the highest biodrying efficiency index can be achieved, while economic dry solids levels (>45% w/w) are guaranteed. Crown Copyright 2009. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Reza, Syed Azer
This dissertation proposes the use of the emerging Micro-Electro-Mechanical Systems (MEMS) and agile lensing optical device technologies to design novel and powerful signal conditioning and sensing modules for advanced applications in optical communications, physical parameter sensing and RF/optical signal processing. For example, these new module designs have experimentally demonstrated exceptional features such as stable loss broadband operations and high > 60 dB optical dynamic range signal filtering capabilities. The first part of the dissertation describes the design and demonstration of digital MEMS-based signal processing modules for communication systems and sensor networks using the TI DLP (Digital Light Processing) technology. Examples of such modules include optical power splitters, narrowband and broadband variable fiber optical attenuators, spectral shapers and filters. Compared to prior works, these all-digital designs have advantages of repeatability, accuracy, and reliability that are essential for advanced communications and sensor applications. The next part of the dissertation proposes, analyzes and demonstrates the use of analog opto-fluidic agile lensing technology for sensor networks and test and measurement systems. Novel optical module designs for distance sensing, liquid level sensing, three-dimensional object shape sensing and variable photonic delay lines are presented and experimentally demonstrated. Compared to prior art module designs, the proposed analog-mode modules have exceptional performances, particularly for extreme environments (e.g., caustic liquids) where the free-space agile beam-based sensor provide remote non-contact access for physical sensing operations. The dissertation also presents novel modules involving hybrid analog-digital photonic designs that make use of the different optical device technologies to deliver the best features of both analog and digital optical device operations and controls. Digital controls are achieved through the use of the digital MEMS technology and analog controls are realized by employing opto-fluidic agile lensing technology and acousto-optic technology. For example, variable fiber-optic attenuators and spectral filters are proposed using the hybrid design. Compared to prior art module designs, these hybrid designs provide a higher module dynamic range and increased resolution that are critical in various advanced system applications. In summary, the dissertation shows the added power of hybrid optical designs using both the digital and analog photonic signal processing versus just all-digital or all-analog module designs.
Bovee, Ken D.; Waddle, Terry J.; Talbert, Colin; Hatten, James R.; Batt, Thomas R.
2008-01-01
The Yakima River Decision Support System (YRDSS) was designed to quantify and display the consequences of different water management scenarios for a variety of state variables in the upper Yakima River Basin, located in central Washington. The impetus for the YRDSS was the Yakima River Basin Water Storage Feasibility Study, which investigated alternatives for providing additional water in the basin for threatened and endangered fish, irrigated agriculture, and municipal water supply. The additional water supplies would be provided by combinations of water exchanges, pumping stations, and off-channel storage facilities, each of which could affect the operations of the Bureau of Reclamation's (BOR) five headwaters reservoirs in the basin. The driver for the YRDSS is RiverWare, a systems-operations model used by BOR to calculate reservoir storage, irrigation deliveries, and streamflow at downstream locations resulting from changes in water supply and reservoir operations. The YRDSS uses output from RiverWare to calculate and summarize, for five important flood plain reaches in the basin, changes to 14 state variables: (1) habitat availability for selected life stages of four salmonid species, (2) spawning-incubation habitat persistence, (3) potential redd scour, (4) maximum water temperatures, (5) outmigration of bull trout (Salvelinus confluentus) from headwaters reservoirs, (6) outmigration of salmon smolts from Cle Elum Reservoir, (7) frequency of beneficial overbank flooding, (8) frequency of damaging flood events, (9) total deliverable water supply, (10) total water supply deliverable to junior water rights holders, (11) end-of-year reservoir carryover, (12) potential fine sediment transport rates, (13) frequency of events capable of armor layer disruption, and (14) geomorphic work performed during each water year. Output of the YRDSS consists of a series of conditionally formatted scoring tables, wherein the changes to a state variable resulting from an operational scenario are compiled and summarized. Increases in the values of state variables cause their respective backgrounds in the scoring matrix to turn green, whereas decreases cause them to turn red. This convention was designed to provide decision makers with a quick visual assessment of the overall results of an operating scenario. An evaluation matrix and a variety of weighting strategies to reflect the relative importance of different state variables are also presented as options for further distillation of YRDSS results during the decision-making process.
Statistical quality control through overall vibration analysis
NASA Astrophysics Data System (ADS)
Carnero, M. a. Carmen; González-Palma, Rafael; Almorza, David; Mayorga, Pedro; López-Escobar, Carlos
2010-05-01
The present study introduces the concept of statistical quality control in automotive wheel bearing manufacturing processes. Defects in the products under analysis can have a direct influence on passengers' safety and comfort. At present, the use of vibration analysis on machine tools for quality control purposes is not very extensive in manufacturing facilities. Noise and vibration are common quality problems in bearings. These failure modes typically occur under certain operating conditions and do not require high vibration amplitudes, but relate to certain vibration frequencies. The vibration frequencies are affected by the type of surface problems (chattering) of ball races that are generated through grinding processes. The purpose of this paper is to identify grinding process variables that affect the quality of bearings by using statistical principles in the field of machine tools. In addition, the quality of the finished parts under different combinations of process variables is evaluated. This paper intends to establish the foundations for predicting the quality of the products through the analysis of self-induced vibrations during the contact between the grinding wheel and the parts. To achieve this goal, the overall self-induced vibration readings under different combinations of process variables are analysed using statistical tools. The analysis of data and the design of experiments follow a classical approach, considering all potential interactions between variables. The analysis of data is conducted through analysis of variance (ANOVA) for data sets that meet normality and homoscedasticity criteria. This paper uses different statistical tools to support the conclusions, such as the chi-squared, Shapiro-Wilk, symmetry, kurtosis, Cochran, Bartlett, Hartley, and Kruskal-Wallis tests. The analysis presented is the starting point for extending the use of predictive techniques (vibration analysis) to quality control. This paper demonstrates the existence of predictive variables (high-frequency vibration displacements) that are sensitive to the process setup and the quality of the products obtained. Based on the results of this overall vibration analysis, a second paper will analyse self-induced vibration spectra in order to define limit vibration bands, controllable every cycle or connected to permanent vibration-monitoring systems able to adjust sensitive process variables identified by ANOVA once the vibration readings exceed established quality limits.
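The screening workflow described above (check normality and homoscedasticity, then apply ANOVA or a non-parametric alternative) can be sketched in a few lines of Python; the readings below are synthetic stand-ins for overall vibration measurements, and the 0.05 significance level is an assumption:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical overall vibration readings at three process-variable settings
    groups = [rng.normal(loc=mu, scale=1.0, size=20) for mu in (5.0, 5.4, 6.1)]

    # Entry criteria: Shapiro-Wilk normality per group, Bartlett homoscedasticity
    normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
    homoscedastic = stats.bartlett(*groups).pvalue > 0.05

    if normal and homoscedastic:
        stat, p = stats.f_oneway(*groups)   # one-way ANOVA
    else:
        stat, p = stats.kruskal(*groups)    # Kruskal-Wallis fallback
    print(f"stat={stat:.2f}, p={p:.4f}")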
NASA Astrophysics Data System (ADS)
Jackson, Michael; Blatt, Stephan; Holub, Kirk
2015-04-01
In April 2014, NOAA/OAR/ESRL Global Systems Division (GSD) and Trimble, in collaboration with Earth Networks, Inc. (ENI), signed a Cooperative Research and Development Agreement (CRADA) to transfer the existing NOAA GPS-Met Data Acquisition and Processing System (GPS-Met DAPS) technology to a commercial Trimble/ENI partnership. NOAA's GPS-Met DAPS is currently operated in a pseudo-operational mode but has proven highly reliable, running at over 95% uptime. The DAPS uses the GAMIT software to ingest dual-frequency carrier phase GPS/GNSS observations and ancillary information such as real-time satellite orbits to estimate zenith tropospheric delays (ZTD) and, where surface MET data are available, retrieve integrated precipitable water vapor (PWV). The NOAA data and products are made available to end users in near real time. The Trimble/ENI partnership will use the Trimble Pivot™ software with the Atmosphere App to calculate zenith tropospheric delay (ZTD), tropospheric slant delay, and integrated precipitable water vapor (PWV). Evaluation of the Trimble software is underway, starting with a comparison of ZTD and PWV values determined from four sub-networks of GPS stations located (1) near NOAA Radiosonde Observation (Upper-Air Observation) launch sites; (2) at stations with low terrain/high moisture variability (Gulf Coast); (3) at stations with high terrain/low moisture variability (Southern California); and (4) at stations with high terrain/high moisture variability (terrain variability elev. > 1000 m). For each network, GSD and T/ENI run the same stations for 30 days, compare results, and evaluate the long-term solution accuracy, precision, and reliability. Metrics for success include T/ENI PWV estimates within 1.5 mm of ESRL/GSD's estimates 95% of the time (ZTD uncertainty of less than 10 mm 95% of the time). The threshold for allowable variations in ZTD between NOAA GPS-Met and T/ENI processing is 10 mm; the CRADA 1 and 2 Trimble processing show variations of 4±2 mm and 3±8 mm, respectively. The threshold for allowable variations in PWV is 15 mm; the CRADA 1 and 2 Trimble processing show variations of 2±4 mm and 10±13 mm, respectively. The T/ENI PWV and ZTD values meet and exceed the requirements outlined in the CRADA for the first two networks processed. The T/ENI partnership brings a footprint of GNSS and meteorological stations that could significantly improve the latency and the temporal and geographic density of ZTD and PWV over the US and Europe. We will provide a brief overview of the Trimble Pivot™ software and the Atmosphere App and present results from further testing, along with a timeline for the transition of the GPS-Met DAPS to an operational commercial service.
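For context, the PWV retrieval compared here rests on a standard two-step conversion: subtract a modeled zenith hydrostatic delay (Saastamoinen) from the ZTD to obtain the wet delay, then scale by a dimensionless factor of roughly 0.15 that depends on the water-vapor-weighted mean temperature Tm (Bevis-style). A minimal sketch with textbook constants follows; the station values are illustrative, not CRADA data:

    import numpy as np

    def pwv_from_ztd(ztd_m, p_hpa, lat_rad, h_m, tm_k):
        # Saastamoinen zenith hydrostatic delay (m), pressure in hPa
        zhd = 0.0022768 * p_hpa / (1.0 - 0.00266 * np.cos(2 * lat_rad) - 2.8e-7 * h_m)
        zwd = ztd_m - zhd                    # zenith wet delay (m)
        rho_w, r_v = 1000.0, 461.5           # kg/m^3, J/(kg K)
        k2p, k3 = 0.221, 3.739e3             # K/Pa, K^2/Pa (textbook values)
        pi = 1.0 / (1e-6 * rho_w * r_v * (k2p + k3 / tm_k))   # ~0.15
        return pi * zwd                      # PWV in meters of water

    # Illustrative mid-latitude sea-level station: ZTD = 2.40 m, Tm = 270 K
    print(pwv_from_ztd(2.40, 1013.25, np.radians(40), 0.0, 270.0) * 1000, "mm")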
Automatic Processing of Reactive Polymers
NASA Technical Reports Server (NTRS)
Roylance, D.
1985-01-01
A series of process modeling computer codes was examined. The codes use finite element techniques to determine the time-dependent process parameters operative during nonisothermal reactive flows such as can occur in reaction injection molding or composites fabrication. The use of these analytical codes to perform experimental control functions is examined; since the models can determine the state of all variables everywhere in the system, they can be used in a manner similar to currently available experimental probes. A small but well-instrumented reaction vessel, in which fiber-reinforced plaques are cured under computer control and data acquisition, was used. The finite element codes were also extended to treat this particular process.
PVD thermal barrier coating applications and process development for aircraft engines
NASA Astrophysics Data System (ADS)
Rigney, D. V.; Viguie, R.; Wortman, D. J.; Skelly, D. W.
1997-06-01
Thermal barrier coatings (TBCs) have been developed for application to aircraft engine components to improve service life in an increasingly hostile thermal environment. The choice of TBC type is related to the component, intended use, and economics. The selection of electron beam physical vapor deposition processing for turbine blades is due in part to part size, surface finish requirements, thickness control needs, and hole closure issues. Process development of PVD TBCs has been carried out at several different sites, including GE Aircraft Engines (GEAE). The influence of processing variables on microstructure is discussed, along with the GEAE development coater and initial experiences of pilot line operation.
STATISTICAL ANALYSIS OF SNAP 10A THERMOELECTRIC CONVERTER ELEMENT PROCESS DEVELOPMENT VARIABLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitch, S.H.; Morris, J.W.
1962-12-15
Statistical analysis, primarily analysis of variance, was applied to evaluate several factors involved in the development of suitable fabrication and processing techniques for the production of lead telluride thermoelectric elements for the SNAP 10A energy conversion system. The analysis methods are described as to their application for determining the effects of various processing steps, establishing the value of individual operations, and evaluating the significance of test results. The elimination of unnecessary or detrimental processing steps was accomplished and the number of required tests was substantially reduced by application of these statistical methods to the SNAP 10A production development effort. (auth)
Generalizing Landauer's principle
NASA Astrophysics Data System (ADS)
Maroney, O. J. E.
2009-03-01
In a recent paper [Stud. Hist. Philos. Mod. Phys. 36, 355 (2005)] it is argued that to properly understand the thermodynamics of Landauer’s principle it is necessary to extend the concept of logical operations to include indeterministic operations. Here we examine the thermodynamics of such operations in more detail, extending the work of Landauer to include indeterministic operations and to include logical states with variable entropies, temperatures, and mean energies. We derive the most general statement of Landauer’s principle and prove its universality, extending considerably the validity of previous proofs. This confirms conjectures made that all logical operations may, in principle, be performed in a thermodynamically reversible fashion, although logically irreversible operations would require special, practically rather difficult, conditions to do so. We demonstrate a physical process that can perform any computation without work requirements or heat exchange with the environment. Many widespread statements of Landauer’s principle are shown to be special cases of our generalized principle.
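In its most familiar form (the special case this work generalizes to indeterministic operations and variable-entropy logical states), Landauer's principle bounds the heat a logical operation releases by the decrease in the Shannon entropy of the logical state:

    Q_{\text{released}} \;\ge\; k_B T \,\bigl(H_{\text{in}} - H_{\text{out}}\bigr),
    \qquad H = -\sum_i p_i \ln p_i .

For the erasure of one unbiased bit, H_in - H_out = ln 2 and the bound reduces to the familiar k_B T ln 2.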
On-line identification of fermentation processes for ethanol production.
Câmara, M M; Soares, R M; Feital, T; Naomi, P; Oki, S; Thevelein, J M; Amaral, M; Pinto, J C
2017-07-01
A strategy for monitoring fermentation processes, specifically simultaneous saccharification and fermentation (SSF) of corn mash, was developed. The strategy covered the development and use of a first-principles, semimechanistic, unstructured process model based on the major kinetic phenomena, along with mass and energy balances. The model was then used as a reference model within an identification procedure capable of running on-line. The on-line identification procedure consists of updating the reference model through the estimation of corrective parameters for certain reaction rates using the most recent process measurements. The strategy makes use of standard laboratory measurements for sugars quantification and in situ temperature and liquid level data. The model, along with the on-line identification procedure, has been tested against real industrial data and has been able to accurately predict the main variables of operational interest, i.e., the state variables and their dynamics, and key process indicators. The results demonstrate that the strategy is capable of monitoring, in real time, this complex industrial biomass fermentation. This new tool provides great support for decision-making and opens a new range of opportunities for industrial optimization.
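The corrective-parameter idea can be illustrated with a deliberately simple update rule. This is a sketch only, with an invented gain and data; the paper's estimator operates on the full semimechanistic model:

    def update_rate_correction(alpha, y_meas, y_model, gain=0.3):
        # Scale a reference-model reaction rate by a corrective factor so that
        # model predictions track the most recent process measurements.
        residual = (y_meas - y_model) / max(abs(y_model), 1e-9)
        return alpha * (1.0 + gain * residual)

    alpha = 1.0   # corrective factor applied to a fermentation rate
    for y_meas, y_model in [(10.2, 9.5), (11.0, 10.4), (11.3, 11.2)]:
        alpha = update_rate_correction(alpha, y_meas, y_model)
        print(f"alpha = {alpha:.3f}")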
Plot-scale field experiment of surface hydrologic processes with EOS implications
NASA Technical Reports Server (NTRS)
Laymon, Charles A.; Macari, Emir J.; Costes, Nicholas C.
1992-01-01
Plot-scale hydrologic field studies were initiated at NASA Marshall Space Flight Center to a) investigate the spatial and temporal variability of surface and subsurface hydrologic processes, particularly as affected by vegetation, and b) develop experimental techniques and associated instrumentation methodology to study hydrologic processes at increasingly large spatial scales. About 150 instruments, most of which are remotely operated, have been installed at the field site to monitor ground atmospheric conditions, precipitation, interception, soil-water status, and energy flux. This paper describes the nature of the field experiment, instrumentation and sampling rationale, and presents preliminary findings.
Mechanisms and kinetics of cellulose fermentation for protein production
NASA Technical Reports Server (NTRS)
Dunlap, C. A.
1971-01-01
The development of a process (and ancillary processing and analytical techniques) to produce bacterial single-cell protein of good nutritional quality from waste cellulose is discussed. A fermentation pilot plant and laboratory were developed and have been in operation for about two years. Single-cell protein (SCP) can be produced from sugarcane bagasse--a typical agricultural cellulosic waste. The optimization and understanding of this process and its controlling variables are examined. Both batch and continuous fermentation runs have been made under controlled conditions in the 535 liter pilot plant vessel and in the laboratory 14-liter fermenters.
Enhancement of activated sludge disintegration and dewaterability by Fenton process
NASA Astrophysics Data System (ADS)
Heng, G. C.; Isa, M. H.
2016-06-01
Municipal and industrial wastewater treatment plants produce large amounts of sludge. This excess sludge is an inevitable drawback inherent to the activated sludge process. In this study, waste activated sludge was obtained from the campus wastewater treatment plant at Universiti Teknologi PETRONAS (UTP), Malaysia. Fenton pretreatment was optimized using response surface methodology (RSM) to study the effects of three operating conditions: the dosage of H2O2 (g H2O2/kg TS), the molar ratio of H2O2/Fe2+, and reaction time. The optimum operating variables to achieve 65% MLVSS removal, 28% CST reduction, sCOD of 11000 mg/L, and EPS of 500 mg/L were: 1000 g H2O2/kg TS, H2O2/Fe2+ molar ratio of 70, and reaction time of 45 min. The Fenton process proved able to enhance sludge disintegration and dewaterability.
Climate Observing Systems: Where are we and where do we need to be in the future
NASA Astrophysics Data System (ADS)
Baker, B.; Diamond, H. J.
2017-12-01
Climate research and monitoring require an observational strategy that blends long-term, carefully calibrated measurements with short-term, focused process studies. The operation and implementation of climate observing networks and the provision of related climate services both have a significant role to play in assisting the development of national climate adaptation policies and in facilitating national economic development. Climate observing systems will require a strong research element for a long time to come. This requires improved observations of the state variables and the ability to set them in a coherent physical (as well as chemical and biological) framework with models. Climate research and monitoring also require an integrated strategy of land/ocean/atmosphere observations, including both in situ and remote sensing platforms, together with modeling and analysis. It is clear that we still need more research and analysis on climate processes, sampling strategies, and processing algorithms.
Switching between simple cognitive tasks: the interaction of top-down and bottom-up factors
NASA Technical Reports Server (NTRS)
Ruthruff, E.; Remington, R. W.; Johnston, J. C.
2001-01-01
How do top-down factors (e.g., task expectancy) and bottom-up factors (e.g., task recency) interact to produce an overall level of task readiness? This question was addressed by factorially manipulating task expectancy and task repetition in a task-switching paradigm. The effects of expectancy and repetition on response time tended to interact underadditively, but only because the traditional binary task-repetition variable lumps together all switch trials, ignoring variation in task lag. When the task-recency variable was scaled continuously, all 4 experiments instead showed additivity between expectancy and recency. The results indicated that expectancy and recency influence different stages of mental processing. One specific possibility (the configuration-execution model) is that task expectancy affects the time required to configure upcoming central operations, whereas task recency affects the time required to actually execute those central operations.
Artificial neural networks for the performance prediction of heat pump hot water heaters
NASA Astrophysics Data System (ADS)
Mathioulakis, E.; Panaras, G.; Belessiotis, V.
2018-02-01
The rapid growth in the use of heat pumps, driven by decreasing equipment costs and the favourable economics of the electrical energy consumed, has been accompanied by the wide dissemination of air-to-water heat pumps (AWHPs) in the residential sector. The entrance of these systems into the commercial sector has made modelling of the underlying processes important. In this work, the suitability of artificial neural networks (ANNs) for the modelling of AWHPs is investigated. The ambient air temperature at the evaporator inlet and the water temperature at the condenser inlet were selected as the input variables; energy performance indices and quantities characterising the operation of the system were selected as output variables. The results verify that the easy-to-implement trained ANN can be an effective tool for predicting AWHP performance under various operating conditions and for the parametric investigation of their behaviour.
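As an illustration of the approach (not the authors' network or data), a small feed-forward ANN with the paper's two input variables can be fitted with scikit-learn; the synthetic COP values stand in for measured performance indices:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    # Inputs: evaporator-inlet air temperature, condenser-inlet water temperature
    X = rng.uniform([-5.0, 20.0], [35.0, 55.0], size=(500, 2))
    # Synthetic COP surface used as a stand-in for measured data
    y = 6.0 + 0.08 * X[:, 0] - 0.07 * X[:, 1] + rng.normal(0, 0.1, 500)

    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    ann.fit(X, y)
    print(ann.predict([[7.0, 45.0]]))  # predicted COP at a typical rating point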
Emergency strategy optimization for the environmental control system in manned spacecraft
NASA Astrophysics Data System (ADS)
Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin
2018-02-01
It is very important for the environmental control system (ECS) of a manned spacecraft to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization is established to design the optimal emergency strategy for an ECS under an insufficient power supply. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Several adjustable key variables are chosen as the optimization variables, which together represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm II (NSGA-II) is adopted to solve this multi-objective optimization problem. Optimization is conducted at four different carbon dioxide partial pressure control levels. The results show that the Pareto-optimal frontiers obtained from this multi-objective optimization represent the trade-off between the lifetime and the power consumption of the ECS. Hence, a preferred emergency operation strategy can be recommended for situations in which the power supply suddenly becomes insufficient.
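At the core of NSGA-II is non-dominated sorting; a minimal Pareto filter for the two objectives here (maximize lifetime, minimize power) can be sketched as follows, with candidate values invented for illustration:

    def pareto_front(points):
        # Keep the strategies not weakly dominated by any distinct strategy:
        # q dominates p if q lasts at least as long and draws no more power.
        front = []
        for p in points:
            dominated = any(q[0] >= p[0] and q[1] <= p[1] and q != p for q in points)
            if not dominated:
                front.append(p)
        return front

    # (lifetime in hours, power in watts) for candidate emergency strategies
    candidates = [(40, 900), (55, 1100), (55, 1000), (30, 700), (60, 1400)]
    print(pareto_front(candidates))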
Divergence-free approach for obtaining decompositions of quantum-optical processes
NASA Astrophysics Data System (ADS)
Sabapathy, K. K.; Ivan, J. S.; García-Patrón, R.; Simon, R.
2018-02-01
Operator-sum representations of quantum channels can be obtained by applying the channel to one subsystem of a maximally entangled state and deploying the channel-state isomorphism. However, for continuous-variable systems, such schemes contain natural divergences since the maximally entangled state is ill-defined. We introduce a method that avoids such divergences by utilizing finitely entangled (squeezed) states and then taking the limit of arbitrarily large squeezing. Using this method, we derive an operator-sum representation for all single-mode bosonic Gaussian channels, where a unique feature is that both quantum-limited and noisy channels are treated on an equal footing. This technique facilitates a proof that the rank-1 Kraus decomposition for Gaussian channels at their respective entanglement-breaking thresholds, obtained in the overcomplete coherent-state basis, is unique. The methods could have applications to the simulation of continuous-variable channels.
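For reference, an operator-sum (Kraus) representation expresses a channel's action as

    \rho \;\longmapsto\; \mathcal{E}(\rho) = \sum_k A_k\, \rho\, A_k^{\dagger},
    \qquad \sum_k A_k^{\dagger} A_k = \mathbb{1},

and it is this decomposition that the squeezed-state limiting procedure constructs, divergence-free, for single-mode bosonic Gaussian channels.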
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Schmidt, C. C.; Hoffman, J.; Giglio, L.; Peterson, D. A.
2013-12-01
Polar and geostationary satellites are used operationally for fire detection and smoke source estimation by many near-real-time operational users, including operational forecast centers around the globe. The input satellite radiance data are processed by data providers to produce Level-2 and Level-3 fire detection products, but processing these data into spatially and temporally consistent estimates of fire activity requires a substantial amount of additional processing. The most significant processing steps are correction for variable coverage of the satellite observations and correction for conditions that affect the detection efficiency of the satellite sensors. We describe a system developed by the Naval Research Laboratory (NRL) that uses the full raster information from the entire constellation to diagnose detection opportunities, calculate corrections for factors such as the angular dependence of detection efficiency, and generate global estimates of fire activity at spatial and temporal scales suitable for atmospheric modeling. By incorporating these improved fire observations, smoke emissions products such as NRL's FLAMBE are able to produce improved estimates of global emissions. This talk provides an overview of the system, demonstrates the achievable improvement over older methods, and describes challenges for near-real-time implementation.
Ashrafi, Omid; Yerushalmi, Laleh; Haghighat, Fariborz
2013-03-01
Greenhouse gas (GHG) emissions in wastewater treatment plants of the pulp-and-paper industry were estimated using a dynamic mathematical model. Significant variations were shown in the magnitude of GHG generation in response to variations in operating parameters, demonstrating the limited capacity of steady-state models to predict the time-dependent emissions of these harmful gases. The examined treatment systems used aerobic, anaerobic, and hybrid anaerobic/aerobic biological processes along with chemical coagulation/flocculation, an anaerobic digester, nitrification and denitrification processes, and biogas recovery. The pertinent operating parameters included the influent substrate concentration, influent flow rate, and temperature. Although the average predictions of the dynamic model were only 10% different from those of the steady-state model during 140 days of operation of the examined systems, the daily variations of GHG emissions differed by up to ±30, ±19, and ±17% in the aerobic, anaerobic, and hybrid systems, respectively. The variations of process variables caused fluctuations in energy generation from biogas recovery of ±6, ±7, and ±4% in the three examined systems, respectively. The lowest variations were observed in the hybrid system, showing the stability of this particular process design.
A question driven socio-hydrological modeling process
NASA Astrophysics Data System (ADS)
Garcia, M.; Portney, K.; Islam, S.
2016-01-01
Human and hydrological systems are coupled: human activity impacts the hydrological cycle, and hydrological conditions can, but do not always, trigger changes in human systems. Traditional modeling approaches with no feedback between hydrological and human systems typically cannot offer insight into how different patterns of natural variability or human-induced changes may propagate through this coupled system. Modeling of coupled human-hydrological systems, also called socio-hydrological systems, recognizes the potential for humans to transform hydrological systems and for hydrological conditions to influence human behavior. However, this coupling introduces new challenges, and existing literature does not offer clear guidance regarding model conceptualization. There are no universally accepted laws of human behavior as there are for physical systems; furthermore, a shared understanding of important processes within the field is often used to develop hydrological models, but there is no such consensus on the relevant processes in socio-hydrological systems. Here we present a question-driven process to address these challenges. Such an approach allows modeling structure, scope, and detail to remain contingent on and adaptive to the question context. We demonstrate the utility of this process by revisiting a classic question in water resources engineering on reservoir operation rules: what is the impact of reservoir operation policy on the reliability of water supply for a growing city? Our example model couples hydrological and human systems by linking the rate of demand decrease to past reliability in order to compare a standard operating policy (SOP) with a hedging policy (HP). The model shows that reservoir storage acts both as a buffer for variability and as a delay triggering oscillations around a sustainable level of demand. HP reduces the threshold for action, thereby decreasing the delay and the oscillation effect. As a result, per capita demand decreases during periods of water stress are more frequent but less drastic, and the additive effect of small adjustments decreases the tendency of the system to overshoot available supplies. This distinction between the two policies was not apparent using a traditional noncoupled model.
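The SOP-versus-HP contrast can be made concrete with a toy mass-balance simulation (a sketch; the paper's coupled model additionally feeds past reliability back into demand, which is omitted here). All numbers are illustrative:

    def simulate(inflows, demand, k_max, policy="SOP", h=0.6):
        s, releases = 0.5 * k_max, []
        for q in inflows:
            avail = s + q
            if policy == "SOP":
                r = min(demand, avail)      # meet demand fully until water runs out
            else:                           # HP: curtail early when water is short
                r = min(demand, avail if avail >= demand / h else h * avail)
            s = min(avail - r, k_max)       # spill anything above capacity
            releases.append(round(r, 1))
        return releases

    inflows = [80, 120, 40, 30, 150, 60]
    print(simulate(inflows, demand=110, k_max=200, policy="SOP"))  # one deep shortfall
    print(simulate(inflows, demand=110, k_max=200, policy="HP"))   # frequent mild cuts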
Wind Plant Performance Prediction (WP3) Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, Anna
The methods for analysis of operational wind plant data are highly variable across the wind industry, leading to high uncertainties in the validation and bias correction of preconstruction energy estimation methods. Lack of credibility in the preconstruction energy estimates leads to significant impacts on project financing and therefore on the final levelized cost of energy for the plant. In this work, the variation in the evaluation of a wind plant's operational energy production as a result of variations in the processing methods applied to the operational data is examined. Preliminary results indicate that the selection of the filters applied to the data and the filter parameters can have significant impacts on the final computed assessment metrics.
DNET: A communications facility for distributed heterogeneous computing
NASA Technical Reports Server (NTRS)
Tole, John; Nagappan, S.; Clayton, J.; Ruotolo, P.; Williamson, C.; Solow, H.
1989-01-01
This document describes DNET, a heterogeneous data communications networking facility. DNET allows programs operating on hosts on dissimilar networks to communicate with one another without concern for computer hardware, network protocol, or operating system differences. The overall DNET network is defined as the collection of host machines/networks on which the DNET software is operating. Each underlying network is considered a DNET 'domain'. Data communications service is provided between any two processes on any two hosts on any of the networks (domains) that may be reached via DNET. DNET provides protocol-transparent, reliable, streaming data transmission between hosts (restricted initially to DECnet and TCP/IP networks). DNET also provides a variable-length datagram service with optional return receipts.
NASA Astrophysics Data System (ADS)
Nugraha, M. G.; Utari, S.; Saepuzaman, D.; Nugraha, F.
2018-05-01
Scientific process skills (SPS) are intellectual skills used to build knowledge, solve problems scientifically, and train thinking; they are a very important part of the inquiry process and contribute to scientific literacy. Therefore, SPS are very important to develop. This study aims to develop Student Worksheets (SW) that can trace SPS through basic physics experiments (BPE) on Melde's law. The research uses the R&D method, involving as a sample 18 physics education department students who take the BPE course. The research instrument is an SW designed with an SPS approach that has been reviewed and judged by experts, covering observing, communicating, classifying, measuring, inferring, predicting, identifying variables, constructing hypotheses, defining variables operationally, designing experiments, and acquiring and processing data to reach conclusions. The results show that the students' SPS have not been trained optimally: the students' answers are derived not from the observations and experiments conducted but from their prior knowledge, as also seen in the determination of experimental variables, inferring, and constructing hypotheses. This result is further supported by a low increase in conceptual understanding of Melde's law, with an n-gain of 0.40. The research findings are used as the basis for the redesign of the SW.
Chrestenson transform FPGA embedded factorizations.
Corinthios, Michael J
2016-01-01
Chrestenson generalized Walsh transform factorizations for parallel-processing embedded implementations on field programmable gate arrays are presented. This general-base transform, sometimes referred to as the Discrete Chrestenson transform, has received special attention in recent years. In fact, the Discrete Fourier transform and the Walsh-Hadamard transform are but special cases of the Chrestenson generalized Walsh transform. Rotations of a base-p hypercube, where p is an arbitrary integer, are shown to produce dynamic contention-free memory allocation in the processor architecture. The approach is illustrated by factorizations involving the processing of transform matrices that are functions of four variables. Parallel operations are implemented as matrix multiplications. Each matrix, of dimension N x N, where N = p^n with n an integer, has a structure that depends on a variable parameter k that denotes the iteration number in the factorization process. The level of parallelism, in the form of M = p^m processors, can be chosen arbitrarily by varying m from zero to its maximum value of n - 1. The result is an equation describing the generalized parallelism factorization as a function of the four variables n, p, k, and m. Applications of the approach are shown in relation to configuring field programmable gate arrays for digital signal processing applications.
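In the natural (Hadamard) ordering, the base-p Chrestenson generalized Walsh matrix of dimension N = p^n is the n-fold Kronecker power of the p-point DFT matrix; ordering conventions vary across the literature, but this gives a compact way to generate small instances for checking factorizations:

    import numpy as np

    def chrestenson(p, n):
        # n-fold Kronecker power of the p-point DFT matrix (unnormalized);
        # for p = 2 this reduces to the Walsh-Hadamard matrix.
        w = np.exp(-2j * np.pi / p)
        d = w ** np.outer(np.arange(p), np.arange(p))
        m = np.ones((1, 1), dtype=complex)
        for _ in range(n):
            m = np.kron(m, d)
        return m

    print(np.round(chrestenson(2, 2).real))  # 4 x 4 Walsh-Hadamard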
Treatment of leachate by electrocoagulation using aluminum and iron electrodes.
Ilhan, Fatih; Kurt, Ugur; Apaydin, Omer; Gonullu, M Talha
2008-06-15
In this paper, the treatment of leachate by electrocoagulation (EC) has been investigated in a batch process. The leachate sample was obtained from the Odayeri Landfill Site in Istanbul. First, EC was compared with the classical chemical coagulation (CC) process in terms of COD removal. The first comparison, at a current density of 348 A/m2, showed that the EC process has higher treatment performance than the CC process. Second, the effects of process variables such as electrode material, current density (from 348 to 631 A/m2), pH, treatment cost, and operating time on COD and NH4-N removal efficiencies were investigated for the EC process. The search for an appropriate electrode type showed that aluminum achieves higher COD removal (56%) than iron (35%) at the end of a 30 min operating time. Finally, EC experiments were continued to determine the efficiency of ammonia removal and the effects of current density, mixing, and aeration. All the findings of the study revealed that treatment of leachate by EC can be used as a step in a joint treatment scheme.
EnergySolution's Clive Disposal Facility Operational Research Model - 13475
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nissley, Paul; Berry, Joanne
2013-07-01
EnergySolutions owns and operates a licensed, commercial low-level radioactive waste disposal facility located in Clive, Utah. The Clive site receives low-level radioactive waste from various locations within the United States via bulk truck, containerised truck, enclosed truck, bulk rail-cars, rail boxcars, and rail inter-modals. Waste packages are unloaded, characterized, processed, and disposed of at the Clive site. Examples of low-level radioactive waste arriving at Clive include, but are not limited to, contaminated soil/debris, spent nuclear power plant components, and medical waste. Generators of low-level radioactive waste typically include nuclear power plants, hospitals, national laboratories, and various waste sites operated by the United States government. Over the past few years, poor economic conditions have significantly reduced the number of shipments to Clive. With less revenue coming in from processing shipments, Clive needed to keep its expenses down if it was going to maintain past levels of profitability. The Operational Research group of EnergySolutions was asked to develop a simulation model to help identify improvement opportunities that would increase overall operating efficiency and reduce costs at the Clive facility. The Clive operations research model simulates the receipt, movement, and processing requirements of shipments arriving at the facility. The model includes shipment schedules, processing times of various waste types, labor requirements, shift schedules, and site equipment availability. The Clive operations research model has been developed using the WITNESS™ process simulation software, developed by the Lanner Group. The major goals of this project were to: - identify processing bottlenecks that could reduce the turnaround time from shipment arrival to disposal; - evaluate the use (or idle time) of labor and equipment; - project future operational requirements under different forecasted scenarios. By identifying processing bottlenecks and unused equipment and/or labor, improvements to operating efficiency could be determined and appropriate cost-saving measures implemented. Model runs forecasting various scenarios helped illustrate the potential impacts of certain conditions (e.g., a 20% decrease in shipment arrivals), variables (e.g., a 20% decrease in labor), or other possible situations. (authors)
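A discrete-event skeleton of such a model (shipments arriving, queueing for a shared crew, being processed) can be sketched with the open-source simpy library standing in for the commercial WITNESS software; all rates and capacities below are invented for illustration:

    import random
    import simpy

    def shipment(env, name, crew):
        arrive = env.now
        with crew.request() as req:       # wait for an unloading/processing crew
            yield req
            yield env.timeout(random.expovariate(1 / 4.0))  # ~4 h processing
        print(f"{name}: turnaround {env.now - arrive:.1f} h")

    def generator(env, crew):
        for i in range(10):
            env.process(shipment(env, f"truck-{i}", crew))
            yield env.timeout(random.expovariate(1 / 2.0))  # ~2 h between arrivals

    random.seed(0)
    env = simpy.Environment()
    crew = simpy.Resource(env, capacity=2)  # try capacity=1 to expose a bottleneck
    env.process(generator(env, crew))
    env.run()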
A method for developing outcome measures in the clinical laboratory.
Jones, J
1996-01-01
Measuring and reporting outcomes in health care is becoming more important for quality assessment, utilization assessment, accreditation standards, and negotiating contracts in managed care. How does one develop an outcome measure for the laboratory to assess the value of its services? A method is described which outlines seven steps in developing outcome measures for a laboratory service or process. These steps are: 1. Identify the process or service to be monitored for performance and outcome assessment. 2. If necessary, form a multidisciplinary team of laboratory staff, other department staff, physicians, and pathologists. 3. State the purpose of the test or service, including a review of published data for the clinical-pathological correlation. 4. Prepare a process cause-and-effect diagram including steps critical to the outcome. 5. Identify key process variables that contribute to positive or negative outcomes. 6. Identify outcome measures that are not process measures. 7. Develop an operational definition, identify data sources, and collect data. Examples, including a process cause-and-effect diagram, process variables, and outcome measures, are given using the therapeutic drug monitoring (TDM) service. A summary of conclusions and precautions for outcome measurement is then provided.
Zhang, Hang; Xu, Qingyan; Liu, Baicheng
2014-01-01
The rapid development of numerical modeling techniques has led to more accurate results in modeling metal solidification processes. In this study, the cellular automaton-finite difference (CA-FD) method was used to simulate the directional solidification (DS) process of single-crystal (SX) superalloy blade samples. Experiments were carried out to validate the simulation results. Meanwhile, an intelligent model based on fuzzy control theory was built to optimize the complicated DS process. Several key parameters, such as the mushy zone width and the temperature difference at the cast-mold interface, were chosen as the input variables. The input variables were processed with a multivariable fuzzy rule to obtain the output adjustment of the withdrawal rate (v), a key technological parameter. The multivariable fuzzy rule was built based on structural features of the casting (such as the relationship between section area and the delay of the temperature response to changes in v) as well as the professional experience of the operator. The fuzzy control model coupled with the CA-FD method could then be used to optimize v in real time during the manufacturing process. The optimized process proved more flexible and adaptive, yielding a steady, stray-grain-free DS process. PMID:28788535
Silva, A F; Sarraguça, M C; Fonteyne, M; Vercruysse, J; De Leersnyder, F; Vanhoorne, V; Bostijn, N; Verstraeten, M; Vervaet, C; Remon, J P; De Beer, T; Lopes, J A
2017-08-07
A multivariate statistical process control (MSPC) strategy was developed for the monitoring of the ConsiGma™-25 continuous tablet manufacturing line. Thirty-five logged variables encompassing three major units, namely a twin screw high shear granulator, a fluid bed dryer, and a product control unit, were used to monitor the process. The MSPC strategy was based on principal component analysis of data acquired under normal operating conditions using a series of four process runs. Runs with imposed disturbances in the dryer air flow and temperature, in the granulator barrel temperature, speed, and liquid mass flow, and in the powder dosing unit mass flow were used to evaluate the model's monitoring performance. The impact of the imposed deviations on process continuity was also evaluated using Hotelling's T² and Q residual statistics control charts. The influence of the individual process variables was assessed by analyzing contribution plots at specific time points. Results show that the imposed disturbances were all detected in both control charts. Overall, the MSPC strategy was successfully developed and applied. Additionally, deviations not associated with the imposed changes were detected, mainly in the granulator barrel temperature control. Copyright © 2017 Elsevier B.V. All rights reserved.
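A minimal version of such a monitoring scheme can be assembled from an off-the-shelf PCA: fit on normal-operating-condition (NOC) data, then score new samples with Hotelling's T² in the model subspace and Q (squared prediction error) in the residual subspace. The data, component count, and percentile-based limits below are assumptions for illustration:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    X_noc = rng.normal(size=(400, 35))      # stand-in for 35 logged NOC variables
    mu, sd = X_noc.mean(0), X_noc.std(0)
    pca = PCA(n_components=5).fit((X_noc - mu) / sd)   # retained components: assumed

    def t2_q(x):
        z = (x - mu) / sd
        t = pca.transform(z[None, :])[0]
        t2 = np.sum(t**2 / pca.explained_variance_)    # Hotelling's T2
        resid = z - pca.inverse_transform(t[None, :])[0]
        return t2, np.sum(resid**2)                    # Q statistic (SPE)

    scores = np.array([t2_q(x) for x in X_noc])
    lim_t2, lim_q = np.percentile(scores, 99, axis=0)  # empirical 99% limits
    t2, q = t2_q(rng.normal(size=35) + 3.0)            # a disturbed sample
    print(t2 > lim_t2, q > lim_q)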
NASA Astrophysics Data System (ADS)
Zheng, Zhongchao; Seto, Tatsuru; Kim, Sanghong; Kano, Manabu; Fujiwara, Toshiyuki; Mizuta, Masahiko; Hasebe, Shinji
2018-06-01
The Czochralski (CZ) process is the dominant method for manufacturing large cylindrical single-crystal ingots for the electronics industry. Although many models and control methods for the CZ process have been proposed, they were only tested on small equipment, and only a few industrial applications have been reported. In this research, we constructed a first-principles model for controlling industrial CZ processes that produce 300 mm single-crystal silicon ingots. The developed model, which consists of energy balance, mass balance, hydrodynamic, and geometrical equations, calculates the crystal radius and the crystal growth rate as output variables by using the heater input, the crystal pulling rate, and the crucible rise rate as input variables. To improve accuracy, we modeled the CZ process by considering factors such as changes in the positions of the crucible and the melt level. The model was validated with operation data from an industrial 300 mm CZ process. We compared the calculated and actual values of the crystal radius and the crystal growth rate, and the results demonstrated that the developed model simulates the industrial process with high accuracy.
Ratcliffe, Elizabeth; Hourd, Paul; Guijarro-Leach, Juan; Rayment, Erin; Williams, David J; Thomas, Robert J
2013-01-01
Commercial regenerative medicine will require large quantities of clinical-specification human cells. The cost and quality of manufacture is notoriously difficult to control due to highly complex processes with poorly defined tolerances. As a step to overcome this, we aimed to demonstrate the use of 'quality-by-design' tools to define the operating space for economic passage of a scalable human embryonic stem cell production method with minimal cell loss. Design of experiments response surface methodology was applied to generate empirical models to predict optimal operating conditions for a unit of manufacture of a previously developed automatable and scalable human embryonic stem cell production method. Two models were defined to predict cell yield and cell recovery rate postpassage, in terms of the predictor variables of media volume, cell seeding density, media exchange and length of passage. Predicted operating conditions for maximized productivity were successfully validated. Such 'quality-by-design' type approaches to process design and optimization will be essential to reduce the risk of product failure and patient harm, and to build regulatory confidence in cell therapy manufacturing processes.
Design of experiments applications in bioprocessing: concepts and approach.
Kumar, Vijesh; Bhalla, Akriti; Rathore, Anurag S
2014-01-01
Most biotechnology unit operations are complex in nature, with numerous process variables, feed material attributes, and raw material attributes that can have a significant impact on the performance of the process. A design of experiments (DOE)-based approach offers a solution to this conundrum and allows for an efficient estimation of the main effects and the interactions with a minimal number of experiments. Numerous publications illustrate the application of DOE towards development of different bioprocessing unit operations. However, a systematic approach for evaluation of the different DOE designs and for choosing the optimal design for a given application has not been published yet. Through this work we have compared the I-optimal and D-optimal designs to the commonly used central composite and Box-Behnken designs for bioprocess applications. A systematic methodology is proposed for construction of the model and for precise prediction of the responses for three case studies involving some of the commonly used unit operations in downstream processing. Use of the Akaike information criterion for model selection has been examined and found to be suitable for the applications under consideration. © 2013 American Institute of Chemical Engineers.
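For concreteness, a central composite design (one of the designs compared) consists of 2^k factorial corners, 2k axial star points, and replicated center points in coded units; a rotatable design places the star points at distance alpha = (2^k)^(1/4). A small generator follows, with the center-point count chosen arbitrarily:

    import itertools
    import numpy as np

    def central_composite(k, alpha=None, n_center=4):
        alpha = alpha or (2 ** k) ** 0.25     # rotatable star distance
        corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
        stars = np.vstack([v * alpha * np.eye(k)[i]
                           for i in range(k) for v in (-1, 1)])
        center = np.zeros((n_center, k))
        return np.vstack([corners, stars, center])

    print(central_composite(3).shape)  # (8 corners + 6 stars + 4 centers, 3)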
NASA Astrophysics Data System (ADS)
Li, Zehua; Hao, Zhenchun; Shi, Xiaogang; Déry, Stephen J.; Li, Jieyou; Chen, Sichun; Li, Yongkun
2016-08-01
To support the decision-making process and reduce climate change impacts, hydrologically based drought indices have been used to determine drought severity in the Tarim River Basin (TRB) over the past decades. As major components of the surface water balance, however, the irrigation process and reservoir operations have not been incorporated into drought indices in previous studies. Therefore, efforts are needed to develop a new agricultural drought index based on the Variable Infiltration Capacity (VIC) model coupled with an irrigation scheme and a reservoir module. The new drought index was derived from simulated soil moisture data from a retrospective VIC simulation from 1961 to 2007 over the irrigated area of the TRB. The physical processes in the coupled VIC model allow the new agricultural drought index to take into account a wide range of hydrologic processes, including the irrigation process and reservoir operations. Notably, the irrigation process was found to dominate the surface water balance and drought evolution in the TRB. Furthermore, the drought conditions identified by the new agricultural drought index showed good agreement with the historical drought events that occurred in 1993-94, 2004, and 2006-07. Moreover, the spatial distribution of the coupled VIC model outputs using the new drought index provided detailed information about where and to what extent droughts occurred.
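A generic way to turn simulated soil moisture into a drought index (shown as a sketch, not the paper's exact formulation) is to express each value as a percentile of its simulated climatology, with low percentiles flagging drought:

    import numpy as np

    def sm_drought_index(sm_values, climatology):
        # Rank each soil moisture value against the sorted climatology;
        # values near 0 indicate extreme drought, near 1 very wet conditions.
        clim = np.sort(climatology)
        return np.searchsorted(clim, sm_values) / len(clim)

    rng = np.random.default_rng(3)
    clim = rng.normal(0.30, 0.05, 47 * 12)   # e.g., 1961-2007 monthly values
    print(sm_drought_index(np.array([0.18, 0.30, 0.41]), clim))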
Fayed, Mohamed H; Abdel-Rahman, Sayed I; Alanazi, Fars K; Ahmed, Mahrous O; Tawfeek, Hesham M; Al-Shedfat, Ramadan I
2017-01-01
Application of quality by design (QbD) to the high shear granulation process is critical and requires recognizing the correlation between the granulation process parameters and the properties of the intermediate (granules) and the corresponding final product (tablets). The present work examined the influence of water amount (X1) and wet massing time (X2) as independent process variables on the critical quality attributes of granules and the corresponding tablets using the design of experiments (DoE) technique. A two-factor, three-level (3^2) full factorial design was performed; each of these variables was investigated at three levels to characterize their strength and interaction. The dried granules were analyzed for their size distribution, density, and flow pattern. Additionally, the produced tablets were investigated for weight uniformity, crushing strength, friability and percent capping, disintegration time, and drug dissolution. A statistically significant impact (p < 0.05) of water amount was identified for granule growth, percent fines, distribution width, and flow behavior. Granule density and compressibility were found to be significantly influenced (p < 0.05) by the two operating conditions. Water amount also had a significant effect (p < 0.05) on tablet weight uniformity, friability, and percent capping. Moreover, tablet disintegration time and drug dissolution appear to be significantly influenced (p < 0.05) by the two process variables. Finally, the relationship of the process parameters with the critical quality attributes of the granules and the final tablet product was identified and correlated. Ultimately, a judicious selection of process parameters in the high shear granulation process will allow production of a product of desirable quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haas, Nicholas A.; O'Connor, Ben L.; Hayse, John W.
2014-07-22
Environmental flows are an important consideration in licensing hydropower projects because operational flow releases can result in adverse conditions for downstream ecological communities. Flow variability assessments have typically focused on pre- and post-dam conditions using metrics based on daily-averaged flow values. This study used subdaily and daily flow data to assess environmental flow responses to changes in hydropower operations from daily-peaking to run-of-river. An analysis tool was developed to quantify subdaily to seasonal flow variability metrics and was applied to four hydropower projects that underwent operational changes based on regulatory requirements. Results indicate that the distribution of flows is significantly different between daily-peaking and run-of-river operations and that daily-peaking operations are flashier than run-of-river operations; these differences are seen using hourly-averaged flow datasets and are less pronounced or not noticeable using daily-averaged flow datasets. Of all the variability metrics analyzed, hydrograph rise and fall rates were the most sensitive to using daily versus subdaily flow data. This outcome has implications for the development of flow-ecology relationships that quantify the effects of rate of change on processes such as fish stranding and displacement, along with habitat stability. The quantification of flow variability statistics should be done using subdaily datasets and metrics to accurately represent the nature of hydropower operations, especially for facilities that utilize daily-peaking operations.
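The most sensitive metrics, hydrograph rise and fall rates, are simply the means of the positive and negative flow differences, and the hourly-versus-daily contrast is easy to reproduce; the hydrograph below is a synthetic stand-in for a daily-peaking release pattern:

    import numpy as np
    import pandas as pd

    idx = pd.date_range("2020-06-01", periods=24 * 7, freq="h")
    # Synthetic daily-peaking hydrograph: baseflow plus afternoon release blocks
    q = 20 + 80 * (np.sin(np.arange(len(idx)) * 2 * np.pi / 24) > 0.7)
    flow = pd.Series(q, index=idx, dtype=float)

    def rise_fall_rates(series):
        d = series.diff().dropna()
        return d[d > 0].mean(), d[d < 0].mean()

    print("hourly:", rise_fall_rates(flow))                       # sharp ramps
    print("daily: ", rise_fall_rates(flow.resample("D").mean()))  # ramps smoothed away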
Oviedo-Ocaña, E R; Torres-Lozada, P; Marmolejo-Rebellon, L F; Torres-López, W A; Dominguez, I; Komilis, D; Sánchez, A
2017-04-01
Biowaste is commonly the largest fraction of municipal solid waste (MSW) in developing countries. Although composting is an effective method to treat source-separated biowaste (SSB), there are certain limitations in terms of operation, partly due to insufficient control of the variability of SSB quality, which affects process kinetics and product quality. This study assesses the variability of the SSB physicochemical quality in a composting facility located in a small town of Colombia, in which SSB collection was performed twice a week. Likewise, the influence of the SSB physicochemical variability on the variability of compost parameters was assessed. Parametric and non-parametric tests (i.e., Student's t-test and the Mann-Whitney test) showed no significant differences in the quality parameters of SSB among collection days, and therefore it was unnecessary to establish specific operation and maintenance regulations for each collection day. Significant variability was found in eight of the twelve quality parameters analyzed in the inlet stream, with corresponding coefficients of variation (CV) higher than 23%. The CVs for the eight parameters analyzed in the final compost (i.e., pH, moisture, total organic carbon, total nitrogen, C/N ratio, total phosphorus, total potassium, and ash) ranged from 9.6% to 49.4%, with significant variations in five of those parameters (CV > 20%). These results indicate that variability in the inlet stream can affect the variability of the end product and suggest the need to consider inlet-stream variability in the operation of composting facilities in order to achieve a compost of consistent quality. Copyright © 2017 Elsevier Ltd. All rights reserved.
The application of statistically designed experiments to resistance spot welding
NASA Technical Reports Server (NTRS)
Hafley, Robert A.; Hales, Stephen J.
1991-01-01
State-of-the-art Resistance Spot Welding (RSW) equipment has the potential to permit real-time monitoring of operations through advances in computerized process control. In order to realize adaptive feedback capabilities, it is necessary to establish correlations among process variables, welder outputs, and weldment properties. The initial step toward achieving this goal must involve assessment of the effect of specific process inputs, and the interactions among these variables, on spot weld characteristics. This investigation evaluated these effects through the application of a statistically designed experiment to the RSW process. A half-factorial Taguchi L16 design was used to understand and refine an RSW schedule developed for welding dissimilar aluminum-lithium alloys of different thickness. The baseline schedule had been established previously by traditional trial-and-error methods based on engineering judgment and one-factor-at-a-time studies. A hierarchy of inputs with respect to each other was established, and the significance of these inputs with respect to experimental noise was determined. Useful insight was gained into the effect of interactions among process variables, particularly with respect to weldment defects. The effects of equipment-related changes associated with disassembly and recalibration were also identified. In spite of an apparent decrease in equipment performance, a significant improvement in the maximum strength of defect-free welds compared to the baseline schedule was achieved.
ERIC Educational Resources Information Center
Hogan, Lindsey C.; Bell, Matthew; Olson, Ryan
2009-01-01
The vigilance reinforcement hypothesis (VRH) asserts that errors in signal detection tasks are partially explained by operant reinforcement and extinction processes. VRH predictions were tested with a computerized baggage screening task. Our experiment evaluated the effects of signal schedule (extinction vs. variable interval 6 min) and visual…
Coordinated crew performance in commercial aircraft operations
NASA Technical Reports Server (NTRS)
Murphy, M. R.
1977-01-01
A specific methodology is proposed for an improved system of coding and analyzing crew member interaction. The complexity and lack of precision of many crew and task variables suggest the usefulness of fuzzy linguistic techniques for modeling and computer simulation of the crew performance process. Other research methodologies and concepts that have promise for increasing the effectiveness of research on crew performance are identified.
NASA Astrophysics Data System (ADS)
Biset, S.; Nieto Deglioumini, L.; Basualdo, M.; Garcia, V. M.; Serra, M.
The aim of this work is to investigate a good preliminary plantwide control structure for the process of hydrogen production from bioethanol for use in a proton exchange membrane (PEM) fuel cell, accounting for only steady-state information. The objective is to keep the process at the optimal operating point, that is, performing energy integration to achieve maximum efficiency. Ethanol, produced from renewable feedstocks, feeds a fuel processor investigated for steam reforming, followed by high- and low-temperature shift reactors and preferential oxidation, which are coupled to a polymeric fuel cell. Applying steady-state simulation techniques and using thermodynamic models, the performance of the complete system with two different control structures has been evaluated for the most typical perturbations. A sensitivity analysis of the key process variables, together with the rigorous operability requirements of the fuel cell, is taken into account in defining an acceptable plantwide control structure. This is the first work showing an alternative control structure applied to this kind of process.
Moazzami, Zeinab; Dehdari, Tahere; Taghdisi, Mohammad Hosein; Soltanian, Alireza
2016-01-01
Background: One of the preventive strategies for chronic low back pain among operating room nurses is instructing proper body mechanics and postural behavior, for which the use of the Transtheoretical Model (TTM) has been recommended. Methods: Eighty two nurses who were in the contemplation and preparation stages for adopting correct body posture were randomly selected (control group = 40, intervention group = 42). TTM variables and body posture were measured at baseline and again after 1 and 6 months after the intervention. A four-week ergonomics educational intervention based on TTM variables was designed and conducted for the nurses in the intervention group. Results: Following the intervention, a higher proportion of nurses in the intervention group moved into the action stage (p < 0.05). Mean scores of self-efficacy, pros, experimental processes and correct body posture were also significantly higher in the intervention group (p < 0.05). No significant differences were found in the cons and behavioral processes, except for self-liberation, between the two groups (p > 0.05) after the intervention. Conclusions: The TTM provides a suitable framework for developing stage-based ergonomics interventions for postural behavior. PMID:26925897
NASA Astrophysics Data System (ADS)
Surace, J.; Laher, R.; Masci, F.; Grillmair, C.; Helou, G.
2015-09-01
The Palomar Transient Factory (PTF) is a synoptic sky survey in operation since 2009. PTF utilizes a 7.1 square degree camera on the Palomar 48-inch Schmidt telescope to survey the sky primarily at a single wavelength (R-band) at a rate of 1000-3000 square degrees a night. The data are used to detect and study transient and moving objects such as gamma-ray bursts, supernovae and asteroids, as well as variable phenomena such as quasars and Galactic stars. The data processing system at IPAC handles real-time processing and detection of transients, solar system object processing, high-photometric-precision processing and light curve generation, and long-term archiving and curation. This was developed under an extremely limited budget profile in an unusually agile development environment. Here we discuss the mechanics of this system and our overall development approach. Although a significant scientific installation in and of itself, PTF also serves as the prototype for our next-generation project, the Zwicky Transient Facility (ZTF). Beginning operations in 2017, ZTF will feature a 50 square degree camera which will enable scanning of the entire northern visible sky every night. ZTF in turn will serve as a stepping stone to the Large Synoptic Survey Telescope (LSST), a major NSF facility scheduled to begin operations in the early 2020s.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, J.; Mowrey, J.
1995-12-01
This report describes the design, development and testing of process controls for selected system operations in the Browns Ferry Nuclear Plant (BFNP) Reactor Water Cleanup System (RWCU) using a Computer Simulation Platform which simulates the RWCU System and the BFNP Integrated Computer System (ICS). This system was designed to demonstrate the feasibility of the soft control (video touch screen) of nuclear plant systems through an operator console. The BFNP Integrated Computer System, which has recently been installed at BFNP Unit 2, was simulated to allow for operator control functions of the modeled RWCU system. The BFNP Unit 2 RWCU system was simulated using the RELAP5 Thermal/Hydraulic Simulation Model, which provided the steady-state and transient RWCU process variables and simulated the response of the system to control system inputs. Descriptions of the hardware and software developed are also included in this report, as are the testing and acceptance program and its results. A discussion of potential installation of an actual RWCU process control system in BFNP Unit 2 is included. Finally, this report contains a section on industry issues associated with installation of process control systems in nuclear power plants.
A nonlinear cointegration approach with applications to structural health monitoring
NASA Astrophysics Data System (ADS)
Shi, H.; Worden, K.; Cross, E. J.
2016-09-01
One major obstacle to the implementation of structural health monitoring (SHM) is the effect of operational and environmental variabilities, which may corrupt the signal of structural degradation. Recently, an approach inspired from the community of econometrics, called cointegration, has been employed to eliminate the adverse influence from operational and environmental changes and still maintain sensitivity to structural damage. However, the linear nature of cointegration may limit its application when confronting nonlinear relations between system responses. This paper proposes a nonlinear cointegration method based on Gaussian process regression (GPR); the method is constructed under the Engle-Granger framework, and tests for unit root processes are conducted both before and after the GPR is applied. The proposed approach is examined with real engineering data from the monitoring of the Z24 Bridge.
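The Engle-Granger recipe the authors adapt is easy to sketch: confirm each response series is itself non-stationary, learn the long-run relation between them, then test the residual for stationarity. Below is a minimal Python illustration in which a Gaussian process replaces the usual linear cointegrating regression; the synthetic series, kernel choice, and seed are assumptions for illustration, not the paper's Z24 Bridge data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 400
trend = np.cumsum(rng.normal(size=n))            # shared unit-root driver (e.g., temperature)
x = trend + rng.normal(scale=0.2, size=n)        # response of sensor 1
y = 0.05 * trend**2 + rng.normal(scale=0.2, size=n)  # nonlinear response of sensor 2

# Pre-test: both series should individually look non-stationary (unit root).
print("ADF p-value, x:", adfuller(x)[1])
print("ADF p-value, y:", adfuller(y)[1])

# Learn the nonlinear long-run relation y = f(x) with a GP.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(x.reshape(-1, 1), y)
residual = y - gp.predict(x.reshape(-1, 1))

# Post-test: a stationary residual indicates cointegration; structural damage
# would show up as a persistent shift in this residual.
print("ADF p-value, residual:", adfuller(residual)[1])
```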
Sharma, Praveen; Singh, Lakhvinder; Dilbaghi, Neeraj
2009-05-30
Decolorization of the textile azo dye Disperse Yellow 211 (DY 211) from simulated aqueous solution was carried out by the bacterial strain Bacillus subtilis. Response surface methodology (RSM), using a Box-Behnken design matrix in the three most important operating variables (temperature, pH and initial dye concentration), was successfully employed for the study and optimization of the decolorization process. A total of 17 experiments were conducted toward the construction of a quadratic model. According to the analysis of variance (ANOVA) results, the proposed model can be used to navigate the design space. Under optimized conditions the bacterial strain was able to decolorize DY 211 by up to 80%. The model indicated that an initial dye concentration of 100 mg l(-1), pH 7 and a temperature of 32.5 degrees C were optimal for maximum decolorization. A very high regression coefficient between the variables and the response (R(2)=0.9930) indicated an excellent fit of the experimental data by the polynomial regression model. The variable combination predicted through RSM was confirmed through confirmatory experiments; hence the bacterial strain holds great potential for the treatment of colored textile effluents.
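The Box-Behnken design and quadratic fit are standard enough to reconstruct as a sketch. The snippet below builds the 17-run, three-factor design in coded units (12 edge points plus 5 centre runs, matching the run count above) and fits the full quadratic model by least squares; the response values are hypothetical placeholders, not the reported data.

```python
import itertools
import numpy as np

# 3-factor Box-Behnken design in coded units: for each pair of factors, all
# (+/-1, +/-1) combinations with the third factor at 0, plus centre runs.
edges = []
for (i, j) in [(0, 1), (0, 2), (1, 2)]:
    for a, b in itertools.product([-1, 1], repeat=2):
        p = [0, 0, 0]
        p[i], p[j] = a, b
        edges.append(p)
X = np.array(edges + [[0, 0, 0]] * 5, dtype=float)   # 12 + 5 = 17 runs

# Hypothetical decolorization responses (%); real values would come from the runs.
y = np.array([62, 70, 65, 74, 60, 72, 64, 76, 66, 71, 68, 75, 79, 80, 78, 80, 79], float)

def quad(X):
    """Full quadratic model matrix: intercept, linear, interaction, squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

beta, *_ = np.linalg.lstsq(quad(X), y, rcond=None)
pred = quad(X) @ beta
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("coefficients:", np.round(beta, 2), " R^2 =", round(r2, 3))
```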
Treatment of winery wastewater by physicochemical, biological and advanced processes: a review.
Ioannou, L A; Li Puma, G; Fatta-Kassinos, D
2015-04-09
Winery wastewater is a major waste stream resulting from numerous cleaning operations that occur during the production stages of wine. The resulting effluent contains various organic and inorganic contaminants and its environmental impact is notable, mainly due to its high organic/inorganic load, the large volumes produced and its seasonal variability. Several processes for the treatment of winery wastewater are currently available, but the development of alternative treatment methods is necessary in order to (i) maximize the efficiency and flexibility of the treatment process to meet the discharge requirements for winery effluents, and (ii) decrease both the environmental footprint and the investment/operational costs of the process. This review presents the state-of-the-art of the processes currently applied and/or tested for the treatment of winery wastewater, divided into five categories: physicochemical, biological, membrane filtration and separation, advanced oxidation processes, and combined biological and advanced oxidation processes. The advantages and disadvantages, as well as the main parameters/factors affecting the efficiency of winery wastewater treatment, are discussed. Both bench- and pilot/industrial-scale processes have been considered for this review. Copyright © 2014 Elsevier B.V. All rights reserved.
Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O
2017-08-01
Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing highly flexible and variable medical processes in sufficient detail. We combined two modeling approaches, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes, using the new standards Case Management Model and Notation (CMMN) and Decision Model and Notation (DMN). First, we explain how CMMN, DMN and BPMN can be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system to cover more complex situations that might appear during an intervention. Effective modeling of the cataract intervention is possible using the combination of BPM and ACM, which makes it possible to depict complex processes with complex decisions. This combination offers a significant advantage for modeling perioperative processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wall, Nathalie; Nash, Ken; Martin, Leigh
In response to the NEUP Program Supporting Fuel Cycle R&D Separations and Waste Forms call DE-FOA-0000799, this report describes the results of an R&D project focusing on streamlining separation processes for advanced fuel cycles. An example of such a process relevant to the U.S. DOE FCR&D program would be one combining the functions of the TRUEX process for partitioning of lanthanides and minor actinides from PUREX(UREX) raffinates with that of the TALSPEAK process for separating transplutonium actinides from fission product lanthanides. A fully-developed PUREX(UREX)/TRUEX/TALSPEAK suite would generate actinides as product(s) for reuse (or transmutation) and fission products as waste. As standalone, consecutive unit operations, TRUEX and TALSPEAK employ different extractant solutions (solvating (CMPO, octyl(phenyl)-N,N-diisobutylcarbamoylmethylphosphine oxide) vs. cation-exchanging (HDEHP, di-2(ethyl)hexylphosphoric acid) extractants) and distinct aqueous phases (2-4 M HNO3 vs. concentrated pH 3.5 carboxylic acid buffers containing actinide-selective chelating agents). The separate processes may also operate with different phase-transfer kinetic constraints. Experience teaches (and it has been demonstrated at the lab scale) that, with proper control, multiple-process separation systems can operate successfully. However, it is also recognized that considerable economies of scale could be achieved if multiple operations could be merged into a single process based on a combined extractant solvent. The task of accountability of nuclear materials through the process(es) also becomes more robust with fewer steps, provided that the processes can be accurately modeled. Work is underway in the U.S. and Europe on developing several new options for combined processes (TRUSPEAK, ALSEP, SANEX, GANEX, and ExAm are examples). There are unique challenges associated with the operation of such processes, some relating to organic phase chemistry, others arising from the variable composition of the aqueous medium. This project targets in particular two problematic issues in designing combined process systems: managing the chemistry of challenging aqueous species (like Zr4+) and optimizing the composition and properties of combined extractant organic phases.
Landsat-8 TIRS thermal radiometric calibration status
Barsi, Julia A.; Markham, Brian L.; Montanaro, Matthew; Gerace, Aaron; Hook, Simon; Schott, John R.; Raqueno, Nina G.; Morfitt, Ron
2017-01-01
The Thermal Infrared Sensor (TIRS) instrument is the thermal-band imager on the Landsat-8 platform. The initial on-orbit calibration estimates of the two TIRS spectral bands indicated large average radiometric calibration errors, -0.29 and -0.51 W/m2 sr μm (-2.1 K and -4.4 K at 300 K) in Bands 10 and 11, respectively, as well as high variability in the errors, 0.87 K and 1.67 K (1-σ), respectively. The average error was corrected in operational processing in January 2014, though this adjustment did not improve the variability. The source of the variability was determined to be stray light from far outside the field of view of the telescope. An algorithm for modeling the stray light effect was developed and implemented in the Landsat-8 processing system in February 2017. The new process has improved the overall calibration of the two TIRS bands, reducing the residual variability in the calibration from 0.87 K to 0.51 K at 300 K for Band 10 and from 1.67 K to 0.84 K at 300 K for Band 11. There are residual average lifetime bias errors in each band: 0.04 W/m2 sr μm (0.30 K) and -0.04 W/m2 sr μm (-0.29 K) for Bands 10 and 11, respectively.
Driscoll, Jessica; Hay, Lauren E.; Bock, Andrew R.
2017-01-01
Assessment of water resources at a national scale is critical for understanding their vulnerability to future change in policy and climate. Representation of the spatiotemporal variability in snowmelt processes in continental-scale hydrologic models is critical for assessment of water resource response to continued climate change. Continental-extent hydrologic models such as the U.S. Geological Survey National Hydrologic Model (NHM) represent snowmelt processes through the application of snow depletion curves (SDCs). SDCs relate normalized snow water equivalent (SWE) to normalized snow covered area (SCA) over a snowmelt season for a given modeling unit. SDCs were derived using output from the operational Snow Data Assimilation System (SNODAS) snow model as daily 1-km gridded SWE over the conterminous United States. Daily SNODAS output were aggregated to a predefined watershed-scale geospatial fabric and also used to calculate SCA from October 1, 2004 to September 30, 2013. The spatiotemporal variability in SNODAS output at the watershed scale was evaluated through the spatial distribution of the median and standard deviation for the time period. Representative SDCs for each watershed-scale modeling unit over the conterminous United States (n = 54,104) were selected using a consistent methodology and used to create categories of snowmelt based on SDC shape. The relation of SDC categories to topographic and climatic variables allows for national-scale categorization of snowmelt processes.
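A snow depletion curve of this kind can be sketched directly: aggregate gridded SWE over a modeling unit, normalize, and fit a curve of SCA against normalized SWE over the melt season. The Python sketch below uses synthetic per-cell SWE in place of SNODAS output, and the one-parameter power-law SDC shape is an assumption for illustration rather than the NHM's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(7)
days, cells = 120, 500
peak = rng.uniform(0.1, 1.0, size=cells)                  # per-cell peak SWE, metres
melt = np.linspace(0, 1.2, days)[:, None] * rng.uniform(0.5, 1.5, size=cells)
swe = np.clip(peak - melt * peak, 0, None)                # each cell melts out at its own rate

swe_total = swe.sum(axis=1)                               # unit-wide SWE per day
sca = (swe > 0).mean(axis=1)                              # fraction of cells snow-covered
norm_swe = swe_total / swe_total.max()

# Fit SCA = norm_swe ** k, a simple one-parameter SDC shape (an assumption here).
mask = (norm_swe > 0.01) & (sca > 0.01)
k = np.polyfit(np.log(norm_swe[mask]), np.log(sca[mask]), 1)[0]
print("fitted SDC exponent k =", round(k, 2))
```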
Satellite Analysis of Ocean Biogeochemistry and Mesoscale Variability in the Sargasso Sea
NASA Technical Reports Server (NTRS)
Siegel, D. A.; Micheals, A. F.; Nelson, N. B.
1997-01-01
The objective of this study was to analyze the impact of spatial variability on the time-series of biogeochemical measurements made at the U.S. JGOFS Bermuda Atlantic Time-series Study (BATS) site. Originally the study was planned to use SeaWiFS as well as AVHRR high-resolution data. Despite the SeaWiFS delays we were able to make progress on the following fronts: (1) Operational acquisition, processing, and archive of HRPT data from a ground station located in Bermuda; (2) Validation of AVHRR SST data using BATS time-series and spatial validation cruise CTD data; (3) Use of AVHRR sea surface temperature imagery and ancillary data to assess the impact of mesoscale spatial variability on P(CO2) and carbon flux in the Sargasso Sea; (4) Spatial and temporal extent of tropical cyclone induced surface modifications; and (5) Assessment of eddy variability using TOPEX/Poseidon data.
Experimental demonstration of entanglement-assisted coding using a two-mode squeezed vacuum state
NASA Astrophysics Data System (ADS)
Mizuno, Jun; Wakui, Kentaro; Furusawa, Akira; Sasaki, Masahide
2005-01-01
We have experimentally realized the scheme initially proposed as quantum dense coding with continuous variables [
Using a 3D profiler and infrared camera to monitor oven loading in fully cooked meat operations
NASA Astrophysics Data System (ADS)
Stewart, John; Giorges, Aklilu
2009-05-01
Ensuring meat is fully cooked is an important food safety issue for operations that produce "ready to eat" products. In order to kill harmful pathogens like Salmonella, all of the product must reach a minimum threshold temperature. Producers typically overcook the majority of the product to ensure meat in the most difficult scenario reaches the desired temperature. A difficult scenario can be caused by an especially thick piece of meat or by a surge of product into the process. Overcooking wastes energy, degrades product quality, lowers the maximum throughput rate of the production line and decreases product yield. At typical production rates of 6,000 lbs/hour, these losses from overcooking can have a significant cost impact on producers. A wide-area 3D camera coupled with a thermal camera was used to measure the thermal mass variability of chicken breasts in a cooking process. Several types of variability are considered, including time-varying thermal mass (mass x temperature / time), variation in individual product geometry and variation in product temperature. The automatic identification of product arrangement issues that affect cooking, such as overlapping and folded products, is also addressed. A thermal model is used along with individual product geometry and oven cook profiles to predict the percentage of product that will be overcooked and to identify products that may not fully cook in a given process.
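The "thermal mass" bookkeeping described above can be illustrated with a back-of-envelope calculation that combines per-piece volume from the 3D profiler with surface temperature from the IR camera. All constants and measurements below are rough, hypothetical values.

```python
# Approximate material properties for chicken breast (illustrative assumptions).
rho = 1050.0        # kg/m^3, density
c_p = 3500.0        # J/(kg K), specific heat
T_target = 74.0     # degC, food-safety threshold temperature

# (volume m^3 from the 3D profiler, surface temp degC from the IR camera)
pieces = [(1.6e-4, 6.0), (2.1e-4, 4.5), (1.2e-4, 8.0), (2.9e-4, 5.0)]

for vol, T0 in pieces:
    mass = rho * vol
    heat_needed = mass * c_p * (T_target - T0)   # J to bring the piece to target
    print(f"mass {mass*1000:6.0f} g   heat required {heat_needed/1000:6.1f} kJ")
```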
Does cost-benefit analysis or self-control predict involvement in two forms of aggression?
Archer, John; Fernández-Fuertes, Andrés A; Thanzami, Van Lal
2010-01-01
The main aim of this research was to assess the relative association between physical aggression and (1) self-control and (2) cost-benefit assessment, these variables representing the operation of impulsive and reflective processes. Study 1 involved direct and indirect aggression among young Indian men, and Study 2 physical aggression to dating partners among Spanish adolescents. In Study 1, perceived benefits and costs but not self-control were associated with direct aggression at other men, and the association remained when their close association with indirect aggression was controlled. In Study 2, benefits and self-control showed significant and independent associations (positive for benefits, negative for self-control) with physical aggression at other-sex partners. Although being victimized was also correlated in the same direction with self-control and benefits, perpetration and being victimized were highly correlated, and there was no association between being victimized and these variables when perpetration was controlled. These results support the theory that reflective (cost-benefit analyses) processes and impulsive (self-control) processes operate in parallel in affecting aggression. The finding that male adolescents perceived more costs and fewer benefits from physical aggression to a partner than female adolescents did is consistent with findings indicating greater social disapproval of men hitting women than vice versa, rather than with the view that male violence to women is facilitated by internalized patriarchal values. (c) 2010 Wiley-Liss, Inc.
Learning Multisensory Integration and Coordinate Transformation via Density Estimation
Sabes, Philip N.
2013-01-01
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
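The training rule itself is compact. Below is a minimal numpy sketch of contrastive divergence (CD-1) on a binary restricted Boltzmann machine, with random binary vectors standing in for unisensory population activity; layer sizes, learning rate, and data are arbitrary assumptions, and the model in the paper is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid, lr = 20, 10, 0.1   # visible = pooled "unisensory" units, hidden = "multisensory" layer
W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_v = np.zeros(n_vis)
b_h = np.zeros(n_hid)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

data = (rng.random((500, n_vis)) < 0.3).astype(float)    # stand-in activity patterns

for epoch in range(50):
    v0 = data
    ph0 = sigmoid(v0 @ W + b_h)                          # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_v)                        # one Gibbs step back to visibles
    ph1 = sigmoid(pv1 @ W + b_h)                         # negative phase
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)       # CD-1 gradient estimate
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

print("reconstruction error:", np.mean((data - pv1) ** 2))
```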
Skeletal muscle repair in a mouse model of nemaline myopathy
Sanoudou, Despina; Corbett, Mark A.; Han, Mei; Ghoddusi, Majid; Nguyen, Mai-Anh T.; Vlahovich, Nicole; Hardeman, Edna C.; Beggs, Alan H.
2012-01-01
Nemaline myopathy (NM), the most common non-dystrophic congenital myopathy, is a variably severe neuromuscular disorder for which no effective treatment is available. Although a number of genes have been identified in which mutations can cause NM, the pathogenetic mechanisms leading to the phenotypes are poorly understood. To address this question, we examined gene expression patterns in an NM mouse model carrying the human Met9Arg mutation of alpha-tropomyosin slow (Tpm3). We assessed five different skeletal muscles from affected mice, which are representative of muscles with differing fiber-type compositions, different physiological specializations and variable degrees of pathology. Although these same muscles in non-affected mice showed marked variation in patterns of gene expression, with diaphragm being the most dissimilar, the presence of the mutant protein in nemaline muscles resulted in a more similar pattern of gene expression among the muscles. This result suggests a common process or mechanism operating in nemaline muscles independent of the variable degrees of pathology. Transcriptional and protein expression data indicate the presence of a repair process and possibly delayed maturation in nemaline muscles. Markers indicative of satellite cell number, activated satellite cells and immature fibers including M-Cadherin, MyoD, desmin, Pax7 and Myf6 were elevated by western-blot analysis or immunohistochemistry. Evidence suggesting elevated focal repair was observed in nemaline muscle in electron micrographs. This analysis reveals that NM is characterized by a novel repair feature operating in multiple different muscles. PMID:16877500
The Impact of Pictorial Display on Operator Learning and Performance. M.S. Thesis
NASA Technical Reports Server (NTRS)
Miller, R. A.; Messing, L. J.; Jagacinski, R. J.
1984-01-01
The effects of pictorially displayed information on human learning and performance of a simple control task were investigated. The controlled system was a harmonic oscillator and the system response was displayed to subjects as either an animated pendulum or a horizontally moving dot. Results indicated that the pendulum display did not affect performance scores but did significantly affect the learning processes of individual operators. The subjects with the pendulum display demonstrated more veridical internal models early in the experiment, and the manner in which their internal models were tuned with practice showed increased variability between subjects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandborn, R.H.
1976-01-01
M0200, a computer simulation model, was used to investigate the safeguarding of plutonium dioxide. The computer program operating the model was constructed so that replicate runs could provide data for statistical analysis of the distributions of the randomized variables. The plant model was divided into material balance areas associated with definable unit processes. Indicators of plant operations studied were modified end-of-shift material balances, end-of-blend errors formed by closing material balances between blends, and cumulative sums of the differences between actual and expected performances. (auth)
Mahdavi, Mahdi; Vissers, Jan; Elkhuizen, Sylvia; van Dijk, Mattees; Vanhala, Antero; Karampli, Eleftheria; Faubel, Raquel; Forte, Paul; Coroian, Elena; van de Klundert, Joris
2018-01-01
While health service provisioning for the chronic condition Type 2 Diabetes (T2D) often involves a network of organisations and professionals, most evidence on the relationships between the structures and processes of service provisioning and the outcomes considers single organisations or solo practitioners. Extending Donabedian's Structure-Process-Outcome (SPO) model, we investigate how differences in quality of life, effective coverage of diabetes, and service satisfaction are associated with differences in the structures, processes, and context of T2D services in six regions in Finland, Germany, Greece, the Netherlands, Spain, and the UK. Data collection consisted of: a) systematic modelling of provider networks' structures and processes, and b) a cross-sectional survey of patient-reported outcomes and other information. The survey resulted in data from 1459 T2D patients, during 2011-2012. Stepwise linear regression models were used to identify how the cumulative proportion of variance in quality of life and service satisfaction relates to differences in context, structure and process. The selected context, structure and process variables are based on Donabedian's SPO model, a service quality research instrument (SERVQUAL), and previous organisation- and professional-level evidence. Additional analysis examines the possible bidirectional relation between outcomes and processes. The regression models explain 44% of the variance in service satisfaction, mostly by structure and process variables (such as human resource use and the SERVQUAL dimensions). The models explained 23% of the variance in quality of life between the networks, much of which is related to contextual variables. Our results suggest that effectiveness of A1c control is negatively correlated with process variables such as total hours of care provided per year and cost of services per year. While the selected structure and process variables explain much of the variance in service satisfaction, this is less the case for quality of life. Moreover, it appears that the effect of the clinical outcome A1c control on processes is stronger than the other way around, as poorer control seems to relate to more service use and higher cost. The standardised operational models used in this research prove to form a basis for expanding the network-level evidence base for effective T2D service provisioning.
Operational Assessment of Apollo Lunar Surface Extravehicular Activity
NASA Technical Reports Server (NTRS)
Miller, Matthew James; Claybrook, Austin; Greenlund, Suraj; Marquez, Jessica J.; Feigh, Karen M.
2017-01-01
Quantifying the operational variability of extravehicular activity (EVA) execution is critical to help design and build future support systems to enable astronauts to monitor and manage operations in deep-space, where ground support operators will no longer be able to react instantly and manage execution deviations due to the significant communication latency. This study quantifies the operational variability exhibited during Apollo 14-17 lunar surface EVA operations to better understand the challenges and natural tendencies of timeline execution and life support system performance involved in surface operations. Each EVA (11 in total) is individually summarized as well as aggregated to provide descriptive trends exhibited throughout the Apollo missions. This work extends previous EVA task analyses by calculating deviations between planned and as-performed timelines as well as examining metabolic rate and consumables usage throughout the execution of each EVA. The intent of this work is to convey the natural variability of EVA operations and to provide operational context for coping with the variability inherent to EVA execution as a means to support future concepts of operations.
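The plan-versus-as-performed deviation metric is simple to illustrate. The task names and durations below are hypothetical stand-ins, not Apollo timeline data.

```python
# Hypothetical planned vs. as-performed task durations (minutes) for one EVA timeline.
planned = {"egress": 15, "traverse": 30, "sampling": 45, "ALSEP deploy": 60, "ingress": 20}
actual = {"egress": 18, "traverse": 27, "sampling": 58, "ALSEP deploy": 66, "ingress": 22}

deviations = {task: actual[task] - planned[task] for task in planned}
total_slip = sum(deviations.values())
print(deviations)
print("total timeline slip:", total_slip, "min")
```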
3D facial landmarks: Inter-operator variability of manual annotation
2014-01-01
Background: Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging, influencing the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to e.g. the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method: Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability, using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with minimum point variance. Results: The anatomical landmarks of the eye were associated with the lowest variance, particularly the centers of the pupils, whereas points on the jaw and eyebrows had the highest variation. We saw only marginal variability attributable to intra-operator differences and to the portraits. Using a sparse set of landmarks (n=14) that captures the whole face, the dense point mean variance was reduced from 1.92 to 0.54 mm. Conclusion: The inter-operator variability was primarily associated with particular landmarks, with the more leniently defined landmarks showing the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436
Ochando-Pulido, J M; Hodaifa, G; Victor-Ortega, M D; Rodriguez-Vives, S; Martinez-Ferez, A
2013-12-15
Production of olive oil results in the generation of large amounts of heavily polluted effluents characterized by an extremely variable degree of contamination, leading to considerable complexity in treatment. In this work, batch membrane processes in series comprising ultrafiltration (UF), nanofiltration (NF) and reverse osmosis (RO) are used to purify the effluents exiting both the two-phase and three-phase extraction processes to a grade compatible with discharge into municipal sewer systems in Spain and Italy. However, one main problem in applying this technology to wastewater management is membrane fouling. In recent years, the threshold flux theory was introduced as a key tool for understanding fouling problems, and threshold flux measurement can give valuable information regarding optimal membrane process design and operation. In the present manuscript, a mathematical treatment of threshold flux conditions for membrane operation is addressed, also implementing proper pretreatment processes such as pH-T flocculation and UV/TiO2 photocatalysis with ferromagnetic-core nanoparticles in order to reduce membrane fouling. Both influence the organic matter content as well as the particle size distribution of the solutes remaining in the wastewater stream, leading, when properly applied, to reduced fouling and higher rejection and recovery values, thus enhancing the economic feasibility of the process. Copyright © 2013 Elsevier B.V. All rights reserved.
Real-Time Variable Rate Spraying in Orchards and Vineyards: A Review
NASA Astrophysics Data System (ADS)
Wandkar, Sachin Vilas; Bhatt, Yogesh Chandra; Jain, H. K.; Nalawade, Sachin M.; Pawar, Shashikant G.
2018-06-01
Effective and efficient use of pesticides in orchards has been a concern for many years. With conventional constant-rate sprayers, an equal dose of pesticide is applied to each tree. Since there is great variation in the size and shape of trees in an orchard, trees get either oversprayed or undersprayed. Real-time variable rate spraying technology applies pesticide in accordance with tree size. With suitable sensors, tree characteristics such as canopy volume and foliage density can be acquired, and a micro-processing unit running a proper algorithm can control electronic proportional valves, thus adjusting the flow rate of the nozzles according to tree characteristics. Sensors can also detect the spaces between trees, which allows the spray to be shut off in the gaps. Variable rate spraying helps in achieving precision in spraying operations, especially inside orchards. This paper reviews real-time variable rate spraying technology and the efforts made by various researchers toward real-time variable application in orchards and vineyards.
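The control law implied here is essentially proportional metering: the canopy volume measured for each section sets the valve command, with shut-off in gaps and saturation at the pump limit. A minimal sketch follows; the dose rate, travel speed, and section length are illustrative assumptions.

```python
def nozzle_flow(canopy_volume_m3, dose_l_per_m3=0.09, max_flow_l_min=2.0,
                speed_m_s=1.0, section_len_m=0.5):
    """Litres/min so each canopy section receives a dose proportional to its volume.
    Returns (commanded flow, saturated?); all constants are illustrative."""
    time_over_section = section_len_m / speed_m_s / 60.0   # minutes spent over the section
    required = canopy_volume_m3 * dose_l_per_m3 / time_over_section
    return min(required, max_flow_l_min), required > max_flow_l_min

print(nozzle_flow(0.0))    # gap between trees -> valve shut off
print(nozzle_flow(0.15))   # moderate canopy -> ~1.6 L/min
print(nozzle_flow(0.60))   # dense canopy -> valve saturates at 2.0 L/min
```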
Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.
Deng, Li; Wang, Guohua; Chen, Bo
2015-01-01
In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimensionality: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based model is fast and efficient, has good predictive performance, and can improve design efficiency.
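The dimensionality-reduction step, and the mapping from factors to a comfort score, can be sketched as follows. The linear model at the end is a simple stand-in for the GEP-evolved expression (GEP would instead search for an explicit symbolic formula), and all data here are random placeholders rather than the 22 evaluated postures.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
angles = rng.uniform(0, 120, size=(22, 16))   # 22 evaluated postures x 16 joint angles
comfort = rng.uniform(1, 10, size=22)         # hypothetical comfort ratings

fa = FactorAnalysis(n_components=4, random_state=0)
factors = fa.fit_transform(angles)            # 16 joint angles -> 4 comfort impact factors

# Stand-in for the GEP step: any regressor mapping the 4 factors to comfort.
model = LinearRegression().fit(factors, comfort)
print("R^2 on training postures:", round(model.score(factors, comfort), 3))
```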
Prediction of wastewater treatment plants performance based on artificial fish school neural network
NASA Astrophysics Data System (ADS)
Zhang, Ruicheng; Li, Chong
2011-10-01
A reliable model of a wastewater treatment plant is essential for predicting its performance and forms a basis for controlling the operation of the process, minimizing operating costs and assessing the stability of the environmental balance. Given the multi-variable, uncertain, and non-linear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operating data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. Model calculations show that the predicted values match the measured values well, demonstrating the model's ability to simulate and predict plant behavior and to help optimize operating status. The prediction model provides a simple and practical tool for operation and management in wastewater treatment plants, and has good research and engineering value.
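Artificial fish school algorithms vary in their details across papers; the sketch below implements only simplified prey and follow behaviours to tune the weights of a tiny feedforward network on synthetic plant data. Hyperparameters, network size, and data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy plant data: 3 inputs (e.g., influent COD, flow, temperature) -> effluent COD.
X = rng.uniform(-1, 1, size=(80, 3))
y = (0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 0] * X[:, 2])[:, None]

def net(w, X):
    """2-layer perceptron whose 26 weights are packed into one flat vector."""
    W1, b1 = w[:15].reshape(3, 5), w[15:20]
    W2, b2 = w[20:25].reshape(5, 1), w[25]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def food(w):
    """'Food concentration' of a fish = negative training error of its weights."""
    return -np.mean((net(w, X) - y) ** 2)

n_fish, dim, visual, step = 30, 26, 1.0, 0.3
school = rng.normal(scale=0.5, size=(n_fish, dim))
for it in range(200):
    scores = np.array([food(w) for w in school])
    best = school[scores.argmax()].copy()
    for i in range(n_fish):
        trial = school[i] + visual * rng.normal(size=dim)   # prey: local random trial
        if food(trial) > scores[i]:
            school[i] += step * (trial - school[i])
        else:                                               # follow: move toward the best fish
            school[i] += step * (best - school[i]) + 0.05 * rng.normal(size=dim)

print("best training MSE:", round(-max(food(w) for w in school), 5))
```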
Switching and optimizing control for coal flotation process based on a hybrid model
Dong, Zhiyong; Wang, Ranfeng; Fan, Minqiang; Fu, Xiang
2017-01-01
Flotation is an important part of coal preparation, and the flotation column is widely applied as efficient flotation equipment. This process is complex and affected by many factors, with the froth depth and reagent dosage being two of the most important and frequently manipulated variables. This paper proposes a new method of switching and optimizing control for the coal flotation process. A hybrid model is built and evaluated using industrial data. First, wavelet analysis and principal component analysis (PCA) are applied for signal pre-processing. Second, a control model for optimizing the set point of the froth depth is constructed based on fuzzy control, and a control model is designed to optimize the reagent dosages based on an expert system. Finally, the least squares support vector machine (LS-SVM) is used to identify the operating conditions of the flotation process and to select one of the two models (froth depth or reagent dosage) for subsequent operation according to the condition parameters. The hybrid model is developed and evaluated on an industrial coal flotation column and exhibits satisfactory performance. PMID:29040305
How Distinctive Processing Enhances Hits and Reduces False Alarms
Hunt, R. Reed; Smith, Rebekah E.
2015-01-01
Distinctive processing is a concept designed to account for precision in memory, both correct responses and avoidance of errors. The principal question addressed in two experiments is how distinctive processing of studied material reduces false alarms to familiar distractors. Jacoby (Jacoby, Kelley, & McElree, 1999) has used the metaphors early selection and late correction to describe two different types of control processes. Early selection refers to limitations on access whereas late correction describes controlled monitoring of accessed information. The two types of processes are not mutually exclusive, and previous research has provided evidence for the operation of both. The data reported here extend previous work to a criterial recollection paradigm and to a recognition memory test. The results of both experiments show that variables that reduce false memory for highly familiar distractors continue to exert their effect under conditions of minimal post-access monitoring. Level of monitoring was reduced in the first experiment through test instructions and in the second experiment through speeded test responding. The results were consistent with the conclusion that both early selection and late correction operate to control accuracy in memory. PMID:26034343
Understanding and Controlling Sialylation in a CHO Fc-Fusion Process
Lewis, Amanda M.; Croughan, William D.; Aranibar, Nelly; Lee, Alison G.; Warrack, Bethanne; Abu-Absi, Nicholas R.; Patel, Rutva; Drew, Barry; Borys, Michael C.; Reily, Michael D.; Li, Zheng Jian
2016-01-01
A Chinese hamster ovary (CHO) bioprocess, where the product is a sialylated Fc-fusion protein, was operated at pilot and manufacturing scale and significant variation of sialylation level was observed. In order to more tightly control glycosylation profiles, we sought to identify the cause of variability. Untargeted metabolomics and transcriptomics methods were applied to select samples from the large scale runs. Lower sialylation was correlated with elevated mannose levels, a shift in glucose metabolism, and increased oxidative stress response. Using a 5-L scale model operated with a reduced dissolved oxygen set point, we were able to reproduce the phenotypic profiles observed at manufacturing scale including lower sialylation, higher lactate and lower ammonia levels. Targeted transcriptomics and metabolomics confirmed that reduced oxygen levels resulted in increased mannose levels, a shift towards glycolysis, and increased oxidative stress response similar to the manufacturing scale. Finally, we propose a biological mechanism linking large scale operation and sialylation variation. Oxidative stress results from gas transfer limitations at large scale and the presence of oxygen dead-zones inducing upregulation of glycolysis and mannose biosynthesis, and downregulation of hexosamine biosynthesis and acetyl-CoA formation. The lower flux through the hexosamine pathway and reduced intracellular pools of acetyl-CoA led to reduced formation of N-acetylglucosamine and N-acetylneuraminic acid, both key building blocks of N-glycan structures. This study reports for the first time a link between oxidative stress and mammalian protein sialylation. In this study, process, analytical, metabolomic, and transcriptomic data at manufacturing, pilot, and laboratory scales were taken together to develop a systems level understanding of the process and identify oxygen limitation as the root cause of glycosylation variability. PMID:27310468
Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities
NASA Technical Reports Server (NTRS)
Jones, Thomas W.; Lunsford, Charles B.
2005-01-01
A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.
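The point-tracking-to-attitude step can be sketched with the standard orthogonal Procrustes (Kabsch) solution: find the least-squares rotation between model-fixed target coordinates and their observed positions, then unpack Euler angles. The sketch below, including the ZYX angle convention and the synthetic eight-target geometry, is an assumption for illustration and not necessarily the system's actual algorithm.

```python
import numpy as np

def attitude_from_points(ref, obs):
    """Least-squares rotation taking model-fixed target points to their observed
    positions (orthogonal Procrustes / Kabsch), unpacked as ZYX Euler angles."""
    A = ref - ref.mean(axis=0)
    B = obs - obs.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # guards against a reflection
    pitch = np.degrees(np.arcsin(-R[2, 0]))
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return pitch, yaw, roll

# Self-check with a known attitude (10 deg yaw, 5 deg pitch, -3 deg roll).
yaw, pitch, roll = np.radians([10.0, 5.0, -3.0])
Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0], [np.sin(yaw), np.cos(yaw), 0], [0, 0, 1]])
Ry = np.array([[np.cos(pitch), 0, np.sin(pitch)], [0, 1, 0], [-np.sin(pitch), 0, np.cos(pitch)]])
Rx = np.array([[1, 0, 0], [0, np.cos(roll), -np.sin(roll)], [0, np.sin(roll), np.cos(roll)]])
rng = np.random.default_rng(2)
ref = rng.normal(size=(8, 3))                      # eight tracked targets on the model
obs = ref @ (Rz @ Ry @ Rx).T + 1e-4 * rng.normal(size=(8, 3))
print(attitude_from_points(ref, obs))              # ~ (5.0, 10.0, -3.0)
```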
Dynamic array processing for computationally intensive expert systems in CLIPS
NASA Technical Reports Server (NTRS)
Athavale, N. N.; Ragade, R. K.; Fenske, T. E.; Cassaro, M. A.
1990-01-01
This paper puts forth an architecture for implementing loops over advanced array data structures in CLIPS. An attempt is made to use multi-field variables in such an architecture to process a set of data during the decision-making cycle. Current limitations of expert system shells are also briefly discussed. The resulting architecture is designed to circumvent the limitations set by the expert system shell and by the operating environment. Such advanced data structures are needed for tightly coupling symbolic and numeric computation modules.
NASA Goddard Space Flight Center Robotic Processing System Program Automation Systems, volume 2
NASA Technical Reports Server (NTRS)
Dobbs, M. E.
1991-01-01
Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form. Some of the areas covered include: (1) mission requirements; (2) automation management system; (3) Space Transportation System (STS) Hitchhiker Payload; (4) Spacecraft Command Language (SCL) scripts; (5) SCL software components; (6) RoMPS EasyLab Command & Variable summary for rack stations and annealer module; (7) support electronics assembly; (8) SCL uplink packet definition; (9) SC-4 EasyLab System Memory Map; (10) Servo Axis Control Logic Suppliers; and (11) annealing oven control subsystem.
Role of optical computers in aeronautical control applications
NASA Technical Reports Server (NTRS)
Baumbick, R. J.
1981-01-01
The role that optical computers can play in aircraft control is examined. The optical computer has the potential high-speed capability required, especially for matrix-matrix operations, as well as the potential for handling nonlinear simulations in real time. Optical computers are also more compatible with fiber-optic signal transmission. Optics further permit the use of passive sensors to measure process variables, so that no electrical energy need be supplied to the sensor. Complex interfacing between optical sensors and the optical computer is avoided if the optical sensor outputs can be directly processed by the optical computer.
An Integrated Assessment of Location-Dependent Scaling for Microalgae Biofuel Production Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Andre M.; Abodeely, Jared; Skaggs, Richard
Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting/design through processing/upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are addressed in part by applying the Integrated Assessment Framework (IAF)—an integrated multi-scale modeling, analysis, and data management suite—to address key issues in developing and operating an open-pond facility by analyzing how variability and uncertainty in space and time affect algal feedstock production rates, and determining the site-specific "optimum" facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. The IAF was applied to a set of sites previously identified as having the potential to cumulatively produce 5 billion gallons/year in the southeastern U.S., and results indicate costs can be reduced by selecting the most effective processing technology pathway and scaling downstream processing capabilities to fit site-specific growing conditions, available resources, and algal strains.
NASA Astrophysics Data System (ADS)
Bardant, Teuku Beuna; Dahnum, Deliana; Amaliyah, Nur
2017-11-01
Simultaneous Saccharification and Fermentation (SSF) of palm oil (Elaeis guineensis) empty fruit bunch (EFB) pulp was investigated as part of an ethanol production process. SSF was studied by observing the effects of substrate loading in the range 10-20% w/w, cellulase loading of 5-30 FPU/g substrate, and yeast addition of 1-2% v/v on the ethanol yield. A mathematical model describing the effects of these three variables on the ethanol yield was developed using Response Surface Methodology-Cheminformatics (RSM-CI). The model gave acceptable accuracy in predicting ethanol yield for SSF, with a coefficient of determination (R2) of 0.8899. Model validation based on data from a previous study gave an R2 of 0.7942, which was acceptable for using the model in trend prediction analysis. Trend analysis of the predicted yields showed that SSF favors higher yields when the process is operated at high enzyme concentration and low substrate concentration. On the other hand, even though the SHF model showed that a better yield would be obtained at lower substrate concentration, it is still possible to operate at higher substrate concentration with a slightly lower yield. The opportunity provided by SHF to operate at high substrate loading makes it the preferable option for application at commercial scale.
Sliwinski, Martin J.; Almeida, David M.; Smyth, Joshua; Stawski, Robert S.
2010-01-01
There is little longitudinal information on aging-related changes in emotional responses to negative events. The present manuscript examined intraindividual change and variability in the within-person coupling of daily stress and negative affect (NA), using data from two measurement-burst daily diary studies. Three main findings emerged. First, average reactivity to daily stress increased longitudinally, and this increase was evident across most of the adult lifespan. Second, individual differences in emotional reactivity to daily stress exhibited long-term temporal stability, but this stability was greatest in midlife and decreased in old age. Third, reactivity to daily stress varied reliably within persons (across time), with individuals exhibiting higher levels of reactivity during periods when they reported high levels of global subjective stress in the previous month. Taken together, these results emphasize the importance of modeling dynamic psychosocial and aging processes that operate across different time scales for understanding age-related changes in daily stress processes. PMID:20025399
Spindler, A
2014-06-15
Although data reconciliation is intensely applied in process engineering, almost none of its powerful methods are employed for validation of operational data from wastewater treatment plants. This is partly due to some prerequisites that are difficult to meet including steady state, known variances of process variables and absence of gross errors. However, an algorithm can be derived from the classical approaches to data reconciliation that allows to find a comprehensive set of equations describing redundancy in the data when measured and unmeasured variables (flows and concentrations) are defined. This is a precondition for methods of data validation based on individual mass balances such as CUSUM charts. The procedure can also be applied to verify the necessity of existing or additional measurements with respect to the improvement of the data's redundancy. Results are given for a large wastewater treatment plant. The introduction aims at establishing a link between methods known from data reconciliation in process engineering and their application in wastewater treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.
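The downstream use the author points to, CUSUM monitoring of an individual mass balance, can be sketched briefly. Below, a redundancy equation (influent = effluent + waste) yields a residual that a two-sided tabular CUSUM screens for gross errors; the flows, the injected meter drift, and the k and h settings are illustrative assumptions.

```python
import numpy as np

# Illustrative daily flow balance around a treatment unit: influent = effluent + waste.
rng = np.random.default_rng(4)
days = 200
influent = 1000 + rng.normal(scale=20, size=days)
effluent = influent * 0.97 + rng.normal(scale=15, size=days)
waste = influent * 0.03 + rng.normal(scale=5, size=days)
waste[120:] += 25    # hypothetical gross error (drifting flow meter) from day 120

residual = influent - effluent - waste   # redundancy equation; ~0 when data are consistent
z = (residual - residual[:100].mean()) / residual[:100].std()   # standardize on a clean window

# Two-sided tabular CUSUM with reference value k and decision limit h (in sigmas).
k, h = 0.5, 5.0
cusum_hi = cusum_lo = 0.0
for day, zi in enumerate(z):
    cusum_hi = max(0.0, cusum_hi + zi - k)
    cusum_lo = min(0.0, cusum_lo + zi + k)
    if cusum_hi > h or cusum_lo < -h:
        print("balance violation flagged on day", day)
        break
```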
Zhu, Tong; Moussa, Ehab M; Witting, Madeleine; Zhou, Deliang; Sinha, Kushal; Hirth, Mario; Gastens, Martin; Shang, Sherwin; Nere, Nandkishor; Somashekar, Shubha Chetan; Alexeenko, Alina; Jameel, Feroz
2018-07-01
Scale-up and technology transfer of lyophilization processes remains a challenge that requires thorough characterization of the laboratory and larger scale lyophilizers. In this study, computational fluid dynamics (CFD) was employed to develop computer-based models of both laboratory and manufacturing scale lyophilizers in order to understand the differences in equipment performance arising from distinct designs. CFD coupled with steady state heat and mass transfer modeling of the vial were then utilized to study and predict independent variables such as shelf temperature and chamber pressure, and response variables such as product resistance, product temperature and primary drying time for a given formulation. The models were then verified experimentally for the different lyophilizers. Additionally, the models were applied to create and evaluate a design space for a lyophilized product in order to provide justification for the flexibility to operate within a certain range of process parameters without the need for validation. Published by Elsevier B.V.
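The steady-state vial model mentioned above couples heat input through the vial to the heat removed by sublimation. A minimal sketch under common simplifying assumptions (equal heat-transfer and sublimation areas; illustrative values for Kv, Rp, shelf temperature, and chamber pressure, not the study's parameters):

```python
# Sketch: steady-state coupled heat/mass transfer for primary drying.
# All parameter values are illustrative placeholders.
import numpy as np
from scipy.optimize import brentq

Kv  = 20.0     # vial heat-transfer coefficient, W/(m^2 K)
Rp  = 1.0e5    # product resistance to vapor flow, Pa m^2 s / kg
Ts  = 263.0    # shelf temperature, K
Pc  = 10.0     # chamber pressure, Pa
dHs = 2.8e6    # heat of sublimation, J/kg

def p_ice(T):
    """Vapor pressure of ice (Pa), standard correlation."""
    return np.exp(28.911 - 6144.96 / T)

def energy_balance(Tp):
    # heat in through the vial = heat removed by sublimation (per m^2,
    # treating heat-transfer and sublimation areas as equal for simplicity)
    return Kv * (Ts - Tp) - dHs * (p_ice(Tp) - Pc) / Rp

Tp = brentq(energy_balance, 210.0, 272.0)       # product temperature, K
flux = (p_ice(Tp) - Pc) / Rp                    # sublimation flux, kg/(m^2 s)
print(f"product temperature {Tp:.1f} K, flux {flux * 3600:.4f} kg/m2/h")
```

Sweeping Ts and Pc through such a model is one way to trace out a design space of the kind the abstract describes, with primary drying time following from the flux and the ice mass per vial.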
An application of Six Sigma methodology to turnover intentions in health care.
Taner, Mehmet
2009-01-01
The purpose of this study is to show how the principles of Six Sigma can be applied to the high turnover problem of doctors in medical emergency services and paramedic backup. Six Sigma's define-measure-analyse-improve-control (DMAIC) cycle was applied to reduce the turnover rate of doctors in an organisation operating in emergency services. Variables of the model were determined. Exploratory factor analysis, multiple regression, analysis of variance (ANOVA) and Gage R&R were employed for the analysis. Personal burnout/stress and dissatisfaction with salary were found to be the "vital few" variables. The organisation took a new approach by improving doctors' working conditions. The sigma level of the process increased, and the new policy and process changes were found to effectively decrease the incidence of turnover intentions. The improved process was standardised and institutionalised. This study is one of the few papers in the literature that elaborates on the turnover problem of doctors working in emergency and paramedic backup services.
Concurrently adjusting interrelated control parameters to achieve optimal engine performance
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-12-01
Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
Cascade process modeling with mechanism-based hierarchical neural networks.
Cong, Qiumei; Yu, Wen; Chai, Tianyou
2010-02-01
Cascade processes, such as wastewater treatment plants, include many nonlinear sub-systems and many variables. When the number of sub-systems is large, the input-output relation between the first block and the last block cannot represent the whole process. In this paper we use two techniques to overcome this problem. First, we propose a new neural model, hierarchical neural networks, to identify the cascade process; then we use a serial structural mechanism model based on the physical equations to connect with the neural model. A stable learning algorithm and theoretical analysis are given. Finally, this method is used to model a wastewater treatment plant. Real operational data from the plant are applied to illustrate the modeling approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amoroso, J.; Peeler, D.; Edwards, T.
2012-05-11
A recommendation to eliminate all characterization of pour stream glass samples and the glass fabrication and Product Consistency Test (PCT) of the sludge batch qualification sample was made by a Six-Sigma team chartered to eliminate non-value-added activities for the Defense Waste Processing Facility (DWPF) sludge batch qualification program and is documented in the report SS-PIP-2006-00030. That recommendation was supported through a technical data review by the Savannah River National Laboratory (SRNL) and is documented in the memorandums SRNL-PSE-2007-00079 and SRNL-PSE-2007-00080. At the time those memorandums were written, the DWPF was processing sludge-only waste but has since transitioned to a coupled operation (sludge and salt). The SRNL was recently tasked to perform a similar data review relevant to coupled operations and re-evaluate the previous recommendations. This report evaluates the validity of eliminating the characterization of pour stream glass samples and the glass fabrication and Product Consistency Test (PCT) of the sludge batch qualification samples based on sludge-only and coupled operations. The pour stream sample has confirmed the DWPF's ability to produce an acceptable waste form from Slurry Mix Evaporator (SME) blending and product composition/durability predictions for the previous sixteen years, but ultimately the pour stream analysis has added minimal value to the DWPF's waste qualification strategy. Similarly, the information gained from the glass fabrication and PCT of the sludge batch qualification sample was determined to add minimal value to the waste qualification strategy, since that sample is routinely not representative of the waste composition ultimately processed at the DWPF due to blending and salt processing considerations. Moreover, the qualification process has repeatedly confirmed minimal differences in glass behavior between actual radioactive waste and glasses fabricated from simulants or batch chemicals. In contrast, the variability study has significantly added value to the DWPF's qualification strategy. The variability study has evolved to become the primary aspect of the DWPF's compliance strategy, as it has been shown to be versatile and capable of adapting to the DWPF's various and diverse waste streams and blending strategies. The variability study, which aims to ensure that durability requirements and the PCT and chemical composition correlations are valid for the compositional region to be processed at the DWPF, must continue to be performed. Due to the importance of the variability study and its place in the DWPF's qualification strategy, it is also discussed in this report. An analysis of historical data and Production Records indicated that the Six Sigma team's recommendation to eliminate all characterization of pour stream glass samples and the glass fabrication and PCT performed with the qualification glass does not compromise the DWPF's current compliance plan. Furthermore, the DWPF should continue to produce an acceptable waste form following the remaining elements of the Glass Product Control Program, regardless of a sludge-only or coupled operations strategy. If the DWPF does decide to eliminate the characterization of pour stream samples, pour stream samples should continue to be collected for archival reasons, which would allow testing to be performed should any issues arise or new repository test methods be developed.
Guieysse, Benoit; Norvill, Zane N
2014-02-28
When direct wastewater biological treatment is unfeasible, a cost- and resource-efficient alternative to direct chemical treatment consists of combining biological treatment with a chemical pre-treatment aiming to convert the hazardous pollutants into more biodegradable compounds. Whereas the principles and advantages of sequential treatment have been demonstrated for a broad range of pollutants and process configurations, recent progress (2011-present) in the field provides the basis for refining assessments of feasibility, costs, and environmental impacts. This paper thus reviews recent real-wastewater demonstrations at pilot and full scale as well as new process configurations. It also discusses new insights on the potential impacts of microbial community dynamics on process feasibility, design and operation. Finally, it sheds light on a critical issue that has not yet been properly addressed in the field: integration requires complex and tailored optimization and, of paramount importance to full-scale application, is sensitive to uncertainty and variability in the inputs used for process design and operation. Future research is therefore critically needed to improve process control and better assess the real potential of sequential chemical-biological processes for industrial wastewater treatment. Copyright © 2013 Elsevier B.V. All rights reserved.
Distillation Column Flooding Predictor
DOE Office of Scientific and Technical Information (OSTI.GOV)
George E. Dzyacky
2010-11-23
The Flooding Predictor™ is a patented advanced control technology proven in research at the Separations Research Program, University of Texas at Austin, to increase distillation column throughput by over 6%, while also increasing energy efficiency by 10%. The research was conducted under a U.S. Department of Energy Cooperative Agreement awarded to George Dzyacky of 2ndpoint, LLC. The Flooding Predictor™ works by detecting the incipient flood point and controlling the column closer to its actual hydraulic limit than historical practices have allowed. Further, the technology uses existing column instrumentation, meaning no additional refining infrastructure is required. Refiners often push distillation columns to maximize throughput, improve separation, or simply to achieve day-to-day optimization. Attempting to achieve such operating objectives is a tricky undertaking that can result in flooding. Operators and advanced control strategies alike rely on the conventional use of delta-pressure instrumentation to approximate the column's approach to flood. But column delta-pressure is more an inference of the column's approach to flood than an actual measurement of it. As a consequence, delta-pressure limits are established conservatively in order to operate in a regime where the column is never expected to flood. As a result, much is "left on the table" when operating in such a regime, i.e., the capacity difference between controlling the column to an upper delta-pressure limit and controlling it to the actual hydraulic limit. The Flooding Predictor™, an innovative pattern recognition technology, controls columns at their actual hydraulic limit, which research shows leads to a throughput increase of over 6%. Controlling closer to the hydraulic limit also permits operation in a sweet spot of increased energy efficiency. In this region of increased column loading, the Flooding Predictor™ is able to exploit the benefits of higher liquid/vapor traffic that produce increased contact area and lead to substantial increases in separation efficiency, which translates to a 10% increase in energy efficiency on a BTU/bbl basis. The Flooding Predictor™ operates on the principle that from five to sixty minutes in advance of a flooding event, certain column variables experience an oscillation, a pre-flood pattern. The pattern recognition system of the Flooding Predictor™ utilizes the mathematical first derivative of certain column variables to identify the column's pre-flood pattern(s). This pattern is a very brief, highly repeatable, simultaneous movement among the derivative values of certain column variables. While all column variables experience negligible random noise generated from the natural frequency of the process, subtle pre-flood patterns are revealed among sub-sets of the derivative values of column variables as the column approaches its hydraulic limit. The sub-set of column variables that comprise the pre-flood pattern is identified empirically in a two-step process. First, 2ndpoint's proprietary off-line analysis tool is used to mine historical data for pre-flood patterns. Second, the column is flood-tested to fine-tune the pattern recognition for commissioning. Then the Flooding Predictor™ is implemented as a closed-loop advanced control strategy on the plant's distributed control system (DCS), thus automating control of the column at its hydraulic limit.
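The derivative-based pattern idea can be sketched as follows; the variable names, thresholds, and synthetic signals are hypothetical, since the actual pre-flood patterns are mined from plant data with proprietary tools.

```python
# Sketch: flag a pre-flood pattern as a simultaneous excursion in the
# first derivatives of several column variables. Thresholds and the
# choice of variables are hypothetical.
import numpy as np

def preflood_alarm(signals, dt, thresholds):
    """signals: dict name -> 1-D array of equally spaced samples.
    Returns a boolean array, True where every variable's derivative
    magnitude exceeds its threshold at the same sample."""
    flags = []
    for name, x in signals.items():
        dxdt = np.gradient(x, dt)
        flags.append(np.abs(dxdt) > thresholds[name])
    return np.logical_and.reduce(flags)

t = np.arange(0, 600, 5.0)                     # 5-s samples, synthetic
dp   = 20 + 0.002 * t + 0.5 * np.sin(t / 30)   # delta-pressure proxy
temp = 90 + 0.30 * np.sin(t / 25)              # tray temperature proxy
alarm = preflood_alarm({"dp": dp, "temp": temp}, 5.0,
                       {"dp": 0.01, "temp": 0.01})
print("samples in alarm:", int(alarm.sum()))
```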
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pennock, Kenneth; Makarov, Yuri V.; Rajagopal, Sankaran
The need for proactive closed-loop integration of uncertainty information into system operations and probability-based controls is widely recognized, but rarely implemented in system operations. Proactive integration for this project means that information concerning expected uncertainty ranges for net load and balancing requirements, including required balancing capacity, ramping and ramp duration characteristics, is fed back into the generation commitment and dispatch algorithms to modify their performance so that potential shortages of these characteristics can be prevented. This basic, yet important, premise is the motivating factor for this project. The achieved project goal is to demonstrate the benefit of such a system. The project quantifies future uncertainties, predicts additional system balancing needs including the prediction intervals for capacity and ramping requirements of future dispatch intervals, evaluates the impacts of uncertainties on transmission including the risk of overloads and voltage problems, and explores opportunities for intra-hour generation adjustments to help provide more flexibility for system operators. The resulting benefits culminate in more reliable grid operation in the face of increased system uncertainty and variability caused by solar power. The project identifies that solar power does not require special separate penetration-level restrictions or penalization for its intermittency. Ultimately, the collective consideration of all sources of intermittency distributed over a wide area, unified with the comprehensive evaluation of the various elements of the balancing process, i.e., capacity, ramping, and energy requirements, helps system operators more robustly and effectively balance generation against load and interchange. This project showed that doing so can facilitate more solar and other renewable resources on the grid without compromising reliability and control performance. Efforts during the project included developing and integrating advanced probabilistic solar forecasts, including distributed PV forecasts, into closed-loop decision-making processes. Additionally, new uncertainty quantification methods and tools for the direct integration of uncertainty and variability information into grid operations at the transmission and distribution levels were developed and tested. During Phase 1, project work focused heavily on the design, development and demonstration of a set of processes and tools that could reliably and efficiently incorporate solar power into California's grid operations. In Phase 2, connectivity between the ramping analysis tools and market applications software was completed, multiple dispatch scenarios demonstrated a successful reduction of overall uncertainty, an analysis quantified increases in system operator reliability, and the transmission and distribution system uncertainty prediction tool was introduced to system operation engineers in a live webinar. The project met its goals: the experiments showed that the advances to methods and tools, when working together, benefit not only the California Independent System Operator but are transferable to other system operators in the United States.
Statistical modeling of SRAM yield performance and circuit variability
NASA Astrophysics Data System (ADS)
Cheng, Qi; Chen, Yijian
2015-03-01
In this paper, we develop statistical models to investigate SRAM yield performance and circuit variability in the presence of a self-aligned multiple patterning (SAMP) process. It is assumed that SRAM fins are fabricated by a positive-tone (spacer-is-line) self-aligned sextuple patterning (SASP) process which accommodates two types of spacers, while gates are fabricated by a more pitch-relaxed self-aligned quadruple patterning (SAQP) process which only allows one type of spacer. A number of possible inverter and SRAM structures are identified and the related circuit multi-modality is studied using the developed failure-probability and yield models. It is shown that SRAM circuit yield is significantly impacted by the multi-modality of fins' spatial variations in a SRAM cell. The sensitivity of 6-transistor SRAM read/write failure probability to SASP process variations is calculated and the specific circuit type with the highest probability of failing in the reading/writing operation is identified. Our study suggests that the 6-transistor SRAM configuration may not be scalable to 7-nm half pitch and that more robust SRAM circuit designs need to be researched.
Extrusion-spheronization: process variables and characterization.
Sinha, V R; Agrawal, M K; Agarwal, A; Singh, G; Ghai, D
2009-01-01
Multiparticulate systems have undergone great development in the past decade fueled by the better understanding of their multiple roles as a suitable delivery system. With the passage of time, significant advances have been made in the process of pelletization due to the incorporation of specialized techniques for their development. Extrusion-spheronization seems to be the most promising process for the optimum delivery of many potent drugs having high systemic toxicity. It also offers immense pharmaceutical applicability due to the benefits of high loading capacity of active ingredient(s), narrow size distribution, and cost-effectiveness. On application of a specific coat, these systems can also aid in site-specific delivery, thereby enhancing the bioavailability of many drugs. The current review focuses on the process of extrusion-spheronization and the operational (extruder types, screen pressure, screw speed, temperature, moisture content, spheronization load, speed and time) and formulation (excipients and drugs) variables, which may affect the quality of the final pellets. Various methods for the evaluation of the quality of the pellets with regard to the size distribution, shape, friability, granule strength, density, porosity, flow properties, and surface texture are discussed.
Method and apparatus for executing an asynchronous clutch-to-clutch shift in a hybrid transmission
Demirovic, Besim; Gupta, Pinaki; Kaminsky, Lawrence A.; Naqvi, Ali K.; Heap, Anthony H.; Sah, Jy-Jen F.
2014-08-12
A hybrid transmission includes first and second electric machines. A method for operating the hybrid transmission in response to a command to execute a shift from an initial continuously variable mode to a target continuously variable mode includes increasing torque of an oncoming clutch associated with operating in the target continuously variable mode and correspondingly decreasing a torque of an off-going clutch associated with operating in the initial continuously variable mode. Upon deactivation of the off-going clutch, torque outputs of the first and second electric machines and the torque of the oncoming clutch are controlled to synchronize the oncoming clutch. Upon synchronization of the oncoming clutch, the torque for the oncoming clutch is increased and the transmission is operated in the target continuously variable mode.
NASA Astrophysics Data System (ADS)
Shadgan, Babak; Molavi, Behnam; Reid, W. D.; Dumont, Guy; Macnab, Andrew J.
2010-02-01
Background: Medical and diagnostic applications of near infrared spectroscopy (NIRS) are increasing, especially in operating rooms (OR). Since NIRS is an optical technique, radio frequency (RF) interference from other instruments is unlikely to affect the raw optical data; however, NIRS data processing and signal output could be affected. Methods: We investigated the potential for three common OR instruments (an electrical cautery, an orthopaedic drill and an imaging system) to generate electromagnetic interference (EMI) that could influence NIRS signals. The time of onset and duration of every operation of each device was recorded during surgery. To remove the effects of slow-changing physiological variables, we first used a lowpass filter and then selected 2 windows of variable length around the moment of device onset. For each instance, the variances (energy) and means of the signals in the 2 windows were compared. Results: Twenty patients were studied during ankle surgery. Analysis shows no statistically significant difference in the means and variances of the NIRS signals (p < 0.01) during operation of any of the three devices for all surgeries. Conclusion: This method confirms the instruments evaluated caused no significant interference. NIRS can potentially be used without EMI concerns in clinical environments such as the OR.
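The windowed mean/variance comparison described in the Methods can be sketched as below; the sampling rate, filter cutoff, window length, and synthetic signal are illustrative assumptions, not the study's settings.

```python
# Sketch: compare signal mean/variance in windows before and after a
# device switch-on, after removing the slow (physiological) trend.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import f as f_dist, ttest_ind

fs = 10.0                                  # assumed NIRS sampling rate, Hz
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 0.01, 1200)) + rng.normal(0, 0.05, 1200)

b, a = butter(2, 0.5 / (fs / 2), btype="low")
trend = filtfilt(b, a, x)                  # slow-changing component
resid = x - trend                          # fast component around onset

onset = 600                                # sample index of device onset
w = 100                                    # window length (samples)
pre, post = resid[onset - w:onset], resid[onset:onset + w]
F = post.var(ddof=1) / pre.var(ddof=1)     # variance ("energy") ratio
p_var = 2 * min(f_dist.cdf(F, w - 1, w - 1), f_dist.sf(F, w - 1, w - 1))
_, p_mean = ttest_ind(pre, post, equal_var=False)
print(f"variance-ratio p={p_var:.3f}, mean-difference p={p_mean:.3f}")
```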
Thermal anomalies of the transmitter experiment package on the communications technology satellite
NASA Technical Reports Server (NTRS)
Alexovich, R. E.; Curren, A. N.
1979-01-01
The causes of four temporary thermal-control-system malfunctions that gave rise to unexpected temperature excursions in the 12-gigahertz, 200-watt transmitter experiment package (TEP) on the Communications Technology Satellite were investigated. The TEP consists of a nominal 200-watt output stage tube (OST), a supporting power-processing system (PPS), and a variable-conductance heat-pipe system (VCHPS). The VCHPS, which uses three heat pipes to conduct heat from the body of the OST to a radiator fin, was designed to maintain the TEP at safe operating temperatures at all operating conditions. On four occasions during 1977, all near the spring and fall equinoxes, the OST body temperature and related temperatures displayed sudden, rapid, and unexpected rises above normal levels while the TEP was operating at essentially constant, normal conditions. The temperature excursions were terminated without TEP damage by reducing the radio frequency (RF) output power of the OST. Between the anomalies and since the fourth, the thermal control system has apparently functioned as designed. The results indicate the most probable cause of the temperature anomalies is depriming of the arteries in the variable-conductance heat pipes. A mode was identified in which the TEP, as presently configured, may operate with stable temperatures and with minimum change in performance level.
An empirical comparison of key statistical attributes among potential ICU quality indicators.
Brown, Sydney E S; Ratcliffe, Sarah J; Halpern, Scott D
2014-08-01
Good quality indicators should have face validity, relevance to patients, and be able to be measured reliably. Beyond these general requirements, good quality indicators should also have certain statistical properties, including sufficient variability to identify poor performers, relative insensitivity to severity adjustment, and the ability to capture what providers do rather than patients' characteristics. We assessed the performance of candidate indicators of ICU quality on these criteria. Indicators included ICU readmission, mortality, several length of stay outcomes, and the processes of venous-thromboembolism and stress ulcer prophylaxis provision. Retrospective cohort study. One hundred thirty-eight U.S. ICUs from 2001-2008 in the Project IMPACT database. Two hundred sixty-eight thousand eight hundred twenty-four patients discharged from U.S. ICUs. None. We assessed indicators' (1) variability across ICU-years; (2) degree of influence by patient vs. ICU and hospital characteristics using the Omega statistic; (3) sensitivity to severity adjustment by comparing the area under the receiver operating characteristic curve (AUC) between models including vs. excluding patient variables, and (4) correlation between risk adjusted quality indicators using a Spearman correlation. Large ranges of among-ICU variability were noted for all quality indicators, particularly for prolonged length of stay (4.7-71.3%) and the proportion of patients discharged home (30.6-82.0%), and ICU and hospital characteristics outweighed patient characteristics for stress ulcer prophylaxis (ω, 0.43; 95% CI, 0.34-0.54), venous thromboembolism prophylaxis (ω, 0.57; 95% CI, 0.53-0.61), and ICU readmissions (ω, 0.69; 95% CI, 0.52-0.90). Mortality measures were the most sensitive to severity adjustment (area under the receiver operating characteristic curve % difference, 29.6%); process measures were the least sensitive (area under the receiver operating characteristic curve % differences: venous thromboembolism prophylaxis, 3.4%; stress ulcer prophylaxis, 2.1%). None of the 10 indicators was clearly and consistently correlated with a majority of the other nine indicators. No indicator performed optimally across assessments. Future research should seek to define and operationalize quality in a way that is relevant to both patients and providers.
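Criterion (3), sensitivity to severity adjustment, can be illustrated as the AUC difference between models with and without patient-level variables. The sketch below uses simulated data, not Project IMPACT data; variable names are invented.

```python
# Sketch: an indicator's sensitivity to severity adjustment, measured as
# the AUC gain from adding a patient severity score to ICU indicators.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
severity = rng.normal(size=n)               # patient-level severity score
icu = rng.integers(0, 20, size=n)           # ICU identifier
icu_effect = rng.normal(0, 0.5, 20)[icu]    # ICU-level practice effect
p = 1 / (1 + np.exp(-(0.8 * severity + icu_effect)))
y = (rng.random(n) < p).astype(int)         # outcome, e.g. mortality

X_icu = np.eye(20)[icu]                     # ICU dummies only
X_full = np.column_stack([X_icu, severity]) # ICU dummies + severity

auc_icu = roc_auc_score(y, LogisticRegression(max_iter=1000)
                        .fit(X_icu, y).predict_proba(X_icu)[:, 1])
auc_full = roc_auc_score(y, LogisticRegression(max_iter=1000)
                         .fit(X_full, y).predict_proba(X_full)[:, 1])
print(f"AUC change with severity adjustment: {auc_full - auc_icu:.3f}")
```

A large AUC change signals an outcome dominated by patient characteristics; a small change signals a measure that mostly reflects what providers do.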
NASA Astrophysics Data System (ADS)
Aprilia, Ayu Rizky; Santoso, Imam; Ekasari, Dhita Murita
2017-05-01
Yogurt is a milk-based product with beneficial effects on health. The yogurt production process is very susceptible to failure because it involves bacteria and fermentation. For an industry, these risks may cause harm and have a negative impact. For a product to be successful and profitable, the risks that may occur during the production process must be analyzed. Risk analysis can identify risks in detail, prevent them, and determine their handling, so that the risks can be minimized. Therefore, this study analyzes the risks of the production process with a case study in CV.XYZ. The methods used in this research are Fuzzy Failure Mode and Effect Analysis (fuzzy FMEA) and Fault Tree Analysis (FTA). The results showed that there are 6 risks arising from equipment variables, raw material variables, and process variables. These include the critical risk of an inadequate aseptic process, specifically damage to the yogurt starter through contamination by fungi or other bacteria, and a lack of equipment sanitation. The quantitative FTA showed that the highest probability is that of an inadequate aseptic process, with a risk of 3.902%. The recommendations for improvement include establishing SOPs (Standard Operating Procedures) covering the process, workers, and environment, controlling the yogurt starter, and improving production planning and equipment sanitation using hot water immersion.
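The FTA arithmetic behind a top-event probability reduces to combining independent basic-event probabilities through OR/AND gates. A minimal sketch with illustrative probabilities (not the study's actual basic events behind the 3.902% figure):

```python
# Sketch: fault-tree top-event probability from independent basic events.
# All probabilities below are illustrative placeholders.
def gate_or(*p):
    """P(A or B or ...) for independent events."""
    prod = 1.0
    for pi in p:
        prod *= (1.0 - pi)
    return 1.0 - prod

def gate_and(*p):
    """P(A and B and ...) for independent events."""
    prod = 1.0
    for pi in p:
        prod *= pi
    return prod

starter_contaminated = gate_or(0.015, 0.010)   # fungus, other bacteria
poor_sanitation = 0.012
lack_of_aseptic = gate_or(starter_contaminated, poor_sanitation)
print(f"P(inadequate aseptic process) = {lack_of_aseptic:.3%}")
```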
NASA Technical Reports Server (NTRS)
Atwater, James; Wheeler, Richard, Jr.; Akse, James; Jovanovic, Goran; Reed, Brian
2013-01-01
To support long-duration manned missions in space such as a permanent lunar base, Mars transit, or Mars Surface Mission, improved methods for the treatment of solid wastes, particularly methods that recover valuable resources, are needed. The ability to operate under microgravity and hypogravity conditions is essential to meet this objective. The utilization of magnetic forces to manipulate granular magnetic media has provided the means to treat solid wastes under variable gravity conditions by filtration using a consolidated magnetic media bed followed by thermal processing of the solid wastes in a fluidized bed reactor. Non-uniform magnetic fields will produce a magnetic field gradient in a bed of magnetically susceptible media toward the distributor plate of a fluidized bed reactor. A correctly oriented magnetic field gradient will generate a downward direct force on magnetic media that can substitute for gravitational force in microgravity, or which may augment low levels of gravity, such as on the Moon or Mars. This approach is termed Gradient Magnetically Assisted Fluidization (G-MAFB), in which the magnitude of the force on the fluidized media depends upon the intensity of the magnetic field (H), the intensity of the field gradient (dH/dz), and the magnetic susceptibility of the media. Fluidized beds based on the G-MAFB process can operate in any gravitational environment by tuning the magnetic field appropriately. Magnetic materials and methods have been developed that enable G-MAFB operation under variable gravity conditions.
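For linearly magnetizable media, the axial magnetic body force that substitutes for gravity in G-MAFB can be written in the standard magnetization-force form sketched below; the symbols are generic, not notation taken from the paper.

```latex
% Sketch: axial force per unit volume on linearly magnetizable media
% (magnetization M = chi H), directed toward increasing field strength.
\[
  f_z \;=\; \mu_0\,\chi\,H\,\frac{\mathrm{d}H}{\mathrm{d}z}
\]
% With the gradient oriented toward the distributor plate, the effective
% specific weight of the media becomes
\[
  \rho_p\, g_{\mathrm{eff}} \;=\; \rho_p\, g \;+\; \mu_0\,\chi\,H\,\frac{\mathrm{d}H}{\mathrm{d}z},
\]
% so in microgravity the magnetic term can stand in for rho_p g entirely,
% and on the Moon or Mars it augments the reduced gravitational term.
```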
Tafe, Laura J; Allen, Samantha F; Steinmetz, Heather B; Dokus, Betty A; Cook, Leanne J; Marotti, Jonathan D; Tsongalis, Gregory J
2014-08-01
HER2 fluorescence in-situ hybridization (FISH) is used in breast and gastro-esophageal carcinoma for determining HER2 gene amplification and patients' eligibility for HER2 targeted therapeutics. Traditional manual processing of the FISH slides is labor intensive because of multiple steps that require hands-on manipulation of the slides and specifically timed intervals between steps. This highly manual processing also introduces inter-run and inter-operator variability that may affect the quality of the FISH result. Therefore, we sought to incorporate an automated processing instrument into our FISH workflow. Twenty-six cases including breast (20) and gastro-esophageal (6) cancer, comprising 23 biopsies and three excision specimens, were tested for HER2 FISH (Pathvysion, Abbott) using the Thermobrite Elite (TBE) system (Leica). Up to 12 slides can be run simultaneously. All cases were previously tested by the Pathvysion HER2 FISH assay with manual preparation. Twenty cells were counted by two observers for each case; five cases were tested on three separate runs by different operators to evaluate the precision and inter-operator variability. There was 100% concordance in the scoring between the manual and TBE methods as well as among the five cases that were tested on three runs. Only one case failed, due to poor probe hybridization. In total, seven cases were positive for HER2 amplification (HER2:CEP17 ratio >2.2) and the remaining 19 were negative (HER2:CEP17 ratio <1.8) utilizing the 2007 ASCO/CAP scoring criteria. Due to the automated denaturation and hybridization, each run yielded a labor reduction of 3.5 h, which could then be dedicated to other lab functions. The TBE is a walk-away pre- and post-hybridization system that automates FISH slide processing, improves workflow and consistency, and saves approximately 3.5 h of technologist time. The instrument has a small footprint, thus occupying minimal counter space. TBE-processed slides performed exceptionally well in comparison to the manual technique, with no disagreement in HER2 amplification status. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Arnold, Jeffrey; Clark, Martyn; Gutmann, Ethan; Wood, Andy; Nijssen, Bart; Rasmussen, Roy
2016-04-01
The United States Army Corps of Engineers (USACE) has had primary responsibility for multi-purpose water resource operations on most of the major river systems in the U.S. for more than 200 years. In that time, the USACE projects and programs making up those operations have proved mostly robust against the range of natural climate variability encountered over their operating life spans. However, in some watersheds and for some variables, climate change now is known to be shifting the hydroclimatic baseline around which that natural variability occurs and changing the range of that variability as well. This makes historical stationarity an inappropriate basis for assessing continued project operations under climate-changed futures. That means new hydroclimatic projections are required at multiple scales to inform decisions about specific threats and impacts, and for possible adaptation responses to limit water-resource vulnerabilities and enhance operational resilience. However, projections of possible future hydroclimatologies have myriad complex uncertainties that require explicit guidance for interpreting and using them to inform those decisions about climate vulnerabilities and resilience. Moreover, many of these uncertainties overlap and interact. Recent work, for example, has shown the importance of assessing the uncertainties from multiple sources including: global model structure [Meehl et al., 2005; Knutti and Sedlacek, 2013]; internal climate variability [Deser et al., 2012; Kay et al., 2014]; climate downscaling methods [Gutmann et al., 2012; Mearns et al., 2013]; and hydrologic models [Addor et al., 2014; Vano et al., 2014; Mendoza et al., 2015]. Revealing, reducing, and representing these uncertainties is essential for defining the plausible quantitative climate change narratives required to inform water-resource decision-making. And to be useful, such quantitative narratives, or storylines, of climate change threats and hydrologic impacts must sample from the full range of uncertainties associated with all parts of the simulation chain, from global climate models with simulations of natural climate variability, through regional climate downscaling, and on to modeling of affected hydrologic processes and downstream water resources impacts. This talk will present part of the work underway now both to reveal and reduce some important uncertainties and to develop explicit guidance for future generation of quantitative hydroclimatic storylines. Topics will include: 1- model structural and parameter-set limitations of some methods widely used to quantify climate impacts to hydrologic processes [Gutmann et al., 2014; Newman et al., 2015]; 2- development and evaluation of new, spatially consistent, U.S. national-scale climate downscaling and hydrologic simulation capabilities directly relevant at the multiple scales of water-resource decision-making [Newman et al., 2015; Mizukami et al., 2015; Gutmann et al., 2016]; and 3- development and evaluation of advanced streamflow forecasting methods to reduce and represent integrated uncertainties in a tractable way [Wood et al., 2014; Wood et al., 2015]. A key focus will be areas where climatologic and hydrologic science is currently under-developed to inform decisions - or is perhaps wrongly scaled or misapplied in practice - indicating the need for additional fundamental science and interpretation.
Bio-inspired online variable recruitment control of fluidic artificial muscles
NASA Astrophysics Data System (ADS)
Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew
2016-12-01
This paper details the creation of a hybrid variable recruitment control scheme for fluidic artificial muscle (FAM) actuators with an emphasis on maximizing system efficiency and switching control performance. Variable recruitment is the process of altering a system’s active number of actuators, allowing operation in distinct force regimes. Previously, FAM variable recruitment was only quantified with offline, manual valve switching; this study addresses the creation and characterization of novel, on-line FAM switching control algorithms. The bio-inspired algorithms are implemented in conjunction with a PID and model-based controller, and applied to a simulated plant model. Variable recruitment transition effects and chatter rejection are explored via a sensitivity analysis, allowing a system designer to weigh tradeoffs in actuator modeling, algorithm choice, and necessary hardware. Variable recruitment is further developed through simulation of a robotic arm tracking a variety of spline position inputs, requiring several levels of actuator recruitment. Switching controller performance is quantified and compared with baseline systems lacking variable recruitment. The work extends current variable recruitment knowledge by creating novel online variable recruitment control schemes, and exploring how online actuator recruitment affects system efficiency and control performance. Key topics associated with implementing a variable recruitment scheme, including the effects of modeling inaccuracies, hardware considerations, and switching transition concerns are also addressed.
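The recruitment decision itself can be sketched as threshold switching with a hysteresis band for chatter rejection; the actuator capacity, band width, and demand sequence below are illustrative, not values from the paper.

```python
# Sketch: select the number of recruited FAM actuators from force demand,
# with a hysteresis band so the level does not chatter near thresholds.
def recruit(force_demand, level, capacity_per_fam=150.0, n_max=4,
            hysteresis=0.1):
    """Return the recruitment level (active FAM count) for this step."""
    up = level * capacity_per_fam                       # current capacity
    down = (level - 1) * capacity_per_fam * (1.0 - hysteresis)
    if force_demand > up and level < n_max:
        return level + 1                                # recruit one more
    if level > 1 and force_demand < down:
        return level - 1                                # de-recruit one
    return level

level = 1
for demand in [100, 170, 160, 140, 320, 300, 120, 90]:  # N, illustrative
    level = recruit(demand, level)
    print(f"demand {demand:5.0f} N -> {level} FAM(s) active")
```

The hysteresis band is what rejects chatter: a demand of 140 N keeps two actuators recruited rather than bouncing between levels at the 150 N boundary.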
A waste characterisation procedure for ADM1 implementation based on degradation kinetics.
Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F
2012-09-01
In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
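The kinetic fractionation idea can be illustrated in reduced form: fit the amplitudes of a few first-order degradation pools to a batch response curve. The sketch below uses synthetic data and non-negative least squares; the actual procedure optimizes full ADM1 input state variables against anaerobic respirometry data.

```python
# Sketch: split degradable COD into fast/medium/slow pools by fitting
# first-order conversion curves to a batch response. Rate constants and
# the synthetic "data" are illustrative, not from the study.
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0, 30, 61)                  # days
k = np.array([5.0, 0.5, 0.05])              # 1/d: fast, medium, slow pools
basis = 1.0 - np.exp(-np.outer(t, k))       # cumulative conversion curves

true_frac = np.array([20.0, 50.0, 15.0])    # gCOD converted per pool
y_obs = basis @ true_frac + np.random.default_rng(2).normal(0, 0.5, t.size)

frac, _ = nnls(basis, y_obs)                # non-negative least squares
total_cod = 100.0
print("degradable fractions (gCOD):", frac.round(1),
      "inert:", round(total_cod - frac.sum(), 1))
```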
Tighe, Patrick J.; Harle, Christopher A.; Hurley, Robert W.; Aytug, Haldun; Boezaart, Andre P.; Fillingim, Roger B.
2015-01-01
Background Given their ability to process highly dimensional datasets with hundreds of variables, machine learning algorithms may offer one solution to the vexing challenge of predicting postoperative pain. Methods Here, we report on the application of machine learning algorithms to predict postoperative pain outcomes in a retrospective cohort of 8071 surgical patients using 796 clinical variables. Five algorithms were compared in terms of their ability to forecast moderate to severe postoperative pain: Least Absolute Shrinkage and Selection Operator (LASSO), gradient-boosted decision tree, support vector machine, neural network, and k-nearest neighbor, with logistic regression included for baseline comparison. Results In forecasting moderate to severe postoperative pain for postoperative day (POD) 1, the LASSO algorithm, using all 796 variables, had the highest accuracy with an area under the receiver-operating curve (ROC) of 0.704. Next, the gradient-boosted decision tree had an ROC of 0.665 and the k-nearest neighbor algorithm had an ROC of 0.643. For POD 3, the LASSO algorithm, using all variables, again had the highest accuracy, with an ROC of 0.727. Logistic regression had a lower ROC of 0.5 for predicting pain outcomes on POD 1 and 3. Conclusions Machine learning algorithms, when combined with complex and heterogeneous data from electronic medical record systems, can forecast acute postoperative pain outcomes with accuracies similar to methods that rely only on variables specifically collected for pain outcome prediction. PMID:26031220
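A minimal sketch of the best-performing approach, an L1-penalized ("LASSO"-type) logistic model scored by ROC AUC, is shown below on simulated stand-in data of roughly the study's dimensions; it is not the authors' pipeline, and the regularization strength is arbitrary.

```python
# Sketch: L1-regularized logistic model for moderate-to-severe
# postoperative pain, evaluated by ROC AUC. Data are simulated stand-ins
# for the ~8000 patients x 796 clinical variables in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(8000, 796))
beta = np.zeros(796)
beta[:20] = rng.normal(0, 0.4, 20)          # only a few true signals
y = (rng.random(8000) < 1 / (1 + np.exp(-X @ beta))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, lasso.predict_proba(X_te)[:, 1])
print(f"AUC={auc:.3f}, nonzero coefficients={int((lasso.coef_ != 0).sum())}")
```

The L1 penalty is what makes this workable at 796 variables: most coefficients are driven exactly to zero, leaving a sparse, interpretable predictor set.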
Zhang, Xia; Hu, Changqin
2017-09-08
Penicillins are typical complex ionic samples, likely containing a large number of degradation-related impurities (DRIs) with different polarities and charge properties. It is often a challenge to develop selective and robust high performance liquid chromatography (HPLC) methods for the efficient separation of all DRIs. In this study, an analytical quality by design (AQbD) approach was proposed for stability-indicating method development for cloxacillin. The structures and the retention and UV characteristics of penicillins and their impurities were summarized and served as useful prior knowledge. Through quality risk assessment and a screening design, 3 critical process parameters (CPPs) were defined, including 2 mixture variables (MVs) and 1 process variable (PV). A combined mixture-process variable (MPV) design was conducted to evaluate the 3 CPPs simultaneously, and response surface methodology (RSM) was used to find the optimal experimental parameters. A dual gradient elution was performed to change buffer pH, mobile-phase type and strength simultaneously. The design spaces (DSs) were evaluated using Monte Carlo simulation to give their probability of meeting the specifications of the CQAs. A Plackett-Burman design was performed to test robustness around the working points and to decide the normal operating ranges (NORs). Finally, validation was performed following International Conference on Harmonisation (ICH) guidelines. To our knowledge, this is the first study to use an MPV design and dual gradient elution to develop HPLC methods and improve separations for complex ionic samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Quantization and Quantum-Like Phenomena: A Number Amplitude Approach
NASA Astrophysics Data System (ADS)
Robinson, T. R.; Haven, E.
2015-12-01
Historically, quantization has meant turning the dynamical variables of classical mechanics that are represented by numbers into their corresponding operators. Thus the relationships between classical variables determine the relationships between the corresponding quantum mechanical operators. Here, we take a radically different approach to this conventional quantization procedure. Our approach does not rely on any relations based on classical Hamiltonian or Lagrangian mechanics nor on any canonical quantization relations, nor even on any preconceptions of particle trajectories in space and time. Instead we examine the symmetry properties of certain Hermitian operators with respect to phase changes. This introduces harmonic operators that can be identified with a variety of cyclic systems, from clocks to quantum fields. These operators are shown to have the characteristics of creation and annihilation operators that constitute the primitive fields of quantum field theory. Such an approach not only allows us to recover the Hamiltonian equations of classical mechanics and the Schrödinger wave equation from the fundamental quantization relations, but also, by freeing the quantum formalism from any physical connotation, makes it more directly applicable to non-physical, so-called quantum-like systems. Over the past decade or so, there has been a rapid growth of interest in such applications. These include, the use of the Schrödinger equation in finance, second quantization and the number operator in social interactions, population dynamics and financial trading, and quantum probability models in cognitive processes and decision-making. In this paper we try to look beyond physical analogies to provide a foundational underpinning of such applications.
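The harmonic (creation/annihilation) operator structure the paper builds on can be summarized in standard notation:

```latex
% Standard harmonic-operator relations: commutator, number operator,
% and the action of the ladder operators on number states.
\[
  [\hat a, \hat a^{\dagger}] = 1, \qquad
  \hat N = \hat a^{\dagger}\hat a, \qquad
  \hat a^{\dagger}\,\lvert n\rangle = \sqrt{n+1}\,\lvert n+1\rangle, \qquad
  \hat a\,\lvert n\rangle = \sqrt{n}\,\lvert n-1\rangle .
\]
```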
Comparative Study of Microstructure and Properties of Thermal Sprayed MCrAlY Bond Coatings
NASA Astrophysics Data System (ADS)
Inglima, Michael William
A series of experiments was performed in order to observe certain process-property trends in thermally sprayed MCrAlY bond coatings for thermal barrier coating (TBC) applications in gas-turbine engines. First, the basis of gas-turbine operation and design is discussed with a focus on the Brayton cycle and basic thermodynamic properties with respect to both the thermal and fuel efficiency of the turbine. The high-temperature environment inside the gas-turbine engine creates an extremely corrosive medium in which the engineering components must operate with sufficient operating lifetimes. These engineering constraints, both thermal/fuel efficiency and operating life, pose a serious problem during long operation as well as thermal cycling of a civil aerospace engine. The concept of a thermal barrier coating is introduced along with how these coatings protect the internal engineering components, mostly in the hot section of the turbine, and increase both the efficiency and the operating life of the components. The method used to create TBCs, thermal spray processing, is then introduced along with the standard operating procedures (SOPs) used during coating deposition. The main focus of the experiments was to quantify the process-property trends seen during thermal spray processing of TBCs with respect to adhesion and the thermally grown oxide (TGO) layer, as well as how sensitive these properties are to changing variables during coating deposition. The design of experiments (DOE) method was used in order to have sufficient statistical process control over the output as well as a standard method for quantifying the results. A total of three DOEs were performed using two main types of thermal spray processes, high-velocity oxygen fuel (HVOF) and atmospheric plasma spray (APS), with a total of five different torches categorized as liquid-fuel, gas-fuel, and single-cathode plasma. The variables used in these experiments were mainly spray distance, air/fuel ratio, raster speed, powder feed rate, combustion pressure, current, and primary and secondary gas flow, as well as three different powder chemistries. The results of the experiments showed very clear process-property trends with respect to the mean bond strength of the coatings as well as TGO growth on the as-sprayed coating surface. The effect of increasing or decreasing the melting index of the powder, as well as increasing or decreasing the kinetic energy of the particles, is shown with corresponding cross-sectional microstructures of the coating interfaces. The temperature and velocity of the particles were measured with spray diagnostic sensors, and an in-situ curvature property sensor (ICP) was used to monitor the stress states of the coatings both during deposition and after, for residual stresses, and to assess how these might affect the bond strength. An SOP referred to as furnace cycling was used to quantify the TGO growth of the bond coatings by measuring the thickness via a scanning electron microscope (SEM) as well as performing energy dispersive X-ray spectroscopy (EDX) on the coatings to measure chemical changes.
Apparatus and method for microwave processing of materials
Johnson, Arvid C.; Lauf, Robert J.; Bible, Don W.; Markunas, Robert J.
1996-01-01
A variable frequency microwave heating apparatus (10) is designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity (34) for testing or other selected applications. The variable frequency heating apparatus (10) is used in the method of the present invention to monitor the resonant processing frequency within the furnace cavity (34) depending upon the material, including the state thereof, from which the workpiece (36) is fabricated. The variable frequency microwave heating apparatus (10) includes a microwave signal generator (12) and a high-power microwave amplifier (20) or a microwave voltage-controlled oscillator (14). A power supply (22) is provided for operation of the high-power microwave oscillator (14) or microwave amplifier (20). A directional coupler (24) is provided for detecting the direction and amplitude of signals incident upon and reflected from the microwave cavity (34). A first power meter (30) is provided for measuring the power delivered to the microwave furnace (32). A second power meter (26) detects the magnitude of reflected power. Reflected power is dissipated in the reflected power load (28).
Continuous-variable quantum computation with spatial degrees of freedom of photons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tasca, D. S.; Gomes, R. M.; Toscano, F.
2011-05-15
We discuss the use of the transverse spatial degrees of freedom of photons propagating in the paraxial approximation for continuous-variable information processing. Given the wide variety of linear optical devices available, a diverse range of operations can be performed on the spatial degrees of freedom of single photons. Here we show how to implement a set of continuous quantum logic gates which allow for universal quantum computation. In contrast with the usual quadratures of the electromagnetic field, the entire set of single-photon gates for spatial degrees of freedom does not require optical nonlinearity and, in principle, can be performed with a single device: the spatial light modulator. Nevertheless, nonlinear optical processes, such as four-wave mixing, are needed in the implementation of two-photon gates. The efficiency of these gates is at present very low; however, small-scale investigations of continuous-variable quantum computation are within the reach of current technology. In this regard, we show how novel cluster states for one-way quantum computing can be produced using spontaneous parametric down-conversion.
Miller, Mark W; Elliott, Matt; DeArmond, Jon; Kinyua, Maureen; Wett, Bernhard; Murthy, Sudhir; Bott, Charles B
2017-06-01
The pursuit of fully autotrophic nitrogen removal via the anaerobic ammonium oxidation (anammox) pathway has led to an increased interest in carbon removal technologies, particularly the A-stage of the adsorption/bio-oxidation (A/B) process. The high-rate operation of the A-stage and the lack of automatic process control often result in wide variations of chemical oxygen demand (COD) removal that can ultimately impact nitrogen removal in the downstream B-stage process. This study evaluated the use of dissolved oxygen (DO)- and mixed liquor suspended solids (MLSS)-based automatic control strategies through in situ on-line sensors in the A-stage of an A/B pilot study. The objective of using these control strategies was to reduce the variability of COD removal by the A-stage and thus the variability of the effluent C/N. The use of cascade DO control in the A-stage did not impact COD removal at the conditions tested in this study, likely because the bulk DO concentration (>0.5 mg/L) was maintained above the half-saturation coefficient of heterotrophic organisms for DO. MLSS-based solids retention time (SRT) control, where MLSS was used as a surrogate for SRT, did not significantly reduce the effluent C/N variability, but it did reduce COD removal variation in the A-stage by 90%.
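Cascade DO control, as commonly implemented, has an outer PI loop on dissolved oxygen setting an airflow setpoint and an inner PI loop driving the air valve. The sketch below is a toy closed-loop simulation; all gains, dynamics, and setpoints are illustrative assumptions, not the pilot plant's tuning.

```python
# Sketch: cascade DO control. Outer PI: DO error -> airflow setpoint.
# Inner PI: airflow error -> valve position. Toy first-order dynamics.
class PI:
    def __init__(self, kp, ki, lo, hi):
        self.kp, self.ki, self.lo, self.hi = kp, ki, lo, hi
        self.i = 0.0
    def step(self, err, dt):
        # integrate with clamping (simple anti-windup)
        self.i = min(max(self.i + self.ki * err * dt, self.lo), self.hi)
        return min(max(self.kp * err + self.i, self.lo), self.hi)

outer = PI(kp=100.0, ki=200.0, lo=0.0, hi=300.0)  # -> air setpoint, m3/h
inner = PI(kp=0.5, ki=2.0, lo=0.0, hi=1.0)        # -> valve position, 0..1

do, air, dt = 0.3, 0.0, 0.005                     # mg/L, m3/h, hours
for _ in range(4000):                             # 20 h of operation
    air_sp = outer.step(0.7 - do, dt)             # DO setpoint: 0.7 mg/L
    valve = inner.step((air_sp - air) / 300.0, dt)
    air += dt * (300.0 * valve - air) / 0.02      # ~1-min actuator lag
    # toy DO balance: oxygen transfer minus constant uptake, floored at 0
    do = max(do + dt * (0.05 * air * (9.0 - do) - 30.0), 0.0)
print(f"DO {do:.2f} mg/L, airflow {air:.0f} m3/h, valve {valve:.2f}")
```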
Virtual sensors for on-line wheel wear and part roughness measurement in the grinding process.
Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A; Cabanes, Itziar; Pombo, Iñigo
2014-05-19
Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations.
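A virtual sensor of this kind maps the easily measured signal (spindle power) to the hard-to-measure one (for example, surface roughness). The sketch below substitutes a lagged-input MLP for the paper's Layer-Recurrent network and trains on synthetic signals; the true power-roughness relation is, of course, not this simple.

```python
# Sketch: a virtual sensor mapping spindle power history to surface
# roughness. Lagged inputs stand in for recurrence; signals are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
power = 5 + np.cumsum(rng.normal(0, 0.02, 3000))            # kW, drifting
roughness = 0.2 + 0.05 * power + rng.normal(0, 0.01, 3000)  # Ra, um (toy)

L = 10                                        # lag window length
X = np.array([power[i - L:i] for i in range(L, len(power))])
y = roughness[L:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          shuffle=False)    # keep time order

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
err = np.abs(model.predict(X_te) - y_te)
print(f"mean absolute error: {err.mean():.3f} um")
```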
NASA Technical Reports Server (NTRS)
Gallo, C.; Kasuba, R.; Pintz, A.; Spring, J.
1986-01-01
The dynamic analysis of a horizontal axis fixed pitch wind turbine generator (WTG) rated at 56 kW is discussed. A mechanical Continuously Variable Transmission (CVT) was incorporated in the drive train to provide variable speed operation capability. One goal of the dynamic analysis was to determine if variable speed operation, by means of a mechanical CVT, is capable of capturing the transient power in the WTG/wind environment. Another goal was to determine the extent of power regulation possible with CVT operation.
Predictive displays for a process-control schematic interface.
Yin, Shanqing; Wickens, Christopher D; Helander, Martin; Laberge, Jason C
2015-02-01
Our objective was to examine the extent to which increasing the precision of predictive (rate-of-change) information in process control improves performance on a simulated process-control task. Predictive displays have been found to be useful in process control (as well as in the aviation and maritime industries). However, prior research has not examined the extent to which predictive value is increased by increasing predictor resolution, nor has it tied potential improvements to changes in process-control strategy. Fifty nonprofessional participants each controlled a simulated chemical mixing process (a honey-mixer simulation) representative of the operations found in process control. Participants in each of five groups controlled the process with either no predictor or a predictor varying in the resolution of its prediction of the process. Increasing predictor resolution generally increased the benefit of prediction over the control condition, although not monotonically. The best overall performance, combining quality and predictive ability, was obtained by the display of intermediate resolution. The two displays with the lowest resolution were clearly inferior. Predictors with higher resolution are of value but may trade off enhanced sensitivity to variable change (lower-resolution discrete-state predictors) against smoother control action (higher-resolution continuous predictors). The research provides guidelines to the process-control industry regarding displays that can most improve operator performance.
Spatial Data Exploring by Satellite Image Distributed Processing
NASA Astrophysics Data System (ADS)
Mihon, V. D.; Colceriu, V.; Bektas, F.; Allenbach, K.; Gvilava, M.; Gorgan, D.
2012-04-01
Societal needs and environmental predictions encourage the development of applications oriented toward supervising and analyzing various Earth Science phenomena. Satellite images can be explored to discover information concerning land cover, hydrology, air quality, and water and soil pollution. Spatial and environmental data can be acquired by imagery classification, consisting of data mining throughout the multispectral bands. The process takes into account a large set of variables such as satellite image type (e.g. MODIS, Landsat), the particular geographic area, soil composition, vegetation cover, and the general context (e.g. clouds, snow, and season). All these specific and variable conditions require flexible tools and applications to support an optimal search for appropriate solutions, as well as high-power computation resources. The research concerns experiments on flexible, visual descriptions of satellite image processing over distributed infrastructures (e.g. Grid, Cloud, and GPU clusters). This presentation highlights the Grid-based implementation of the GreenLand application. The GreenLand application development is based on simple but powerful notions of mathematical operators and workflows that are used in distributed and parallel executions over the Grid infrastructure. Currently it is used in three major case studies concerning the Istanbul geographical area, the Rioni River in Georgia, and the Black Sea catchment region. The GreenLand application offers a friendly user interface for viewing and editing workflows and operators. The description involves the basic operators provided by the GRASS [1] library as well as many other image-related operators supported by the ESIP platform [2]. The processing workflows are represented as directed graphs, giving the user a fast and easy way to describe complex parallel algorithms without prior knowledge of any programming language or application commands. The Web application also requires no installation on the user's side; it is accessed remotely over the Internet. Currently the GreenLand application is available through the BSC-OS Portal provided by the enviroGRIDS FP7 project [3]. This presentation aims to highlight the challenges and issues of flexible description of Grid-based processing of satellite images, interoperability with other software platforms available in the portal, as well as the particular requirements of the Black Sea related use cases.
NASA's In-Space Manufacturing Project: Materials and Manufacturing Process Development Update
NASA Technical Reports Server (NTRS)
Prater, Tracie; Bean, Quincy; Werkheiser, Niki; Ledbetter, Frank
2017-01-01
The mission of NASA's In-Space Manufacturing (ISM) project is to identify, design, and implement on-demand, sustainable manufacturing solutions for fabrication, maintenance and repair during exploration missions. ISM has undertaken a phased strategy of incrementally increasing manufacturing capabilities to achieve this goal. The ISM project began with the development of the first 3D printer for the International Space Station. To date, the printer has completed two phases of flight operations. Results from phase I specimens indicated some differences in material properties between ground-processed and ISS-processed specimens, but results of follow-on analyses of these parts and a ground-based study with an equivalent printer strongly indicate that this variability is likely attributable to differences in manufacturing process settings between the ground and flight prints rather than microgravity effects on the fused deposition modeling (FDM) process. Analysis of phase II specimens from the 3D Printing in Zero G tech demo, which shed further light on the sources of material variability, will be presented. The ISM project has also developed a materials characterization plan for the Additive Manufacturing Facility, the follow-on commercial multimaterial 3D printing facility developed for ISS by Made in Space. This work will yield a suite of characteristic property values that can inform use of AMF by space system designers. Other project activities include development of an integrated 3D printer and recycler, known as the Refabricator, by Tethers Unlimited, which will be operational on ISS in 2018. The project also recently issued a broad area announcement for a multimaterial fabrication laboratory, which may include in-space manufacturing capabilities for metals, electronics, and polymeric materials, to be deployed on ISS in the 2022 timeframe.
Trojanowicz, K; Plaza, E; Trela, J
2017-11-09
In this paper, an extension of a mathematical model of the partial nitritation-anammox process in a moving bed biofilm reactor (MBBR) is presented. The model was calibrated with a set of kinetic, stoichiometric and biofilm parameters whose values were taken from the literature and from batch tests. The model was validated with data obtained from laboratory batch experiments, a pilot-scale MBBR for reject water deammonification operated at the Himmerfjärden wastewater treatment plant, and a pilot-scale MBBR for mainstream wastewater deammonification at the Hammarby Sjöstadsverk research facility, Sweden. Simulations were conducted in the AQUASIM software. The proposed extended model proved useful for simulating the partial nitritation/anammox process in a biofilm reactor for both reject water and mainstream wastewater at variable substrate concentrations (influent total ammonium-nitrogen concentrations of 530 ± 68, 45 ± 2.6 and 38 ± 3 g N/m3 for reject water and two cases of mainstream wastewater treatment, respectively), temperatures (24 ± 2.8, 15 ± 1.1 and 18 ± 0.5°C), pH (7.8 ± 0.2, 7.3 ± 0.1 and 7.4 ± 0.1) and aeration patterns (continuous aeration and intermittent aeration with variable dissolved oxygen concentrations and lengths of aerated and anoxic phases). The model can be utilized for optimizing and testing different operational strategies of the deammonification process in biofilm systems.
System and Method for Monitoring Distributed Asset Data
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry (Inventor)
2015-01-01
A computer-based monitoring system, and a monitoring method implemented in computer software, for detecting, estimating, and reporting condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for variability of working conditions for each asset by using a regression model that characterizes asset performance. Because the assets are of the same type but not identical, the method also accounts for asset-to-asset variability, as well as for drifts and trends in asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding useful information where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.
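A hedged sketch of the "compress at the asset, fit centrally" idea: each asset reduces its raw records to the sufficient statistics of a shared regression model, and only those small matrices move to the central system. Function names and data are invented for illustration; the patent's actual method may differ.

```python
# Per-asset "compression" of raw records into regression sufficient statistics
# (X^T X, X^T y), aggregated centrally without moving the raw data.
import numpy as np

def preprocess_asset(X, y):
    """Run at each asset: reduce raw records to fixed-size sufficient statistics."""
    return X.T @ X, X.T @ y

def fit_fleet_model(stats):
    """Run centrally: combine per-asset statistics and solve the normal equations."""
    XtX = sum(s[0] for s in stats)
    Xty = sum(s[1] for s in stats)
    return np.linalg.solve(XtX, Xty)

rng = np.random.default_rng(0)
true_beta = np.array([2.0, -1.0, 0.5])    # shared performance model
stats = []
for asset in range(20):                   # 20 assets of the same type
    X = rng.normal(size=(500, 3))         # working-condition regressors
    y = X @ true_beta + 0.1 * rng.normal(size=500)
    stats.append(preprocess_asset(X, y))  # only ~12 numbers leave each asset

beta_hat = fit_fleet_model(stats)
print(beta_hat)                           # ~ [2.0, -1.0, 0.5]
```

Residuals of each asset's data against the fleet model would then flag condition changes and anomalies without ever centralizing the historical records.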
Approach for Configuring a Standardized Vessel for Processing Radioactive Waste Slurries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bamberger, Judith A.; Enderlin, Carl W.; Minette, Michael J.
2015-09-10
A standardized vessel design is being considered at the Waste Treatment and Immobilization Plant (WTP) that is under construction at Hanford, Washington. The standardized vessel design will be used for storing, blending, and chemical processing of slurries that exhibit a variable process feed, including Newtonian to non-Newtonian rheologies over a range of solids loadings. Developing a standardized vessel is advantageous and reduces the testing required to evaluate the performance of the design. The objectives of this paper are to: 1) present a design strategy for developing a standard vessel mixing system design for the pretreatment portion of the waste treatment plant that must process rheologically and physically challenging process streams, 2) identify performance criteria that the design for the standard vessel must satisfy, 3) present parameters that are to be used for assessing the performance criteria, and 4) describe operation of the selected technology. Vessel design performance will be assessed for both Newtonian and non-Newtonian simulants which represent a range of waste types expected during operation. Desired conditions for the vessel operations are the ability to shear the slurry so that flammable gas does not accumulate within the vessel, that settled solids will be mobilized, that contents can be blended, and that contents can be transferred from the vessel. A strategy is presented for adjusting the vessel configuration to ensure that all these conditions are met.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holcomb, David Eugene
2015-01-01
Fluoride salt-cooled High temperature Reactors (FHRs) are entering into early phase engineering development. Initial candidate technologies have been identified to measure all of the required process variables. The purpose of this paper is to describe the proposed measurement techniques in sufficient detail to enable assessment of the proposed instrumentation suite and to support development of the component technologies. This paper builds upon the instrumentation chapter of the recently published FHR technology development roadmap. Locating instruments outside of the intense core radiation and high-temperature fluoride salt environment significantly decreases their environmental tolerance requirements. Under operating conditions, FHR primary coolant salt is a transparent, low-vapor-pressure liquid. Consequently, FHRs can employ standoff optical measurements from above the salt pool to assess in-vessel conditions. For example, the core outlet temperature can be measured by observing the fuel's blackbody emission. Similarly, the intensity of the core's Cerenkov glow indicates the fission power level. Short-lived activation of the primary coolant provides another means for standoff measurements of process variables. The primary coolant flow and neutron flux can be measured using gamma spectroscopy along the primary coolant piping. FHR operation entails a number of process measurements. Reactor thermal power and core reactivity are the most significant variables for process control. Thermal power can be determined by measuring the primary coolant mass flow rate and temperature rise across the core. The leading candidate technologies for primary coolant temperature measurement are Au-Pt thermocouples and Johnson noise thermometry. Clamp-on ultrasonic flow measurement, which includes high-temperature tolerant standoffs, is a potential coolant flow measurement technique. Also, the salt redox condition will be monitored as an indicator of its corrosiveness. Both electrochemical techniques and optical spectroscopy are candidate fluoride salt redox measurement methods. Coolant level measurement can be performed using radar-level gauges located in standpipes above the reactor vessel. While substantial technical development remains for most of the instruments, industrially compatible instruments based upon proven technology can be reasonably extrapolated from the current state of the art.
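The thermal power determination mentioned reduces to a one-line energy balance; the sketch below evaluates it with an assumed FLiBe specific heat and illustrative flow and temperature values.

```python
# Back-of-envelope check of the thermal power relation the abstract describes:
# P_thermal = m_dot * c_p * (T_out - T_in). The c_p value for FLiBe is an
# assumption taken from commonly quoted literature figures (~2416 J/(kg*K)).
m_dot = 1000.0               # primary salt mass flow rate, kg/s (illustrative)
c_p   = 2416.0               # FLiBe specific heat, J/(kg*K) (assumed literature value)
T_in, T_out = 600.0, 700.0   # core inlet/outlet temperatures, deg C (illustrative)

P_thermal = m_dot * c_p * (T_out - T_in)   # W
print(f"{P_thermal/1e6:.0f} MWt")          # ~242 MWt for these inputs
```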
Huang, Mingzhi; Wan, Jinquan; Hu, Kang; Ma, Yongwen; Wang, Yan
2013-12-01
An on-line hybrid fuzzy-neural soft-sensing model-based control system was developed to optimize the dissolved oxygen concentration in a bench-scale anaerobic/anoxic/oxic (A(2)/O) process. In order to improve the performance of the control system, a self-adapted fuzzy c-means clustering algorithm and adaptive network-based fuzzy inference system (ANFIS) models were employed. The proposed control system permits the on-line implementation of every operating strategy of the experimental system. A set of experiments involving variable hydraulic retention time (HRT), influent pH (pH), dissolved oxygen in the aerobic reactor (DO), and mixed-liquid return ratio (r) was carried out. Using the proposed system, the effluent COD stabilized at or below the set-point. The improvement was achieved with optimum dissolved oxygen concentration because the performance of the treatment process was optimized using operating rules implemented in real time. The system allows various expert operational approaches to be deployed with the goal of minimizing organic substances in the outlet while using the minimum amount of energy.
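As a minimal sketch of the clustering step, the fragment below implements standard fuzzy c-means in NumPy (the paper's self-adapted variant and the ANFIS layers on top of it are not reproduced); the cluster count, fuzzifier, and synthetic data are illustrative.

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, tol=1e-5, max_iter=200, seed=0):
    """Standard fuzzy c-means; returns cluster centers and membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))        # standard FCM membership update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# e.g., cluster observed operating states (HRT, pH, DO, return ratio) into
# fuzzy operating regimes on which an ANFIS rule base could be built
X = np.random.default_rng(1).normal(size=(200, 4))
centers, U = fuzzy_cmeans(X, c=3)
print(centers.shape, U.sum(axis=1)[:3])      # (3, 4); rows of U sum to 1
```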
Field evaluation of a prototype paper-based point-of-care fingerstick transaminase test.
Pollock, Nira R; McGray, Sarah; Colby, Donn J; Noubary, Farzad; Nguyen, Huyen; Nguyen, The Anh; Khormaee, Sariah; Jain, Sidhartha; Hawkins, Kenneth; Kumar, Shailendra; Rolland, Jason P; Beattie, Patrick D; Chau, Nguyen V; Quang, Vo M; Barfield, Cori; Tietje, Kathy; Steele, Matt; Weigl, Bernhard H
2013-01-01
Monitoring for drug-induced liver injury (DILI) via serial transaminase measurements in patients on potentially hepatotoxic medications (e.g., for HIV and tuberculosis) is routine in resource-rich nations, but often unavailable in resource-limited settings. Towards enabling universal access to affordable point-of-care (POC) screening for DILI, we have performed the first field evaluation of a paper-based, microfluidic fingerstick test for rapid, semi-quantitative, visual measurement of blood alanine aminotransferase (ALT). Our objectives were to assess operational feasibility, inter-operator variability, lot variability, device failure rate, and accuracy, to inform device modification for further field testing. The paper-based ALT test was performed at POC on fingerstick samples from 600 outpatients receiving HIV treatment in Vietnam. Results, read independently by two clinic nurses, were compared with gold-standard automated (Roche Cobas) results from venipuncture samples obtained in parallel. Two device lots were used sequentially. We demonstrated high inter-operator agreement, with 96.3% (95% C.I., 94.3-97.7%) agreement in placing visual results into clinically-defined "bins" (<3x, 3-5x, and >5x upper limit of normal), >90% agreement in validity determination, and intraclass correlation coefficient of 0.89 (95% C.I., 0.87-0.91). Lot variability was observed in % invalids due to hemolysis (21.1% for Lot 1, 1.6% for Lot 2) and correlated with lots of incorporated plasma separation membranes. Invalid rates <1% were observed for all other device controls. Overall bin placement accuracy for the two readers was 84% (84.3%/83.6%). Our findings of extremely high inter-operator agreement for visual reading-obtained in a target clinical environment, as performed by local practitioners-indicate that the device operation and reading process is feasible and reproducible. Bin placement accuracy and lot-to-lot variability data identified specific targets for device optimization and material quality control. This is the first field study performed with a patterned paper-based microfluidic device and opens the door to development of similar assays for other important analytes.
A method for predicting optimized processing parameters for surfacing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dupont, J.N.; Marder, A.R.
1994-12-31
Welding is used extensively for surfacing applications. To operate a surfacing process efficiently, the variables must be optimized to produce low levels of dilution with the substrate while maintaining high deposition rates. An equation for dilution in terms of the welding variables, thermal efficiency factors, and thermophysical properties of the overlay and substrate was developed by balancing energy and mass terms across the welding arc. To test the validity of the resultant dilution equation, the PAW, GTAW, GMAW, and SAW processes were used to deposit austenitic stainless steel onto carbon steel over a wide range of parameters. Arc efficiency measurements were conducted using a Seebeck arc welding calorimeter. Melting efficiency was determined based on knowledge of the arc efficiency. Dilution was determined for each set of processing parameters using a quantitative image analysis system. The pertinent equations indicate dilution is a function of arc power (corrected for arc efficiency), filler metal feed rate, melting efficiency, and thermophysical properties of the overlay and substrate. With the aid of the dilution equation, the effect of processing parameters on dilution is presented by a new processing diagram. A new method is proposed for determining dilution from welding variables. Dilution is shown to depend on the arc power, filler metal feed rate, arc and melting efficiency, and the thermophysical properties of the overlay and substrate. Calculated dilution levels were compared with measured values over a large range of processing parameters and good agreement was obtained. The results have been applied to generate a processing diagram which can be used to: (1) predict the maximum deposition rate for a given arc power while maintaining adequate fusion with the substrate, and (2) predict the resultant level of dilution with the substrate.
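The abstract's energy/mass balance suggests one plausible rearrangement, shown below: efficiency-corrected arc power melts filler plus substrate, and dilution is the substrate share of the melted volume. This is a hedged reconstruction, the paper's exact equation may differ, and all property values are assumed.

```python
# Hedged reconstruction of the dilution balance described in the abstract.
eta_a = 0.70        # arc efficiency (Seebeck-calorimeter-type value, assumed)
eta_m = 0.40        # melting efficiency (assumed)
P     = 5000.0      # arc power, W
V_f   = 40.0        # filler metal volumetric feed rate, mm^3/s (illustrative)
E_f   = 10.5        # enthalpy to melt filler, J/mm^3 (typical steel, assumed)
E_s   = 10.5        # enthalpy to melt substrate, J/mm^3 (assumed)

V_s = (eta_a * eta_m * P - V_f * E_f) / E_s    # substrate melting rate, mm^3/s
D = V_s / (V_s + V_f)                          # dilution = substrate share of melt
print(f"dilution = {D:.2f}")                   # ~0.70 for these inputs
```

Raising the filler feed rate at fixed arc power lowers D, which is the trade-off the processing diagram in the paper maps out.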
Telfer, Scott; Gibson, Kellie S; Hennessy, Kym; Steultjens, Martijn P; Woodburn, Jim
2012-05-01
To determine, for a number of techniques used to obtain foot shape based around plaster casting, foam box impressions, and 3-dimensional scanning, (1) the effect the technique has on the overall reproducibility of custom foot orthoses (FOs) in terms of inter- and intracaster reliability and (2) the reproducibility of FO design by using computer-aided design (CAD) software in terms of inter- and intra-CAD operator reliability for all these techniques. Cross-sectional study. University laboratory. Convenience sample of individuals (N=22) with noncavus foot types. Not applicable. Parameters of the FO design (length, width at forefoot, width at rearfoot, and peak medial arch height), the forefoot to rearfoot angle of the foot shape, and overall volume match between device designs. For intra- and intercaster reliability of the different methods of obtaining the foot shape, all methods fell below the reproducibility quality threshold for the medial arch height of the device, and volume matching was <80% for all methods. The more experienced CAD operator was able to achieve excellent reliability (intraclass correlation coefficients >0.75) for all variables with the exception of forefoot to rearfoot angle, with overall volume matches of >87% of the devices. None of the techniques for obtaining foot shape met all the criteria for excellent reproducibility, with the peak arch height being particularly variable. Additional variability is added at the CAD stage of the FO design process, although with adequate operator experience good to excellent reproducibility may be achieved at this stage. Taking only basic linear or angular measurement parameters from the device may fail to fully capture the variability in FO design. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westendorf, Tiffany; Buddle, Stanlee; Caraher, Joel
The objective of this project is to design and build a bench-scale process for a novel phase-changing aminosilicone-based CO2-capture solvent. The project will establish scalability and technical and economic feasibility of using a phase-changing CO2-capture absorbent for post-combustion capture of CO2 from coal-fired power plants. The U.S. Department of Energy’s goal for Transformational Carbon Capture Technologies is the development of technologies available for demonstration by 2025 that can capture 90% of emitted CO2 with at least 95% CO2 purity for less than $40/tonne of CO2 captured. In the first budget period of the project, the bench-scale phase-changing CO2 capture process was designed using data and operating experience generated under a previous project (ARPA-e project DE-AR0000084). Sizing and specification of all major unit operations was completed, including detailed process and instrumentation diagrams. The system was designed to operate over a wide range of operating conditions to allow for exploration of the effect of process variables on CO2 capture performance. In the second budget period of the project, individual bench-scale unit operations were tested to determine the performance of each unit. Solids production was demonstrated in dry simulated flue gas across a wide range of absorber operating conditions, with single-stage CO2 conversion rates up to 75 mol%. Desorber operation was demonstrated in batch mode, resulting in desorption performance consistent with the equilibrium isotherms for the GAP-0/CO2 reaction. Important risks associated with the impact of gas humidity on solids consistency and of desorber temperature on thermal degradation were explored, and adjustments to the bench-scale process were made to address those effects. Corrosion experiments were conducted to support selection of suitable materials of construction for the major unit operations in the process. The bench-scale unit operations were assembled into a continuous system to support steady-state system testing. In the third budget period of the project, continuous system testing was conducted, including closed-loop operation of the absorber and desorber systems. Slurries of GAP-0/GAP-0 carbamate/water mixtures produced in the absorber were pumped successfully to the desorber unit, and regenerated solvent was returned to the absorber. A techno-economic analysis, EH&S risk assessment, and solvent manufacturability study were completed.
Garren, Madeleine V; Sexauer, Stephen B; Page, Terry L
2013-01-01
There have been several studies on the role of circadian clocks in the regulation of associative learning and memory processes in both vertebrate and invertebrate species. The results have been quite variable and at present it is unclear to what extent the variability observed reflects species differences or differences in methodology. Previous results have shown that following differential classical conditioning in the cockroach, Rhyparobia maderae, in an olfactory discrimination task, formation of the short-term and long-term memory is under strict circadian control. In contrast, there appeared to be no circadian regulation of the ability to recall established memories. In the present study, we show that following operant conditioning of the same species in a very similar olfactory discrimination task, there is no impact of the circadian system on either short-term or long-term memory formation. On the other hand, ability to recall established memories is strongly tied to the circadian phase of training. On the basis of these data and those previously reported for phylogenetically diverse species, it is suggested that there may be fundamental differences in the way the circadian system regulates learning and memory in classical and operant conditioning.
Tacholess order-tracking approach for wind turbine gearbox fault detection
NASA Astrophysics Data System (ADS)
Wang, Yi; Xie, Yong; Xu, Guanghua; Zhang, Sicong; Hou, Chenggang
2017-09-01
Monitoring of wind turbines under variable-speed operating conditions has become an important issue in recent years. The gearbox of a wind turbine is the most important transmission unit; it generally exhibits complex vibration signatures due to random variations in operating conditions. Spectral analysis is one of the main approaches in vibration signal processing. However, spectral analysis is based on a stationary assumption and thus inapplicable to the fault diagnosis of wind turbines under variable-speed operating conditions. This constraint limits the application of spectral analysis to wind turbine diagnosis in industrial applications. Although order-tracking methods have been proposed for wind turbine fault detection in recent years, current methods are only applicable to cases in which the instantaneous shaft phase is available. For wind turbines with limited structural spaces, collecting phase signals with tachometers or encoders is difficult. In this study, a tacholess order-tracking method for wind turbines is proposed to overcome the limitations of traditional techniques. The proposed method extracts the instantaneous phase from the vibration signal, resamples the signal at equiangular increments, and calculates the order spectrum for wind turbine fault identification. The effectiveness of the proposed method is experimentally validated with the vibration signals of wind turbines.
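The tacholess chain the abstract describes can be sketched in a few lines: estimate the instantaneous shaft phase from the vibration signal itself (band-pass filtering plus a Hilbert transform around the 1x shaft harmonic), resample the signal at equiangular increments, and compute the order spectrum. The synthetic signal, filter band, and 64-samples-per-revolution choice below are all illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 2000.0
t = np.arange(0, 10, 1/fs)
f_shaft = 10 + 2*np.sin(0.5*t)                    # variable shaft speed, Hz
phase = 2*np.pi*np.cumsum(f_shaft)/fs             # true shaft angle (unknown in practice)
x = np.sin(phase) + 0.5*np.sin(5*phase) + 0.05*np.random.randn(len(t))

# 1) isolate the 1x shaft harmonic and extract its instantaneous phase
b, a = butter(4, [6/(fs/2), 16/(fs/2)], btype="band")
phi = np.unwrap(np.angle(hilbert(filtfilt(b, a, x))))

# 2) resample the raw signal at equiangular increments
angles = np.arange(phi[0], phi[-1], 2*np.pi/64)   # 64 samples per revolution
x_ang = np.interp(angles, phi, x)

# 3) order spectrum: FFT over the angle domain
orders = np.fft.rfftfreq(len(x_ang), d=1/64)      # axis in shaft orders
spectrum = np.abs(np.fft.rfft(x_ang))/len(x_ang)
mask = orders > 2
print(orders[mask][np.argmax(spectrum[mask])])    # ~5: the higher-order component
```

Because the spectrum is taken over shaft angle rather than time, the order peaks stay sharp even though the speed varies, which is exactly what fault signatures of a variable-speed gearbox require.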
Inflow forecasting model construction with stochastic time series for coordinated dam operation
NASA Astrophysics Data System (ADS)
Kim, T.; Jung, Y.; Kim, H.; Heo, J. H.
2014-12-01
Dam inflow forecasting is one of the most important tasks in dam operation for effective water resources management and control. In general, dam inflow forecasting with a stochastic time series model is applicable when the data are stationary, because most stochastic models are based on stationarity. However, recent hydrological data often no longer satisfy stationarity because of climate change. Therefore, a stochastic time series model that can consider seasonality and trend in the data series, the SARIMAX (Seasonal AutoRegressive Integrated Moving Average with eXogenous variables) model, was constructed in this study. This SARIMAX model can increase the performance of a stochastic time series model by considering nonstationarity components and external variables such as precipitation. For application, models were constructed for four coordinated dams on the Han River in South Korea with monthly time series data. As a result, the models for each dam have similar performance, and it would be possible to use the model for coordinated dam operation. Acknowledgement: This research was supported by a grant, 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-NH-12-57], from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
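For readers unfamiliar with the model class, a SARIMAX fit with precipitation as the exogenous regressor can be sketched with statsmodels; the orders, synthetic data, and the reuse of past precipitation as "future" exogenous input below are placeholders rather than the paper's calibrated setup.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n = 120                                             # 10 years of monthly data
month = np.arange(n)
precip = 50 + 30*np.sin(2*np.pi*month/12) + rng.normal(0, 5, n)
inflow = 0.8*precip + 10*np.sin(2*np.pi*month/12 - 0.5) + rng.normal(0, 3, n)

idx = pd.date_range("2004-01", periods=n, freq="MS")
model = SARIMAX(pd.Series(inflow, index=idx),
                exog=pd.Series(precip, index=idx),   # external variable
                order=(1, 0, 1),
                seasonal_order=(1, 1, 1, 12))        # seasonality + trend terms
res = model.fit(disp=False)
print(res.forecast(steps=12, exog=precip[:12]))      # needs a future precip input
```

In an operational setting the exogenous input would come from a precipitation forecast, which is what lets the model react to nonstationary forcing.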
Simulation of Assembly Processes with Techniques of Virtual Reality
NASA Astrophysics Data System (ADS)
García García, Manuel; Arenas Reina, José Manuel; Lite, Alberto Sánchez; Sebastián Pérez, Miguel Ángel
2009-11-01
The use of virtual reality techniques in industrial processes provides a realistic approach to the product life cycle. For the manual assembly of components, the use of virtual environments facilitates a simultaneous engineering in which variables such as human factors and productivity play a real role. Moreover, in the current phase of industrial competition, rapid adjustment to client needs and to the market situation is required. In this work, the assembly of the front components of a vehicle is analyzed using virtual reality tools and following a product-process design methodology that covers every stage of service life. This study is based on workstation design, taking into account productive and human factors from the ergonomic point of view and implementing a postural study of every assembly operation, leaving the remaining stages for a later study. The design is optimized by applying this methodology together with virtual reality tools. A 15% reduction in assembly time and a 90% reduction in musculoskeletal disorders across assembly operations were also achieved.
Ethanol Seeking by Long Evans Rats Is Not Always a Goal-Directed Behavior
Mangieri, Regina A.; Cofresí, Roberto U.; Gonzales, Rueben A.
2012-01-01
Background Two parallel and interacting processes are said to underlie animal behavior, whereby learning and performance of a behavior is at first via conscious and deliberate (goal-directed) processes, but after initial acquisition, the behavior can become automatic and stimulus-elicited (habitual). With respect to instrumental behaviors, animal learning studies suggest that the duration of training and the action-outcome contingency are two factors involved in the emergence of habitual seeking of “natural” reinforcers (e.g., sweet solutions, food or sucrose pellets). To rigorously test whether behaviors reinforced by abused substances such as ethanol, in particular, similarly become habitual was the primary aim of this study. Methodology/Principal Findings Male Long Evans rats underwent extended or limited operant lever press training with 10% sucrose/10% ethanol (10S10E) reinforcement (variable interval (VI) or variable ratio (VR) schedule of reinforcement), or with 10% sucrose (10S) reinforcement (VI schedule only). Once training and pretesting were complete, the impact of outcome devaluation on operant behavior was evaluated after lithium chloride injections were paired with the reinforcer, or unpaired 24 hours later. After limited, but not extended, instrumental training, lever pressing by groups trained under VR with 10S10E and under VI with 10S was sensitive to outcome devaluation. In contrast, responding by both the extended and limited training 10S10E VI groups was not sensitive to ethanol devaluation during the test for habitual behavior. Conclusions/Significance Operant behavior by rats trained to self-administer an ethanol-sucrose solution showed variable sensitivity to a change in the value of ethanol, with relative insensitivity developing sooner in animals that received time-variable ethanol reinforcement during training sessions. One important implication, with respect to substance abuse in humans, is that initial learning about the relationship between instrumental actions and the opportunity to consume ethanol-containing drinks can influence the time course for the development or expression of habitual ethanol seeking behavior. PMID:22870342
Distributed snow modeling suitable for use with operational data for the American River watershed.
NASA Astrophysics Data System (ADS)
Shamir, E.; Georgakakos, K. P.
2004-12-01
The mountainous terrain of the American River watershed (~4300 km2) on the western slope of the northern Sierra Nevada is subject to significant variability in the atmospheric forcing that controls snow accumulation and ablation processes (i.e., precipitation, surface temperature, and radiation). For a hydrologic model that attempts to predict both short- and long-term streamflow discharges, a plausible description of the seasonal and intermittent winter snowpack accumulation and ablation is crucial. At present the NWS-CNRFC operational snow model is implemented in a semi-distributed manner (modeling units of about 100-1000 km2) and therefore lumps distinct spatial variability of snow processes. In this study we attempt to account for the spatial variability of precipitation, temperature, and radiation by constructing a distributed snow accumulation and melt model suitable for use with commonly available sparse data. An adaptation of the NWS Snow17 energy and mass balance model used operationally at the NWS River Forecast Centers is implemented on 1 km2 grid cells with distributed inputs and model parameters. The model inputs (i.e., precipitation and surface temperature) are interpolated from observed point data. Surface temperature was interpolated over the basin based on adiabatic lapse rates using topographic information, whereas precipitation was interpolated based on maps of climatic mean annual rainfall distribution acquired from PRISM. The model parameters that control the melting rate due to radiation were interpolated based on aspect. The study was conducted for the entire American basin for the 1999-2000 snow season. Validation of the snow water equivalent (SWE) prediction is done by comparison with observations from 12 snow sensors. The snow cover area (SCA) prediction was evaluated by comparison with remotely sensed 500 m daily snow cover derived from MODIS. The results show that the distribution of snow over the area is well captured and that the quantities compare well with the snow gauge estimates at high elevations.
Transition of NOAA's GPS-Met Data Acquisition and Processing System to the Commercial Sector
NASA Astrophysics Data System (ADS)
Jackson, M. E.; Holub, K.; Callahan, W.; Blatt, S.
2014-12-01
In April of 2014, NOAA/OAR/ESRL Global Systems Division (GSD) and Trimble, in collaboration with Earth Networks, Inc. (ENI), signed a Cooperative Research and Development Agreement (CRADA) to transfer the existing NOAA GPS-Met Data Acquisition and Processing System (GPS-Met DAPS) technology to a commercial Trimble/ENI partnership. NOAA's GPS-Met DAPS is currently operated in a pseudo-operational mode but has proven highly reliable, running at over 95% uptime. The DAPS uses the GAMIT software to ingest dual-frequency carrier phase GPS/GNSS observations and ancillary information such as real-time satellite orbits to estimate zenith tropospheric delays (ZTD) and, where surface MET data are available, retrieve integrated precipitable water vapor (PWV). The NOAA data and products are made available to end users in near real time. The Trimble/ENI partnership will use the Trimble Pivot™ software with the Atmosphere App to calculate zenith tropospheric delay (ZTD), slant tropospheric delay, and integrated precipitable water vapor (PWV). Evaluation of the Trimble software is underway, starting with a comparison of ZTD and PWV values determined from GPS stations located near NOAA Radiosonde Observation (Upper-Air Observation) launch sites. A success metric was established that requires Trimble's PWV estimates to match ESRL/GSD's to within 1.5 mm 95% of the time, which corresponds to a ZTD uncertainty of less than 10 mm 95% of the time. Initial results indicate that the Trimble/ENI data meet and exceed the ZTD metric, but for some stations PWV estimates are out of specification. These discrepancies are primarily due to how offsets between MET and GPS stations are handled and are easily resolved. Additional test networks are proposed that include low terrain/high moisture variability stations, high terrain/low moisture variability stations, as well as high terrain/high moisture variability stations. We will present results from further testing along with a timeline for the transition of the GPS-Met DAPS to an operational commercial service.
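The ZTD-to-PWV step that the success metric hinges on is standard enough to sketch: subtract a surface-pressure-based hydrostatic delay (Saastamoinen) from ZTD and scale the remainder by a mean-temperature-dependent factor. The constants below are commonly cited Bevis-style values and all inputs are illustrative; the CRADA software's exact constants and models may differ.

```python
import numpy as np

def pwv_from_ztd(ztd_m, pressure_hpa, lat_deg, height_m, tm_kelvin):
    # hydrostatic (dry) delay in meters, Saastamoinen model
    zhd = 0.0022768 * pressure_hpa / (
        1 - 0.00266*np.cos(2*np.radians(lat_deg)) - 0.00028*height_m/1000.0)
    zwd = ztd_m - zhd                          # wet delay, m
    k2p, k3 = 22.1, 3.739e5                    # K/hPa, K^2/hPa (assumed constants)
    rho_w, R_v = 1000.0, 461.5                 # kg/m^3, J/(kg*K)
    pwv_m = 1e8 * zwd / (rho_w * R_v * (k2p + k3/tm_kelvin))
    return pwv_m * 1000.0                      # PWV in mm

print(pwv_from_ztd(ztd_m=2.40, pressure_hpa=1013.0, lat_deg=40.0,
                   height_m=0.0, tm_kelvin=270.0))   # ~14 mm
```

The sensitivity visible here, roughly 6.5 mm of ZTD per 1 mm of PWV, is why the 10 mm ZTD criterion maps to the 1.5 mm PWV criterion quoted in the abstract.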
ARAGO: a robotic observatory for the variable sky
NASA Astrophysics Data System (ADS)
Boer, Michel; Acker, Agnes; Atteia, Jean-Luc; Buchholtz, Gilles; Colas, Francois; Deleuil, Magali; Dennefeld, Michel; Desert, Jean-Michel; Dolez, Noel; Eysseric, J.; Ferlet, Roger; Ferrari, Marc; Jean, Pierre; Klotz, Alain; Kouach, Driss; Lecavelier des Etangs, Alain; Lemaitre, Gerard R.; Marcowith, Alexandre; Marquette, Jean-Babtiste; Meunier, Jean-Pierre; Mochkovitch, Robert; Pain, Reynald; Pares, Laurent; Pinna, Henri; Pinna, Roger; Provost, Lionel; Roques, Sylvie; Schneider, Jean; Sivan, Jean-Pierre; Soubiran, Caroline; Thiebaut, Carole; Vauclair, Gerard; Verchere, Richard; Vidal-Madjar, Alfred
2002-12-01
We present the Advanced Robotic Agile Observatory (ARAGO), a project for a large variability survey of the sky in the range 10⁻⁸ Hz (about one cycle per year) to 1 Hz. Among its scientific objectives are the detection of cosmic gamma-ray bursts, both on alert and serendipitously, orphan afterglows, extrasolar planets, AGNs, quasar microlensing, variable and flare stars, trans-Neptunian asteroids, Earth-grazers, orbital debris, etc. A large Education and Public Outreach program will be an important part of the project. The telescope itself will be made of Silicon Carbide, allowing, among other advantages, a very light weight and agile capabilities. ARAGO will be fully autonomous, i.e. there will be no human intervention from the observation request to the data processing and result dissemination, nor to assist night or day operations. ARAGO will start routine observation by mid-2005.
Jiménez, L; Angulo, V; Caparrós, S; Ariza, J
2007-12-01
The influence of operational variables in the pulping of vine shoots with ethanolamine [viz. temperature (155-185 degrees C), cooking time (30-90 min) and ethanolamine concentration (50-70% v/v)] on the properties of the resulting pulp (viz. yield, kappa index, viscosity and drainability) was studied. A central composite factorial design was used in conjunction with the software BMDP and ANFIS Edit Matlab 6.5 to develop polynomial and fuzzy neural models that reproduced the experimental results of the dependent variables with errors less than 10%. Both types of models are therefore effective for simulating the ethanolamine pulping process. Based on the proposed equations, the best choice is to use values of the operational variables resulting in near-optimal pulp properties while saving energy and immobilized capital on industrial facilities by using lower temperatures and shorter processing times. One combination leading to near-optimal properties with reduced costs is a temperature of 180 degrees C and an ethanolamine concentration of 60% for 60 min, which yields pulp with a viscosity 6.13% lower than the maximum value (932.8 ml/g) and a drainability 5.49% lower than the maximum value (71 °SR).
NASA Astrophysics Data System (ADS)
Pei, Ji; Wang, Wenjie; Yuan, Shouqi; Zhang, Jinfeng
2016-09-01
In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process that considers efficiencies under both 1.0Qd and 1.4Qd is proposed. Three parameters, namely, the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are then constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation is in good agreement with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. The comparison of the inner flow between the original pump and the optimized one illustrates the improvement in performance. The optimization process can provide a useful reference for performance improvement of other pumps, and even for reduction of pressure fluctuations.
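The optimization chain in the abstract (design of experiments, surrogate fit, swarm search) can be sketched end to end; here a cheap synthetic function stands in for the CFX efficiency evaluations, and the bounds, weights, and swarm settings are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc

lb = np.array([10.0, 20.0, 90.0])      # b2 (mm), beta2 (deg), phi (deg): assumed bounds
ub = np.array([16.0, 32.0, 130.0])

def cfd_efficiency(X):                  # stand-in for the two CFD operating points
    b2, beta2, phi = X.T
    eta10 = 80 - 0.20*(b2-13)**2 - 0.05*(beta2-26)**2 - 0.002*(phi-110)**2
    eta14 = 78 - 0.10*(b2-14)**2 - 0.04*(beta2-28)**2 - 0.003*(phi-115)**2
    return 0.5*eta10 + 0.5*eta14        # weighted two-point objective

# 1) Latin-hypercube design of experiments
X = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(30), lb, ub)
y = cfd_efficiency(X)

# 2) quadratic response-surface surrogate
def features(X):
    b2, beta2, phi = X.T
    return np.column_stack([np.ones(len(X)), b2, beta2, phi,
                            b2**2, beta2**2, phi**2, b2*beta2, b2*phi, beta2*phi])
coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda X: features(np.atleast_2d(X)) @ coef

# 3) particle swarm search on the surrogate
rng = np.random.default_rng(1)
pos = rng.uniform(lb, ub, (40, 3)); vel = np.zeros_like(pos)
pbest, pval = pos.copy(), surrogate(pos)
for _ in range(200):
    gbest = pbest[np.argmax(pval)]
    vel = 0.7*vel + 1.5*rng.random((40, 1))*(pbest-pos) + 1.5*rng.random((40, 1))*(gbest-pos)
    pos = np.clip(pos + vel, lb, ub)
    val = surrogate(pos)
    better = val > pval
    pbest[better], pval[better] = pos[better], val[better]
print(pbest[np.argmax(pval)])           # best (b2, beta2, phi) on the surrogate
```

The point of the surrogate is that the swarm's thousands of evaluations hit the cheap polynomial, not the expensive CFD solver, which is only called for the sampled designs.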
Navigating a Mobile Robot Across Terrain Using Fuzzy Logic
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Howard, Ayanna; Bon, Bruce
2003-01-01
A strategy for autonomous navigation of a robotic vehicle across hazardous terrain involves the use of a measure of traversability of terrain within a fuzzy-logic conceptual framework. This navigation strategy requires no a priori information about the environment. Fuzzy logic was selected as a basic element of this strategy because it provides a formal methodology for representing and implementing a human driver's heuristic knowledge and operational experience. Within a fuzzy-logic framework, the attributes of human reasoning and decision-making can be formulated by simple IF (antecedent), THEN (consequent) rules coupled with easily understandable and natural linguistic representations. The linguistic values in the rule antecedents convey the imprecision associated with measurements taken by sensors onboard a mobile robot, while the linguistic values in the rule consequents represent the vagueness inherent in the reasoning processes to generate the control actions. The operational strategies of the human expert driver can be transferred, via fuzzy logic, to a robot-navigation strategy in the form of a set of simple conditional statements composed of linguistic variables. These linguistic variables are defined by fuzzy sets in accordance with user-defined membership functions. The main advantages of a fuzzy navigation strategy lie in the ability to extract heuristic rules from human experience and to obviate the need for an analytical model of the robot navigation process.
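A toy fragment of such a rule base is sketched below: triangular membership functions for the linguistic values, two IF-THEN rules combining a traversability index with obstacle distance, and centroid defuzzification of the recommended speed. The breakpoints and rules are invented, not those of the cited strategy.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a)/(b - a + 1e-9), (c - x)/(c - b + 1e-9)), 0.0)

def recommend_speed(traversability, obstacle_dist):
    # fuzzify the sensor-derived inputs into linguistic values
    trav = {"LOW": tri(traversability, 0, 0, 0.5),
            "HIGH": tri(traversability, 0.5, 1, 1)}
    dist = {"NEAR": tri(obstacle_dist, 0, 0, 2.0),
            "FAR":  tri(obstacle_dist, 1.0, 3.0, 3.0)}
    # output fuzzy sets on a discretized speed axis
    speed_axis = np.linspace(0, 1, 101)
    slow = tri(speed_axis, 0, 0, 0.5)
    fast = tri(speed_axis, 0.5, 1, 1)
    # rules: IF trav HIGH AND dist FAR THEN FAST; IF trav LOW OR dist NEAR THEN SLOW
    agg = np.maximum(np.minimum(trav["HIGH"], dist["FAR"]) * fast,
                     np.maximum(trav["LOW"], dist["NEAR"]) * slow)
    return (speed_axis * agg).sum() / (agg.sum() + 1e-9)   # centroid defuzzification

print(recommend_speed(traversability=0.8, obstacle_dist=2.5))  # ~0.83, i.e. near FAST
```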
Investigation of Recombination Processes In A Magnetized Plasma
NASA Technical Reports Server (NTRS)
Chavers, Greg; Chang-Diaz, Franklin; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
Interplanetary travel requires propulsion systems that can provide high specific impulse (Isp) while also having sufficient thrust to rapidly accelerate large payloads. One such propulsion system is the Variable Specific Impulse Magnetoplasma Rocket (VASIMR), which creates, heats, and exhausts plasma to provide variable thrust and Isp, optimally meeting the mission requirements. A large fraction of the energy used to create the plasma is frozen in the exhaust in the form of ionization energy. This loss mechanism is common to all electromagnetic plasma thrusters and has an impact on their efficiency. When the device operates at high Isp, where the exhaust kinetic energy is high compared to the ionization energy, the frozen flow component is of little consequence; however, at low Isp, the effect of the frozen flow may be important. If some of this energy could be recovered through recombination processes and re-injected as neutral kinetic energy, the efficiency of VASIMR in its low-Isp/high-thrust mode may be improved. In this operating regime, the ionization energy is a large portion of the total plasma energy. An experiment is being conducted to investigate the possibility of recovering some of the energy used to create the plasma. This presentation will cover the progress and status of the experiment involving surface recombination of the plasma.
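The abstract's efficiency argument is easy to quantify: the ionization energy invested per ion is fixed, while the directed kinetic energy per ion scales with the square of Isp. The sketch below assumes argon propellant for illustration and compares two operating points.

```python
# Frozen-flow share of per-ion exhaust energy at two Isp operating points.
g0 = 9.81
m_ion = 39.95 * 1.6605e-27        # argon ion mass, kg (assumed propellant)
E_ion = 15.76 * 1.602e-19         # first ionization energy of argon, J

for isp in (1000.0, 3000.0):      # low-thrust vs high-Isp operating points
    v = isp * g0                  # exhaust velocity, m/s
    ke = 0.5 * m_ion * v**2       # directed kinetic energy per ion, J
    frozen = E_ion / (ke + E_ion)
    print(f"Isp={isp:.0f} s: frozen-flow fraction ~ {frozen:.0%}")
# ~44% at 1000 s but only ~8% at 3000 s, matching the low-Isp concern.
```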
Understanding Skill in EVA Mass Handling. Volume 2; Empirical Investigation
NASA Technical Reports Server (NTRS)
Riccio, Gary; McDonald, Vernon; Peters, Brian; Layne, Charles; Bloomberg, Jacob
1997-01-01
In this report we describe the details of our empirical protocol for investigating skill in extravehicular mass handling using NASA's principal mass handling simulator, the precision air bearing floor. Contents of this report include a description of the necessary modifications to the mass handling simulator, the choice of task, and the description of an operationally relevant protocol. Our independent variables are presented in the context of the specific operational issues they were designed to simulate. The explanation of our dependent variables focuses on the specific data processing procedures used to transform data from common laboratory instruments into measures that are relevant to a special class of nested control systems (discussed in Volume 1): manual interactions between an individual and the substantial environment. The data reduction is explained in the context of the theoretical foundation described in Volume 1. Finally, as a preface to the presentation of the empirical data in Volume 3 of this report series, a set of detailed hypotheses is presented.
Matias-Guiu, Pau; Rodríguez-Bencomo, Juan José; Orriols, Ignacio; Pérez-Correa, José Ricardo; López, Francisco
2016-12-15
The organoleptic quality of wine distillates depends on the raw materials and the distillation process. Previous work has shown that rectification columns in batch distillation with a fixed reflux rate are useful to obtain distillates or distillate fractions with enhanced organoleptic characteristics. This study explores variable reflux rate operating strategies to increase the levels of terpenic compounds in specific distillate fractions and thereby emphasize their floral aroma. Based on chemical and sensory analyses, two distillate heart sub-fractions obtained with the best operating strategy found were compared with a distillate obtained in a traditional alembic. Results showed that a drastic reduction of the reflux rate at an early stage of the heart cut produced a distillate heart sub-fraction with a higher concentration of terpenic compounds and lower levels of negative aroma compounds. Therefore, this sub-fraction presented a much more noticeable floral aroma than the distillate obtained with a traditional alembic. Copyright © 2016 Elsevier Ltd. All rights reserved.
Players' perceptions of accountability factors in secondary school sports settings.
Hastie, P A
1993-06-01
The purpose of this study was to gauge the extent to which students believed that the accountability strategies employed by their coaches had significant effects on their involvement in sports training sessions. Questionnaire data from 235 secondary school athletes were analyzed using linear structural relations to test a model of accountability hypothesized as operating in these coaching settings. The accountability strategy of active instruction was found to be a variable that significantly affected the students' valuing of their coaches as well as their task involvement. However, the rewards/consequences variable was not found to be a predictor of valuing or task involvement, suggesting that these athletes seemed more task oriented than reliant on external sanctions. The results of this study can only be generalized to team sport settings. Detailed examination needs to be made of the processes through which accountability factors operate for other contexts, including individual sports and competitive levels. Further research could also be undertaken into gender differences, especially in relation to the gender of coaches.
Status and Evaluation of Microwave Furnace Capabilities at NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Lizcano, Maricela; Mackey, Jonathan A.
2014-01-01
The microwave (MW) furnace is a HY-Tech Microwave Systems 2 kW, 2.45 GHz single-mode microwave applicator operating in continuous wave (CW) mode with variable power. It is located in Cleveland, Ohio at NASA Glenn Research Center. Until recently, the furnace capabilities had not been fully realized due to an unknown failure that damaged critical furnace components. Although the causes of the problems were unknown, an assessment of the furnace itself indicated that the operational failure may have been partially caused by power quality. This report summarizes the status of the MW furnace and evaluates its capabilities in materials processing.
Factors contributing to variability in larval ingress of Atlantic menhaden, Brevoortia tyrannus
NASA Astrophysics Data System (ADS)
Lozano, C.; Houde, E. D.
2013-02-01
Annual recruitment levels of age-0 juvenile Atlantic menhaden to Chesapeake Bay, which historically supported >65% of coastwide recruitment, have been consistently low since the 1980s. Diminished larval supply to the Bay is one hypothesized explanation. In a three-year ichthyoplankton survey at the Chesapeake Bay mouth, abundance of ingressing larvae varied nine-fold among years. Larvae were most abundant in 2007-2008 and less abundant in 2005-2006 and 2006-2007. High month-to-month variability in larval concentrations was attributable primarily to seasonality of occurrences. There was no defined spatial pattern in distribution of larvae across the 18-km-wide Bay mouth, but larvae at the south side were longer and older on average than larvae at the middle and north side. Environmental variables measured at the times of larval collections were not correlated consistently with temporal and spatial variability in abundance of larvae at ingress, highlighting complexity and suggesting that abundance may be controlled by processes occurring offshore during the pre-ingress phase. Moreover, the substantial differences in inter-annual abundances of larvae at the Bay mouth were not concordant with subsequent abundances of age-0 juveniles in the three survey years, indicating that important processes affecting recruitment of Atlantic menhaden operate after ingress, during the larval to juvenile transition stage.
Peterson, M A; Gibson, B S
1994-11-01
In previous research, replicated here, we found that some object recognition processes influence figure-ground organization. We have proposed that these object recognition processes operate on edges (or contours) detected early in visual processing, rather than on regions. Consistent with this proposal, influences from object recognition on figure-ground organization were previously observed in both pictures and stereograms depicting regions of different luminance, but not in random-dot stereograms, where edges arise late in processing (Peterson & Gibson, 1993). In the present experiments, we examined whether or not two other types of contours--outlines and subjective contours--enable object recognition influences on figure-ground organization. For both types of contours we observed a pattern of effects similar to that originally obtained with luminance edges. The results of these experiments are valuable for distinguishing between alternative views of the mechanisms mediating object recognition influences on figure-ground organization. In addition, in both Experiments 1 and 2, fixated regions were seen as figure longer than nonfixated regions, suggesting that fixation location must be included among the variables relevant to figure-ground organization.
NASA Astrophysics Data System (ADS)
Imbrogno, Stano; Rinaldi, Sergio; Raso, Antonio; Bordin, Alberto; Bruschi, Stefania; Umbrello, Domenico
2018-05-01
Additive Manufacturing techniques are gaining more and more interest in various industrial fields due to the possibility of drastically reducing material waste during production, revolutionizing the standard schemes and strategies of manufacturing processes. However, the shapes of the metal parts produced frequently do not satisfy the tolerance and surface quality requirements. During the design phase, finite element simulation is a fundamental tool to help engineers choose the most suitable process parameters, especially in manufacturing processes, in order to produce products of high quality. The aim of this work is to develop a 3D finite element model of a semi-finishing turning operation on Ti6Al4V produced via Direct Metal Laser Sintering (DMLS). A customized user subroutine was built to model the mechanical behavior of the material under machining in order to predict the main fundamental variables such as cutting forces and temperature. Moreover, the machining-induced alterations are also studied with the finite element model developed.
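The abstract does not name the constitutive law implemented in the user subroutine; a Johnson-Cook flow stress is a common choice for Ti6Al4V machining models, so the sketch below uses it with literature-typical coefficients that should be treated as assumptions, not the authors' calibrated values.

```python
import math

# Johnson-Cook parameters for Ti6Al4V (assumed, literature-typical values)
A, B, n = 862e6, 331e6, 0.34            # Pa, Pa, strain-hardening exponent
C, eps0 = 0.012, 1.0                    # strain-rate sensitivity, reference rate 1/s
m, T_room, T_melt = 0.8, 25.0, 1660.0   # thermal softening exponent, deg C

def jc_flow_stress(strain, strain_rate, temp_c):
    """sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m)."""
    T_star = (temp_c - T_room) / (T_melt - T_room)
    return (A + B*strain**n) * (1 + C*math.log(strain_rate/eps0)) * (1 - T_star**m)

# e.g., a point in the shear zone during semi-finishing turning:
print(f"{jc_flow_stress(0.5, 1e4, 600.0)/1e6:.0f} MPa")   # ~700 MPa
```

A machining subroutine evaluates a relation of this kind at every integration point each increment, which is how the cutting force and temperature predictions mentioned in the abstract are driven.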
Automated segmentation of three-dimensional MR brain images
NASA Astrophysics Data System (ADS)
Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee
2006-03-01
Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, other organs, etc.), and spinal cord restoration. In pre-processing, we perform adaptive thresholding which takes into account the variable intensities of MR brain images corresponding to various image acquisition conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord truncated during the previous processes. Experiments were performed with fifteen 8-bit gray-scale 3D MR brain image sets. Experimental results show the proposed algorithm is fast and provides robust and satisfactory results.
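A minimal sketch of the plane-wise pipeline is shown below: a crude adaptive threshold per slice, 2D morphological clean-up, largest-component selection, and a voxelwise OR across the three orientations. Real skull and meninges removal requires more anatomy-aware steps than shown; the threshold rule and the random volume are placeholders.

```python
import numpy as np
from scipy import ndimage

def segment_plane_stack(vol, axis):
    """Threshold and morphologically clean each 2D slice along one axis."""
    mask = np.zeros(vol.shape, dtype=bool)
    for i in range(vol.shape[axis]):
        sl = np.take(vol, i, axis=axis)
        th = sl[sl > 0].mean() if (sl > 0).any() else 0   # crude adaptive threshold
        m = ndimage.binary_opening(sl > th, iterations=2)  # remove thin bright debris
        m = ndimage.binary_fill_holes(m)
        lab, nlab = ndimage.label(m)                       # keep largest component
        if nlab:
            m = lab == (np.bincount(lab.ravel())[1:].argmax() + 1)
        idx = [slice(None)] * vol.ndim
        idx[axis] = i
        mask[tuple(idx)] = m
    return mask

vol = np.random.rand(64, 64, 64)          # placeholder for a 3D MR volume
brain = (segment_plane_stack(vol, 0) |    # sagittal
         segment_plane_stack(vol, 1) |    # coronal
         segment_plane_stack(vol, 2))     # axial, combined with a voxelwise OR
```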
López, Alejandro; Coll, Andrea; Lescano, Maia; Zalazar, Cristina
2017-05-05
In this work, the suitability of the UV/H2O2 process for the degradation of a commercial herbicides mixture was studied. Glyphosate, the herbicide most widely used in the world, was mixed with other herbicides that have residual activity, such as 2,4-D and atrazine. Modeling of the process response with respect to specific operating conditions, namely the initial pH and the initial H2O2-to-total organic carbon molar ratio, was assessed by response surface methodology (RSM). Results have shown that a second-order polynomial regression model could well describe and predict the system behavior within the tested experimental region. It also correctly explained the variability in the experimental data. Experimental values were in good agreement with the modeled ones, confirming the significance of the model and highlighting the success of RSM for UV/H2O2 process modeling. Phytotoxicity evolution throughout the photolytic degradation process was checked through germination tests, indicating that the phytotoxicity of the herbicides mixture was significantly reduced after the treatment. The end point for the treatment at the operating conditions for maximum TOC conversion was also identified.
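The RSM step reduces to fitting a second-order polynomial over the two factors and locating its stationary point; the central-composite design points and responses below are invented stand-ins for the paper's experiments.

```python
import numpy as np

# coded factors: x1 = initial pH, x2 = initial H2O2-to-TOC molar ratio
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [0, 0],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41]], float)  # central composite
y = np.array([42, 55, 58, 61, 70, 69, 48, 60, 50, 63], float)        # TOC conversion, %

def quad_features(X):
    x1, x2 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x1*x2, x1**2, x2**2])

b, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
print("model coefficients:", b.round(2))

# stationary point of the fitted surface (candidate optimum in coded units):
# solve grad(y) = 0, i.e. [[2*b4, b3], [b3, 2*b5]] x = -[b1, b2]
H = np.array([[2*b[4], b[3]], [b[3], 2*b[5]]])
x_opt = np.linalg.solve(H, -b[1:3])
print("stationary point:", x_opt.round(2))
```

Checking the sign of the quadratic terms (here both negative, so the stationary point is a maximum) is what justifies reading the point as the optimal operating condition.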
Multidimensional Profiling of Task Stress States for Human Factors: A Brief Review.
Matthews, Gerald
2016-09-01
This article advocates multidimensional assessment of task stress in human factors and reviews the use of the Dundee Stress State Questionnaire (DSSQ) for evaluation of systems and operators. Contemporary stress research has progressed from an exclusive focus on environmental stressors to transactional perspectives on the stress process. Performance impacts of stress reflect the operator's dynamic attempts to understand and cope with task demands. Multidimensional stress assessments are necessary to gauge the different forms of system-operator interaction. This review discusses the theoretical and practical use of the DSSQ in evaluating multidimensional patterns of stress response. It presents psychometric evidence for the multidimensional perspective and illustrative profiles of subjective state response to task stressors and environments. Evidence is also presented on stress state correlations with related variables, including personality, stress process measures, psychophysiological response, and objective task performance. Evidence supports the validity of the DSSQ as a task stress measure. Studies of various simulated environments show that different tasks elicit different profiles of stress state response. Operator characteristics such as resilience predict individual differences in state response to stressors. Structural equation modeling may be used to understand performance impacts of stress states. Multidimensional assessment affords insight into the stress process in a variety of human factors contexts. Integrating subjective and psychophysiological assessment is a priority for future research. Stress state measurement contributes to evaluating system design, countermeasures to stress and fatigue, and performance vulnerabilities. It may also support personnel selection and diagnostic monitoring of operators. © 2016, Human Factors and Ergonomics Society.
SUBTASK 2.19 – OPERATIONAL FLEXIBILITY OF CO2 TRANSPORT AND STORAGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Melanie; Schlasner, Steven; Sorensen, James
2014-12-31
Carbon dioxide (CO2) is produced in large quantities during electricity generation and by industrial processes. These CO2 streams vary in terms of both composition and mass flow rate, sometimes substantially. The impact of a varying CO2 stream on pipeline and storage operation is not fully understood in terms of either operability or infrastructure robustness. This study was performed to summarize basic background from the literature on the topic of operational flexibility of CO2 transport and storage, but the primary focus was on compiling real-world lessons learned about flexible operation of CO2 pipelines and storage from both large-scale field demonstrations and commercial operating experience. Modeling and pilot-scale results of research in this area were included to illustrate some of the questions that exist relative to operation of carbon capture and storage (CCS) projects with variable CO2 streams. It is hoped that this report’s real-world findings provide readers with useful information on the topic of transport and storage of variable CO2 streams. The real-world results were obtained from two sources. The first source consisted of five full-scale, commercial transport–storage projects: Sleipner, Snøhvit, In Salah, Weyburn, and Illinois Basin–Decatur. These scenarios were reviewed to determine the information that is available about CO2 stream variability/intermittency on these demonstration-scale projects. The five projects all experienced mass flow variability or an interruption in flow. In each case, pipeline and/or injection engineers were able to accommodate any issues that arose. Significant variability in composition has not been an issue at these five sites. The second source of real-world results was telephone interviews conducted with experts in CO2 pipeline transport, injection, and storage during which commercial anecdotal information was acquired to augment that found during the literature search of the five full-scale projects. The experts represented a range of disciplines and hailed from North America and Europe. Major findings of the study are that compression and transport of CO2 for enhanced oil recovery (EOR) purposes in the United States has shown that impurities are not likely to cause transport problems if CO2 stream composition standards are maintained and pressures are kept at 10.3 MPa or higher. Cyclic, or otherwise intermittent, CO2 supplies historically have not impacted in-field distribution pipeline networks, wellbore integrity, or reservoir conditions. The U.S. EOR industry has demonstrated that it is possible to adapt to variability and intermittency in CO2 supply through flexible operation of the pipeline and geologic storage facility. This CO2 transport and injection experience represents knowledge that can be applied in future CCS projects. A number of gaps in knowledge were identified that may benefit from future research and development, further enhancing the possibility for widespread application of CCS. This project was funded through the Energy & Environmental Research Center–U.S. Department of Energy Joint Program on Research and Development for Fossil Energy-Related Resources Cooperative Agreement No. DE-FC26-08NT43291. Nonfederal funding was provided by the IEA Greenhouse Gas R&D Programme.
NASA Astrophysics Data System (ADS)
Wi, S.; Freeman, S.; Brown, C.
2017-12-01
This study presents a general approach to developing computational models of human-hydrologic systems where human modification of hydrologic surface processes is significant or dominant. A river basin system is represented by a network of human-hydrologic response units (HHRUs) identified based on locations where river regulation occurs (e.g., reservoir operation and diversions). Natural and human processes in HHRUs are simulated in a holistic framework that integrates component models representing rainfall-runoff, river routing, reservoir operation, flow diversion and water use processes. We illustrate the approach in a case study of the Cutzamala water system (CWS) in Mexico, a complex inter-basin water transfer system supplying the Mexico City Metropolitan Area (MCMA). The human-hydrologic system model for CWS (CUTZSIM) is evaluated in terms of streamflow and reservoir storage measured across the CWS, as well as water supplied to the MCMA. The CUTZSIM improves the representation of hydrology and river-operation interaction and, in so doing, advances evaluation of system-wide water management consequences under altered climatic and demand regimes. The integrated modeling framework enables evaluation and simulation of model errors throughout the river basin, including errors in representation of the human component processes. Heretofore, model error evaluation, predictive error intervals and the resultant improved understanding have been limited to hydrologic processes. The general framework represents an initial step towards fuller understanding and prediction of the many and varied processes that determine the hydrologic fluxes and state variables in real river basins.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krad, Ibrahim; Ibanez, Eduardo; Ela, Erik
2015-10-19
The recent increased interest in utilizing variable generation (VG) resources such as wind and solar in power systems has motivated investigations into new operating procedures. Although these resources provide desirable value to a system (e.g., no fuel costs or emissions), interconnecting them provides unique challenges. Their variable, non-controllable nature in particular requires significant attention, because it directly results in increased power system variability and uncertainty. One way to handle this is via new operating reserve schemes. Operating reserves provide upward and downward generation and ramping capacity to counteract uncertainty and variability prior to their realization. For instance, uncertainty and variability in real-time dispatch can be accounted for in the hour-ahead unit commitment. New operating reserve methodologies that specifically account for the increased variability and uncertainty caused by VG are currently being investigated and developed by academia and industry. This paper examines one method inspired by the new operating reserve product being proposed by the California Independent System Operator. The method is based on examining the potential ramping requirements at any given time and enforcing those requirements via a reserve demand curve in the market-clearing optimization as an additional ancillary service product.
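As a rough illustration of how such a reserve demand curve enters a market-clearing optimization, the sketch below adds a priced ramping-reserve shortfall variable to a two-generator dispatch LP. All data (costs, capacities, the 500 $/MWh demand-curve price) are invented, and the formulation is far simpler than the CAISO product described above.

```python
# Minimal sketch: a ramping-reserve demand curve in a market-clearing LP.
# Hypothetical data; the actual CAISO formulation is more detailed.
import numpy as np
from scipy.optimize import linprog

demand = 120.0      # MW energy demand
ramp_req = 30.0     # MW flexible-ramping requirement
penalty = 500.0     # $/MWh demand-curve price for reserve shortfall

# Decision vector x = [g1, g2, r1, r2, shortfall]
cost = np.array([20.0, 35.0, 2.0, 3.0, penalty])   # $/MWh

A_eq = np.array([[1, 1, 0, 0, 0]])                 # energy balance
b_eq = np.array([demand])

A_ub = np.array([
    [1, 0, 1, 0,  0],       # g1 + r1 <= cap1
    [0, 1, 0, 1,  0],       # g2 + r2 <= cap2
    [0, 0, -1, -1, -1],     # r1 + r2 + shortfall >= ramp_req
])
b_ub = np.array([100.0, 80.0, -ramp_req])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 5)
g1, g2, r1, r2, s = res.x
print(f"dispatch {g1:.1f}/{g2:.1f} MW, reserve {r1:.1f}/{r2:.1f} MW, "
      f"shortfall {s:.1f} MW")
```

Because the shortfall variable is priced rather than forbidden, the reserve requirement behaves as a demand curve: the market procures ramping capability only up to the point where its marginal cost exceeds the penalty.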
A Practice-Oriented Bifurcation Analysis for Pulse Energy Converters. Part 2: An Operating Regime
NASA Astrophysics Data System (ADS)
Kolokolov, Yury; Monovskaya, Anna
The paper continues the discussion on bifurcation analysis for applications in practice-oriented solutions for pulse energy conversion systems (PEC-systems). Since a PEC-system represents a nonlinear object with a variable structure, the description of its dynamics evolution involves bifurcation analysis concepts. This entails resolving the conflict between the notions used to describe natural evolution (i.e. evolution of the operating process towards nonoperating processes and vice versa) and the notions used to describe a desirable artificial regime (i.e. an operating regime). We consider cause-effect relations in the sequence from nonlinear dynamics to output signal to operating characteristics, where these characteristics include stability and performance. Regularities of nonlinear dynamics should then be translated into regularities of the output signal dynamics, and then into an evolutionary picture of each operating characteristic. In order to make the translation without losses, we first take into account heterogeneous properties within the structures of the operating process in the parametrical (P-) and phase (X-) spaces, and analyze regularities of the operating stability and performance on a common basis by use of the modified bifurcation diagrams built in joint PX-space. Then, the correspondence between causes (degradation of the operating process stability) and effects (changes of the operating characteristics) is decomposed into three groups of abnormalities: conditionally unavoidable abnormalities (CU-abnormalities); conditionally probable abnormalities (CP-abnormalities); conditionally regular abnormalities (CR-abnormalities). Within each of these groups the evolutional homogeneity is retained. Afterwards, the resultant evolution of each operating characteristic is naturally aggregated through the superposition of cause-effect relations in accordance with each of the abnormalities. We demonstrate that the practice-oriented bifurcation analysis has fundamentally specific purposes and tools, just as computer-based bifurcation analysis and experimental bifurcation analysis do. That is why, from our viewpoint, it seems to be a rather novel direction in the general context of bifurcation analysis concepts. We believe that the discussion could be of interest for pioneering research intended for the design of promising systems of pulse energy conversion.
Trück, Johannes; Mitchell, Ruth; Thompson, Amber J; Morales-Aza, Begonia; Clutterbuck, Elizabeth A; Kelly, Dominic F; Finn, Adam; Pollard, Andrew J
2014-01-01
The ELISpot assay is used in vaccine studies for the quantification of antigen-specific memory B cells (B(MEM)), and can be performed using cryopreserved samples. The effects of cryopreservation on B(MEM) detection and the consistency of cultured ELISpot assays when performed by different operators or laboratories are unknown. In this study, blood was taken from healthy volunteers, and a cultured ELISpot assay was used to count B(MEM) specific for 2 routine vaccine antigens (diphtheria and tetanus toxoid). Results were assessed for intra- and inter-operator variation, and the effects of cryopreservation. Cryopreserved samples were shipped to a second laboratory in order to assess inter-laboratory variation. B(MEM) frequencies were very strongly correlated when comparing fresh and frozen samples processed by the same operator, and were also very strongly correlated when comparing 2 operators in the same laboratory. Results were slightly less consistent when samples were processed in different laboratories but correlation between the 2 measurements was still very strong. Although cell viability was reduced in some cryopreserved samples due to higher temperatures during transportation, B(MEM) could still be quantified. These results demonstrate the reproducibility of the ELISpot assay across operators and laboratories, and support the use of cryopreserved samples in future B(MEM) studies.
Ausserhofer, Dietmar; Rakic, Severin; Novo, Ahmed; Dropic, Emira; Fisekovic, Eldin; Sredic, Ana; Van Malderen, Greet
2016-06-01
We explored how selected 'positive deviant' healthcare facilities in Bosnia and Herzegovina approach the continuous development, adaptation, implementation, monitoring and evaluation of nursing-related standard operating procedures. Standardized nursing care is internationally recognized as a critical element of safe, high-quality health care; yet very little research has examined one of its key instruments: nursing-related standard operating procedures. Despite variability in Bosnia and Herzegovina's healthcare and nursing care quality, we assumed that some healthcare facilities would have developed effective strategies to elevate nursing quality and safety through the use of standard operating procedures. Guided by the 'positive deviance' approach, we used a multiple-case study design to examine a criterion sample of four facilities (two primary healthcare centres and two hospitals), collecting data via focus groups and individual interviews. In each studied facility, certification/accreditation processes were crucial to the initiation of continuous development, adaptation, implementation, monitoring and evaluation of nursing-related standard operating procedures. In one hospital and one primary healthcare centre, nurses working in advanced roles (i.e. quality coordinators) were responsible for developing and implementing nursing-related standard operating procedures. Across the four studied institutions, we identified a consistent approach to processes related to standard operating procedures. The certification/accreditation process is enabling necessary changes in institutions' organizational cultures, empowering nurses to take on advanced roles in improving the safety and quality of nursing care. Standardizing nursing procedures is key to improving the safety and quality of nursing care. Nursing and health policy action is needed in Bosnia and Herzegovina to establish a functioning institutional framework, including regulatory bodies, educational systems for developing nurses' capacities, and the inclusion of nursing-related standard operating procedures in certification/accreditation standards. © 2016 International Council of Nurses.
Combining Variables, Controlling Variables, and Proportions: Is There a Psychological Link?
ERIC Educational Resources Information Center
Lawson, Anton E.
1979-01-01
Investigated the degree of relationship among the performance of 28 seventh grade students on the following three formal operations tasks: chemical combinations, bending rods, and balance beam. Results show that task performance ranged widely from early concrete operational to fully operational. (HM)
High-Volume Production of Lightweight Multijunction Solar Cells
NASA Technical Reports Server (NTRS)
Youtsey, Christopher
2015-01-01
MicroLink Devices, Inc., has transitioned its 6-inch epitaxial lift-off (ELO) solar cell fabrication process into a manufacturing platform capable of sustaining large-volume production. This Phase II project improves the ELO process by reducing cycle time and increasing the yield of large-area devices. In addition, all critical device fabrication processes have transitioned to 6-inch production tool sets designed for volume production. An emphasis on automated cassette-to-cassette and batch processes minimizes operator dependence and cell performance variability. MicroLink Devices established a pilot production line capable of at least 1,500 6-inch wafers per month at greater than 80 percent yield. The company also increased the yield and manufacturability of the 6-inch reclaim process, which is crucial to reducing the cost of the cells.
Zare, Mohsen; Sagot, Jean-Claude; Roquelaure, Yves
2018-05-17
Industrial companies show a tendency to eliminate variation in operator strategies, particularly following implementation of lean principles. Companies believe that when operators perform the same prescribed tasks, they should execute them in the same manner (completing the same gestures and being exposed to the same risk factors). They attempt to achieve better product quality by standardizing work and reducing operational leeway. However, operators adjust and modify the ways they perform tasks to balance their abilities against the requirements of the job. This study aims to investigate the variability of exposure to physical risk factors within and between operators executing the same prescribed tasks. The Ergonomic Standard method was used to evaluate two workstations. Seven operators were observed thirty times across repeated cycles at these workstations. The results revealed variability of exposure to risk factors between and within operators in repeated executions of the same tasks. Individual characteristics and operators' strategies might generate this variability of exposure to risk factors, which may be an opportunity to reduce the risks of work-related musculoskeletal disorders (WR-MSDs). However, sometimes operators' strategies may cause overexposure to risk factors; operators most often adopt such strategies to undertake their tasks while reducing the workload.
NASA Astrophysics Data System (ADS)
Yao, Yuchen; Bao, Jie; Skyllas-Kazacos, Maria; Welch, Barry J.; Akhmetov, Sergey
2018-04-01
Individual anode current signals in aluminum reduction cells provide localized cell conditions in the vicinity of each anode, which contain more information than the conventionally measured cell voltage and line current. One common use of this measurement is to identify process faults that can cause significant changes in the anode current signals. While this method is simple and direct, it ignores the interactions between anode currents and other important process variables. This paper presents an approach that applies multivariate statistical analysis techniques to individual anode currents and other process operating data, for the detection and diagnosis of local process abnormalities in aluminum reduction cells. Specifically, since the Hall-Héroult process is time-varying with its process variables dynamically and nonlinearly correlated, dynamic kernel principal component analysis with moving windows is used. The cell is discretized into a number of subsystems, with each subsystem representing one anode and cell conditions in its vicinity. The fault associated with each subsystem is identified based on multivariate statistical control charts. The results show that the proposed approach is able to not only effectively pinpoint the problematic areas in the cell, but also assess the effect of the fault on different parts of the cell.
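The following sketch illustrates the general monitoring mechanics the abstract describes: kernel PCA is fit on a window of normal operating data and a Hotelling T² statistic flags abnormal samples. It is a minimal stand-in, not the authors' dynamic KPCA implementation; the kernel width, window size and component count are arbitrary, and in a moving-window scheme the model would be refit as the window slides.

```python
# Minimal sketch of kernel-PCA-based monitoring with a T^2 control statistic;
# each anode-current subsystem would run one such monitor on its own window.
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kpca(X, n_comp=3, gamma=0.1):
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one    # center in feature space
    lam, V = np.linalg.eigh(Kc)
    lam, V = lam[::-1][:n_comp], V[:, ::-1][:, :n_comp]
    V = V / np.sqrt(np.maximum(lam, 1e-12))       # normalized eigenvectors
    return dict(X=X, K=K, V=V, lam=lam / n, gamma=gamma)

def t2_statistic(model, x):
    """Hotelling T^2 of a new sample in the kernel principal subspace."""
    X = model["X"]
    k = rbf_kernel(x[None, :], X, model["gamma"]).ravel()
    kc = k - k.mean() - model["K"].mean(0) + model["K"].mean()
    t = model["V"].T @ kc                         # scores of the new sample
    return float(t @ (t / model["lam"]))

rng = np.random.default_rng(0)
window = rng.normal(size=(200, 5))                # normal operating data
model = fit_kpca(window)
fault = np.array([4.0, 0, 0, 0, 0])               # shifted anode-current sample
print(t2_statistic(model, rng.normal(size=5)), t2_statistic(model, fault))
```

A control limit for T² (e.g., from a chi-squared approximation or an empirical percentile of the training scores) would then define the multivariate control chart mentioned in the abstract.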
Transient thermal analysis for radioactive liquid mixing operations in a large-scaled tank
Lee, S. Y.; Smith, III, F. G.
2014-07-25
A transient heat balance model was developed to assess the impact of a Submersible Mixer Pump (SMP) on radioactive liquid temperature during the process of waste mixing and removal for the high-level radioactive materials stored in Savannah River Site (SRS) tanks. The model results will be mainly used to determine the SMP design impacts on the waste tank temperature during operations and to develop a specification for a new SMP design to replace existing long-shaft mixer pumps used during waste removal. The present model was benchmarked against test data obtained from tank measurements to examine the quantitative thermal response of the tank and to establish the reference conditions of the operating variables under no SMP operation. The results showed that the model predictions agreed with the measured waste temperatures to within about 10%.
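For intuition, a lumped-parameter transient heat balance of the kind described can be written as m·cp·dT/dt = Q_decay + n·Q_pump − UA·(T − T_amb); the sketch below integrates it with an explicit Euler step. All parameter values are illustrative placeholders, not SRS tank data.

```python
# Minimal lumped-parameter sketch of a tank heat balance with an added
# mixer-pump heat load; parameters are invented for illustration.
import numpy as np

m_cp    = 4.2e9     # J/K   thermal mass of the waste (illustrative)
UA      = 2.0e5     # W/K   overall loss coefficient to ambient
Q_decay = 3.0e5     # W     radioactive decay heat
Q_pump  = 1.5e5     # W     heat added by one submersible mixer pump
T_amb   = 25.0      # degC

def simulate(hours, n_pumps, T0=35.0, dt=600.0):
    steps = int(hours * 3600 / dt)
    T = np.empty(steps); T[0] = T0
    for k in range(steps - 1):
        q = Q_decay + n_pumps * Q_pump - UA * (T[k] - T_amb)
        T[k + 1] = T[k] + dt * q / m_cp     # explicit Euler step
    return T

# Final temperatures without mixers and with three mixer pumps running:
print(simulate(240, n_pumps=0)[-1], simulate(240, n_pumps=3)[-1])
```

The steady-state temperature is T_amb plus total heat load divided by UA, so the pump contribution appears directly as a temperature offset the SMP specification would need to bound.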
Basic principles of variable speed drives
NASA Technical Reports Server (NTRS)
Loewenthal, S. H.
1973-01-01
An understanding of the principles which govern variable speed drive operation is discussed for successful drive application. The fundamental factors of torque, speed ratio, and power as they relate to drive selection are discussed. The basic types of variable speed drives, their operating characteristics and their applications are also presented.
Using near infrared spectroscopy and heart rate variability to detect mental overload.
Durantin, G; Gagnon, J-F; Tremblay, S; Dehais, F
2014-02-01
Mental workload is a key factor influencing the occurrence of human error, especially during piloting and remotely operated vehicle (ROV) operations, where safety depends on the ability of pilots to act appropriately. In particular, excessively high or low mental workload can lead operators to neglect critical information. The objective of the present study is to investigate the potential of functional near infrared spectroscopy (fNIRS) - a non-invasive method of measuring prefrontal cortex activity - in combination with measurements of heart rate variability (HRV), to predict mental workload during a simulated piloting task, with particular regard to task engagement and disengagement. Twelve volunteers performed a computer-based piloting task in which they were asked to follow a dynamic target with their aircraft, a task designed to replicate key cognitive demands associated with real life ROV operating tasks. In order to cover a wide range of mental workload levels, task difficulty was manipulated in terms of processing load and difficulty of control - two critical sources of workload associated with piloting and remotely operating a vehicle. Results show that both fNIRS and HRV are sensitive to different levels of mental workload; notably, lower prefrontal activation as well as a lower LF/HF ratio at the highest level of difficulty, suggest that these measures are suitable for mental overload detection. Moreover, these latter measurements point toward the existence of a quadratic model of mental workload. Copyright © 2013 Elsevier B.V. All rights reserved.
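A common way to obtain the LF/HF ratio used above is to resample the RR-interval series evenly, estimate its power spectrum, and integrate the standard 0.04-0.15 Hz and 0.15-0.40 Hz bands. The sketch below is one minimal realization; the 4 Hz resampling rate and Welch segment length are assumptions, not this study's processing pipeline.

```python
# Minimal sketch of an LF/HF ratio from RR intervals (a common HRV
# workload index); band edges follow the usual convention.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_ratio(rr_ms, fs=4.0):
    t = np.cumsum(rr_ms) / 1000.0                   # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)  # even resampling
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(grid)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf / hf

rng = np.random.default_rng(1)
rr = 800 + 50 * rng.standard_normal(600)            # synthetic RR series (ms)
print(lf_hf_ratio(rr))
```

A drop in this ratio under high task difficulty is the pattern the study reports as a marker of potential overload.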
NASA Astrophysics Data System (ADS)
Yamashita, Takashi; Nakano, Daisuke; Mori, Masayuki; Maezawa, Koichi
2018-04-01
A resonant tunneling diode oscillator with a wide frequency variation range based on a novel MEMS resonator was proposed; it exploits the change in signal propagation velocity on a coplanar waveguide induced by a movable ground plane. First, we discussed the velocity modulation mechanism and clarified the importance of the dielectric constant of the substrate. Then, a prototype device oscillating in the 10 to 20 GHz frequency range was fabricated to demonstrate the basic operation. A large, continuous tuning of the oscillation frequency by about a factor of two was achieved with this device. This is promising for various applications including THz spectroscopy.
JIGSAW: Preference-directed, co-operative scheduling
NASA Technical Reports Server (NTRS)
Linden, Theodore A.; Gaw, David
1992-01-01
Techniques that enable humans and machines to cooperate in the solution of complex scheduling problems have evolved out of work on the daily allocation and scheduling of Tactical Air Force resources. A generalized, formal model of these applied techniques is being developed. It is called JIGSAW by analogy with the multi-agent, constructive process used when solving jigsaw puzzles. JIGSAW begins from this analogy and extends it by propagating local preferences into global statistics that dynamically influence the value- and variable-ordering decisions. The statistical projections also apply to abstract resources and time periods, allowing more opportunities to find a successful variable ordering by reserving abstract resources and deferring the choice of a specific resource or time period.
Real options valuation and optimization of energy assets
NASA Astrophysics Data System (ADS)
Thompson, Matthew
In this thesis we present algorithms for the valuation and optimal operation of natural gas storage facilities, hydro-electric power plants and thermal power generators in competitive markets. Real options theory is used to derive nonlinear partial integro-differential equations (PIDEs) for the valuation and optimal operating strategies of all types of facilities. The equations are designed to incorporate a wide class of spot price models that can exhibit the same time-dependent, mean-reverting dynamics and price spikes as those observed in most energy markets. Particular attention is paid to the operational characteristics of real energy assets. For natural gas storage facilities these characteristics include working gas capacities, variable deliverability and injection rates, and cycling limitations. For thermal power plants relevant operational characteristics include variable start-up times and costs, control response time lags, minimum generating levels, nonlinear output functions, structural limitations on ramp rates, and minimum up/down time restrictions. For hydro-electric units, head effects and environmental constraints are addressed. We illustrate the models with numerical examples of a gas storage facility, a hydro-electric pump storage facility and a thermal power plant. This PIDE framework is the first in the literature to achieve second-order accuracy in characterizing the operating states of hydro-electric and hydro-thermal power plants. The continuous state space representation derived in this thesis can therefore achieve far greater realism in terms of operating state specification than any other method in the literature to date. This thesis is also the first to allow for general continuous-time jump-diffusion processes in order to account for price spikes.
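A representative (not the thesis's exact) form of such a valuation PIDE for a storage asset, written for a mean-reverting jump-diffusion spot price S, inventory level I evolving as dI/dt = −c, and withdrawal/injection control c, is:

```latex
% Representative form only; sign conventions and the admissible-control
% set C(I) (deliverability, injection and cycling limits) vary by facility.
\partial_t V
+ \underbrace{\alpha(\mu - \ln S)\,S\,\partial_S V
  + \tfrac{1}{2}\sigma^2 S^2 \partial_S^2 V
  + \lambda \int_0^\infty \bigl[V(\eta S, I, t) - V(S, I, t)\bigr]\,\nu(\mathrm{d}\eta)}_{\text{integro-differential generator of the spot price}}
- rV
+ \max_{c \in C(I)} \bigl\{ c\,S - c\,\partial_I V \bigr\} = 0 .
```

The jump integral is what makes the equation non-local and allows the spike behaviour the abstract emphasizes; setting the jump intensity λ to zero recovers a standard mean-reverting diffusion model.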
Kraus, T W; Weber, W; Mieth, M; Funk, H; Klar, E; Herfarth, C
2000-03-01
Surgical hospitals can be seen as operational or even industrial production systems. Doctors have a major impact on both medical performance and costs. For active participation in the management process, knowledge of industrial controlling mechanisms is required. German hospitals currently receive no procedure-related financial revenues, such as prices or tariffs for defined medical treatment activities. Maximum clinical revenues are, furthermore, limited by principles of planned economy and can be increased only slightly by greater medical performance. Costs are the only target that can be autonomously influenced by the management. Operative controlling in hospitals aims at horizontal and vertical coordination of subunits and decentralization of process regulations. Hospital medical performance is not clearly defined, and its quantitative measurement is very problematic. Process-orientated clinical activities are not taken into account. A high percentage of hospital costs are fixed and can be influenced only by major structural interventions in the long term. Variable costs are primarily dependent on the quantity of clinical activities, but are also heavily influenced by patient structure (comorbidity and risk profile). The various forms of industrial cost calculation, such as internal budgeting, internal markets or flexible plan-cost balancing, cannot be directly applied in hospital management. Based on these analyses, current operational concepts and strategic trends are listed to describe cost-management options in hospitals, with a focus on the German health reforms.
The generic MESSy submodel TENDENCY (v1.0) for process-based analyses in Earth system models
NASA Astrophysics Data System (ADS)
Eichinger, R.; Jöckel, P.
2014-07-01
The tendencies of prognostic variables in Earth system models are usually only accessible, e.g. for output, as a sum over all physical, dynamical and chemical processes at the end of one time integration step. Information about the contribution of individual processes to the total tendency is lost, if no special precautions are implemented. The knowledge on individual contributions, however, can be of importance to track down specific mechanisms in the model system. We present the new MESSy (Modular Earth Submodel System) infrastructure submodel TENDENCY and use it exemplarily within the EMAC (ECHAM/MESSy Atmospheric Chemistry) model to trace process-based tendencies of prognostic variables. The main idea is the outsourcing of the tendency accounting for the state variables from the process operators (submodels) to the TENDENCY submodel itself. In this way, a record of the tendencies of all process-prognostic variable pairs can be stored. The selection of these pairs can be specified by the user, tailor-made for the desired application, in order to minimise memory requirements. Moreover, a standard interface allows access to the individual process tendencies by other submodels, e.g. for on-line diagnostics or for additional parameterisations, which depend on individual process tendencies. An optional closure test assures the correct treatment of tendency accounting in all submodels and thus serves to reduce the model's susceptibility to errors. TENDENCY is independent of the time integration scheme and therefore the concept is applicable to other model systems as well. Test simulations with TENDENCY show an increase of computing time for the EMAC model (in a setup without atmospheric chemistry) of 1.8 ± 1% due to the additional subroutine calls when using TENDENCY. Exemplary results reveal the dissolving mechanisms of the stratospheric tape recorder signal in height over time. The separation of the tendency of the specific humidity into the respective processes (large-scale clouds, convective clouds, large-scale advection, vertical diffusion and methane oxidation) shows that the upward propagating water vapour signal dissolves mainly because of the chemical and the advective contribution. The TENDENCY submodel is part of version 2.42 or later of MESSy.
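A minimal sketch of the tendency-accounting idea, outside any MESSy infrastructure: each process registers its tendency with a bookkeeper instead of silently incrementing the state, and a closure test verifies that the applied total equals the sum of the recorded parts. Variable names and magnitudes are invented.

```python
# Minimal sketch of process-based tendency accounting in the spirit of
# TENDENCY; not the actual MESSy interface.
import numpy as np

class TendencyRegistry:
    """Bookkeeper: submodels record tendencies instead of hiding them."""
    def __init__(self):
        self.records = {}                    # (process, variable) -> tendency

    def add(self, process, variable, tend):
        key = (process, variable)
        self.records[key] = self.records.get(key, 0.0) + tend

    def total(self, variable):
        return sum(t for (_, v), t in self.records.items() if v == variable)

reg = TendencyRegistry()
q = np.array([3.0, 2.0, 1.0, 0.5])           # specific humidity (g/kg)
reg.add("advection",  "q", np.array([ 1.0, -2.0, 0.0, 0.5]) * 1e-5)
reg.add("convection", "q", np.array([-0.5,  1.0, 0.2, 0.0]) * 1e-5)

dt = 1800.0                                  # time step (s)
q_new = q + dt * reg.total("q")              # integrate the summed tendency
# Closure test: the applied increment must equal the sum of recorded parts;
# in TENDENCY this catches submodels that alter state without reporting.
assert np.allclose((q_new - q) / dt, reg.total("q"))
print(q_new)
```

Because every process-variable pair is stored separately, per-process diagnostics (like the humidity decomposition described above) come for free from the registry.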
Thermoelectric power generator for variable thermal power source
Bell, Lon E; Crane, Douglas Todd
2015-04-14
Traditional power generation systems using thermoelectric power generators are designed to operate most efficiently for a single operating condition. The present invention provides a power generation system in which the characteristics of the thermoelectrics, the flow of the thermal power, and the operational characteristics of the power generator are monitored and controlled such that higher operating efficiencies and/or higher output powers can be maintained with variable thermal power input. Such a system is particularly beneficial in variable thermal power source systems, such as recovering power from the waste heat generated in the exhaust of combustion engines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jong Suk; Chen, Jun; Garcia, Humberto E.
2016-06-17
An RO (reverse osmosis) desalination plant is proposed as an effective FLR (flexible load resource) to be integrated into HES (hybrid energy systems) to support various types of ancillary services to the electric grid, under variable operating conditions. To study the dynamic (transient) behavior of such a system, among the various unit operations within HES, special attention is given here to the detailed dynamic modeling and control design of the RO desalination process with a spiral-wound membrane module. The model incorporates key physical phenomena that have been investigated individually into a dynamic integrated model framework. In particular, the solution-diffusion model modified with the concentration polarization theory is applied to predict RO performance over a large range of operating conditions. Simulation results involving several case studies suggest that an RO desalination plant, acting as a FLR, can provide operational flexibility to participate in energy management at the utility scale by dynamically optimizing the use of excess electrical energy. Here, the incorporation of an additional commodity (fresh water) produced from a FLR allows a broader range of HES operations for maximizing overall system performance and profitability. For the purpose of assessing the incorporation of health assessment into process operations, an online condition monitoring approach for RO membrane fouling supervision is addressed in the case study presented.
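The solution-diffusion flux calculation with film-theory concentration polarization mentioned above can be sketched as a small fixed-point iteration: the water flux sets the membrane-wall concentration, which sets the osmotic pressure, which feeds back into the flux. The permeabilities, mass-transfer coefficient and linearized osmotic coefficient below are illustrative, not the paper's values.

```python
# Minimal sketch of the solution-diffusion flux model with film-theory
# concentration polarization; constants are invented for illustration.
import numpy as np

A = 3.0e-12     # m/(s*Pa)  water permeability (assumed)
k = 2.0e-5      # m/s       channel mass-transfer coefficient (assumed)
pi_coef = 8.314 * 298 * 2 / 0.0585   # ~Pa per (kg/m3), rough van't Hoff

def water_flux(dP, c_feed):
    """Solve Jw = A*(dP - pi(c_m)) with c_m = c_feed*exp(Jw/k) iteratively."""
    Jw = A * dP                              # initial guess: no polarization
    for _ in range(100):
        c_m = c_feed * np.exp(Jw / k)        # film-theory wall concentration
        Jw_new = A * (dP - pi_coef * c_m)    # osmotic pressure ~ linear in c
        if abs(Jw_new - Jw) < 1e-12:
            break
        Jw = 0.5 * (Jw + Jw_new)             # damped fixed-point update
    return Jw, c_m

Jw, c_m = water_flux(dP=55e5, c_feed=35.0)   # 55 bar, ~seawater salinity
print(f"water flux {Jw * 3.6e6:.1f} L/m2/h, wall conc {c_m:.1f} kg/m3")
```

The exponential polarization factor is what couples flux and wall concentration nonlinearly, which is why flexible-load operation over a wide pressure range requires a dynamic rather than a fixed-point design model.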
NASA Astrophysics Data System (ADS)
Tisseyre, Bruno
2015-04-01
For more than 15 years, research projects have been conducted in the precision viticulture (PV) area around the world. These research projects have provided new insights into within-field variability in viticulture. Indeed, access to high spatial resolution data (remote sensing, embedded sensors, etc.) changes the knowledge we have of vineyard fields. In particular, the field, which was until now considered a homogeneous management unit, actually presents high spatial variability in terms of yield, vigour and quality. This knowledge will lead to (and is already causing) changes in how the vineyard and the quality of the harvest are managed at the within-field scale. From experimental results obtained in various countries of the world, the goal of the presentation is to provide figures on: - the spatial variability of the main parameters (yield, vigour, quality), and how this variability is organized spatially, - the temporal stability of the observed spatial variability and the potential link with environmental parameters like soil, topography, soil water availability, etc., - information sources available at a high spatial resolution conventionally used in precision agriculture that are likely to highlight this spatial variability (multi-spectral images, soil electrical conductivity, etc.) and the limitations that these information sources are likely to present in viticulture. Several strategies are currently being developed to take into account within-field variability in viticulture. They are based on the development of specific equipment, sensors, actuators and site-specific strategies with the aim of adapting vineyard operations at the within-field level. These strategies will be presented briefly in two ways: - site-specific operations (fertilization, pruning, thinning, irrigation, etc.) in order to counteract the effects of the environment and to obtain a final product with a controlled and consistent wine quality, - differential harvesting with the objective of taking advantage of the observed spatial variability to produce different qualities of wine. This latter approach tends to produce very different quality wines, which are then blended to control the final quality and/or marketed differently. These applications show that the environment and its spatial variability can be turned to advantage with the goal of controlling the final quality of the wine produced. Technologies to characterize the spatial variability of vine fields are currently evolving rapidly. They will significantly impact production methods and management strategies of the vineyard. In its last part, the presentation will summarize the technologies likely to impact knowledge and vineyard management at the field level, at the vineyard level or at the regional level. A brief overview of the needs in terms of information processing will also be given. A reflection on the difficulties that might limit the adoption of PV technologies will also be offered. Indeed, although very informative, PV entails high costs of information acquisition and data processing. Cost is one of the major obstacles to the dissemination of these tools and services to the majority of wine producers. In this context, the pooling of investments is a key requirement to make PV accessible to the greatest number of growers. Thus, to be adopted, PV must satisfy operational requirements not only at the field level but also throughout the whole production area (at the regional level). This working scale raises new scientific questions to be addressed.
The research on visual industrial robot which adopts fuzzy PID control algorithm
NASA Astrophysics Data System (ADS)
Feng, Yifei; Lu, Guoping; Yue, Lulin; Jiang, Weifeng; Zhang, Ye
2017-03-01
The control system of a six-degrees-of-freedom visual industrial robot based on multi-axis motion control cards and a PC was researched. To address the variable, non-linear characteristics of the industrial robot's servo system, an adaptive fuzzy PID controller was adopted, which achieved better control performance. In the vision system, a CCD camera was used to acquire signals and send them to a video processing card. After processing, the PC controls the motion of the six joints through the motion control cards. Experiments showed that the manipulator can operate with the machine tool and vision system to realize grasping, processing and verification functions. This has implications for the manufacturing of industrial robots.
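A minimal sketch of the fuzzy-adaptive PID idea follows: fuzzy memberships over the tracking error and its rate produce on-line corrections to the PID gains. The membership functions, rule weights and first-order plant are all invented for illustration; a real joint servo would use a tuned rule table and the actual drive dynamics.

```python
# Minimal sketch of a fuzzy-adaptive PID loop on a toy first-order plant.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_gain_delta(e, de):
    """Toy rule base: large |error| raises Kp, large |error rate| raises Kd."""
    neg, zero, pos = tri(e, -2, -1, 0), tri(e, -1, 0, 1), tri(e, 0, 1, 2)
    dKp = 0.5 * max(neg, pos) - 0.2 * zero      # big error -> stronger P action
    dKd = 0.3 * max(tri(de, -2, -1, 0), tri(de, 0, 1, 2))
    return dKp, dKd

Kp, Ki, Kd = 2.0, 0.5, 0.05
y, integ, e_prev, dt = 0.0, 0.0, 1.0, 0.01
for _ in range(3000):
    e = 1.0 - y                                  # unit setpoint
    de = (e - e_prev) / dt
    dKp, dKd = fuzzy_gain_delta(float(np.clip(e, -2, 2)),
                                float(np.clip(de, -2, 2)))
    integ += e * dt
    u = (Kp + dKp) * e + Ki * integ + (Kd + dKd) * de
    y += dt * (-y + u)                           # first-order plant y' = -y + u
    e_prev = e
print(round(y, 3))                               # settles near the setpoint
```

The adaptation keeps the loop aggressive during large transients and gentle near the setpoint, which is the usual motivation for fuzzy gain scheduling on nonlinear servo systems.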
Electrocoagulation of wastewater from almond industry.
Valero, David; Ortiz, Juan M; García, Vicente; Expósito, Eduardo; Montiel, Vicente; Aldaz, Antonio
2011-08-01
This work was carried out to study the treatment of almond industry wastewater by the electrocoagulation process. First of all, laboratory scale experiments were conducted in order to determine the effects of relevant wastewater characteristics such as conductivity and pH, as well as the process variables such as anode material, current density and operating time on the removal efficiencies of the total organic carbon (TOC) and the most representative analytical parameters. Next, the wastewater treatment process was scaled up to pre-industrial size using the best experimental conditions and parameters obtained at laboratory scale. Finally, economic parameters such as chemicals, energy consumption and sludge generation have been discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Calibration of Lévy Processes with American Options
NASA Astrophysics Data System (ADS)
Achdou, Yves
We study options on financial assets whose discounted prices are exponentials of Lévy processes. The price of an American vanilla option as a function of the maturity and the strike satisfies a linear complementarity problem involving a non-local partial integro-differential operator. It leads to a variational inequality in a suitable weighted Sobolev space. Calibrating the Lévy process may be done by solving an inverse least squares problem in which the state variable satisfies the previously mentioned variational inequality. We first assume that the volatility is positive: after carefully studying the direct problem, we propose necessary optimality conditions for the least squares inverse problem. We also consider the direct problem when the volatility is zero.
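In a representative notation (ours, not necessarily the paper's), with u the American price written in log-price y and time-to-maturity τ, ψ the payoff and ν the Lévy measure, the complementarity problem and the calibration objective read:

```latex
% Representative form only; the paper works with maturity/strike variables
% and a weighted Sobolev-space variational inequality.
\min\bigl(\partial_\tau u - \mathcal{A}u,\; u - \psi\bigr) = 0, \qquad
\mathcal{A}u = \tfrac{1}{2}\sigma^2\bigl(\partial_{yy}u - \partial_y u\bigr) - ru
  + \int_{\mathbb{R}} \bigl[u(y+z) - u(y) - (e^{z}-1)\,\partial_y u(y)\bigr]\,\nu(\mathrm{d}z),
\end{aligned}
```

```latex
% Least squares calibration over model parameters, with u constrained by
% the complementarity problem above; \bar{P}_i are observed market prices.
\min_{\sigma,\,\nu}\; \sum_i \bigl(u(T_i, K_i;\, \sigma, \nu) - \bar{P}_i\bigr)^2 .
```

The constraint being an inequality rather than an equation is what makes the inverse problem delicate and motivates the optimality conditions derived in the paper.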
Early warning of changing drinking water quality by trend analysis.
Tomperi, Jani; Juuso, Esko; Leiviskä, Kauko
2016-06-01
Monitoring and control of water treatment plants play an essential role in ensuring high quality drinking water and avoiding health-related problems or economic losses. The most common quality variables, which can be used also for assessing the efficiency of the water treatment process, are turbidity and residual levels of coagulation and disinfection chemicals. In the present study, the trend indices are developed from scaled measurements to detect warning signs of changes in the quality variables of drinking water and some operating condition variables that strongly affect water quality. The scaling is based on monotonically increasing nonlinear functions, which are generated with generalized norms and moments. Triangular episodes are classified with the trend index and its derivative. Deviation indices are used to assess the severity of situations. The study shows the potential of the described trend analysis as a predictive monitoring tool, as it provides an advantage over the traditional manual inspection of variables by detecting changes in water quality and giving early warnings.
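One simple realization of such a trend index: squash the raw measurement through a monotonic nonlinear scaling, then take the difference between short- and long-window means of the scaled signal; the sign of the index and of its derivative classify the episode. The scaling function and window lengths below are assumptions, not the paper's tuned generalized-norm choices.

```python
# One simple realization of a scaled-measurement trend index for early
# warning; scaling and window lengths are illustrative assumptions.
import numpy as np

def scale(x, lo, hi):
    """Monotonic squashing of raw measurements toward [-2, 2]."""
    z = 2.0 * (x - lo) / (hi - lo) - 1.0
    return 2.0 * np.tanh(z)

def trend_index(x_scaled, short=5, long=30):
    idx = np.full(len(x_scaled), np.nan)
    for k in range(long, len(x_scaled)):
        idx[k] = x_scaled[k - short:k].mean() - x_scaled[k - long:k].mean()
    return idx

rng = np.random.default_rng(2)
turbidity = 0.2 + 0.002 * np.arange(200) + 0.02 * rng.standard_normal(200)
s = scale(turbidity, lo=0.0, hi=1.0)
it = trend_index(s)
d_it = np.gradient(it)            # derivative separates episode types
print("warning" if it[-1] > 0.05 and d_it[-1] > 0 else "ok")
```

A sustained positive index with a positive derivative corresponds to a steepening upward episode, the pattern that would trigger an early warning before a fixed quality limit is crossed.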
NASA Astrophysics Data System (ADS)
Changjiang, Xu; Dongdong, Zhang
2018-06-01
As the impacts of climate change and human activities intensify, variability may occur in a river's annual runoff as well as in its flood and low-water characteristics. In order to understand the characteristics of variability in hydrological series, diagnosis and identification must be conducted for the variability of hydrological series, i.e., whether variability occurred and where it began. In this paper, the mainstream of the Yangtze River was taken as the object of study. A model was established to simulate the impounding and operation of upstream cascade reservoirs so as to obtain the runoff of downstream hydrological control stations after regulation by upstream reservoirs in different level years. The Range of Variability Approach was utilized to analyze the impact of the operation of upstream reservoirs on the variability of downstream runoff. The results indicated that the overall hydrologic alterations of Yichang hydrological station in the 2010 level year, the 2015 level year and the forward level year were 68.4, 72.5 and 74.3% respectively, corresponding to high alteration in all three level years. The runoff series of mainstream hydrological stations presented variability in different degrees: the runoff series of four hydrological stations, including Xiangjiaba, Gaochang and Wulong, showed high alteration in all three level years, while the runoff series of Beibei hydrological station showed medium alteration in the 2010 level year and high alteration in the 2015 and forward level years. The study of the impact of the operation of cascade reservoirs in the Upper Yangtze River on hydrological variability of the mainstream has important practical significance for the sustainable utilization of water resources, disaster prevention and mitigation, the safe and efficient operation and management of water conservancy projects, and the stable development of the economy and society.
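For reference, the Range of Variability Approach degree of alteration for a single indicator is typically computed from how often post-impact years fall inside the pre-impact 25th-75th percentile target range; a sketch with synthetic data follows (the series and thresholds are invented, not Yangtze values).

```python
# Minimal sketch of the RVA degree of hydrologic alteration for one
# indicator; data are synthetic and the targets follow the common
# 25th/75th-percentile convention.
import numpy as np

def hydrologic_alteration(pre, post):
    lo, hi = np.percentile(pre, [25, 75])     # RVA target range
    expected = 0.5 * len(post)                # 50% of years expected inside
    observed = np.sum((post >= lo) & (post <= hi))
    return (observed - expected) / expected   # signed degree of alteration

rng = np.random.default_rng(3)
pre  = rng.normal(12000, 2000, 40)            # pre-regulation annual runoff
post = rng.normal(10500, 900, 20)             # post-regulation series
D = hydrologic_alteration(pre, post)
# |D| in 0-33% is usually classed low, 33-67% medium, >67% high.
print(f"degree of alteration {abs(D):.0%}")
```

An overall alteration figure like the 68.4-74.3% reported for Yichang is then an aggregate of such per-indicator values across the full indicator set.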
Reducing Design Risk Using Robust Design Methods: A Dual Response Surface Approach
NASA Technical Reports Server (NTRS)
Unal, Resit; Yeniay, Ozgur; Lepsch, Roger A. (Technical Monitor)
2003-01-01
Space transportation system conceptual design is a multidisciplinary process containing a considerable element of risk. Risk here is defined as the variability in the estimated (output) performance characteristic of interest resulting from the uncertainties in the values of several disciplinary design and/or operational parameters. Uncertainties from one discipline (and/or subsystem) may propagate to another through linking parameters, and the final system output may accumulate significant risk. This variability can result in significant deviations from the expected performance. Therefore, an estimate of variability (which is called design risk in this study) together with the expected performance characteristic value (e.g. mean empty weight) is necessary for multidisciplinary optimization for a robust design. Robust design in this study is defined as a solution that minimizes variability subject to a constraint on mean performance characteristics. Even though multidisciplinary design optimization has gained wide attention and applications, the treatment of uncertainties to quantify and analyze design risk has received little attention. This research effort explores the dual response surface approach to quantify variability (risk) in critical performance characteristics (such as weight) during conceptual design.
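The dual response surface idea can be sketched as fitting separate quadratic surfaces for the mean and the standard deviation of the performance characteristic, then minimizing the variability surface subject to a constraint on the mean. The coefficients, the mean target and the coded factor bounds below are invented for illustration:

```python
# Minimal sketch of dual response surface optimization: minimize the fitted
# standard-deviation surface subject to a mean-performance constraint.
import numpy as np
from scipy.optimize import minimize

def quad(x, b):                     # full quadratic surface in two factors
    x1, x2 = x
    return b @ np.array([1, x1, x2, x1 * x1, x2 * x2, x1 * x2])

b_mean = np.array([100, -4.0, 2.5, 1.2, 0.8, -0.5])  # fitted mean surface
b_sd   = np.array([6.0, 1.5, -0.8, 0.9, 0.6, 0.3])   # fitted sd surface

res = minimize(lambda x: quad(x, b_sd), x0=[0.0, 0.0],
               constraints=[{"type": "ineq",
                             "fun": lambda x: 102.0 - quad(x, b_mean)}],
               bounds=[(-2, 2), (-2, 2)])             # coded factor levels
print(res.x, quad(res.x, b_mean), quad(res.x, b_sd))
```

In the study's terms, the standard-deviation surface plays the role of the design-risk estimate, and the constraint keeps expected performance (e.g., mean empty weight) within its target while variability is driven down.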
Alejo-Alvarez, Luz; Guzmán-Fierro, Víctor; Fernández, Katherina; Roeckel, Marlene
2016-11-01
A full-scale process for the treatment of 80 tons per day of poultry manure was designed and optimized. A total ammonia nitrogen (TAN) balance was performed at steady state, considering the stoichiometry and the kinetic data from the anaerobic digestion (AD) and the anaerobic ammonia oxidation processes. The equipment, reactor design, investment costs, and operational costs were considered. The volume and cost objective functions optimized the process in terms of three variables: the water recycle ratio, the protein conversion during AD, and the TAN conversion in the process. The processes were compared with and without water recycle; savings of 70% and 43% in the annual fresh water consumption and the heating costs, respectively, were achieved. The optimal process complies with the Chilean environmental legislation limit of 0.05 g total nitrogen/L.
Bagante, Fabio; Spolverato, Gaya; Cucchetti, Alessandro; Gani, Faiz; Popescu, Irinel; Ruzzenente, Andrea; Marques, Hugo P; Aldrighetti, Luca; Gamblin, T Clark; Maithel, Shishir K; Sandroussi, Charbel; Bauer, Todd W; Shen, Feng; Poultsides, George A; Marsh, James Wallis; Guglielmi, Alfredo; Pawlik, Timothy M
2016-07-01
Regret-based decision curve analysis (DCA) is a framework that assesses the medical decision process according to physician attitudes (expected regret) relative to disease-based factors. We sought to apply this methodology to decisions around the operative management of intrahepatic cholangiocarcinoma (ICC). Utilizing a multicentric database of 799 patients who underwent liver resection for ICC, we developed a prognostic nomogram. DCA tested 3 strategies: (1) perform an operation on all patients, (2) never perform an operation, and (3) use the nomogram to select patients for an operation. Four preoperative variables were included in the nomogram: major vascular invasion (HR = 1.36), tumor number (multifocal, HR = 1.18), tumor size (>5 cm, HR = 1.45), and suspicious lymph nodes on imaging (HR = 1.47; all P < .05). The regret-DCA was assessed using an online survey of 50 physicians, expert in the treatment of ICC. For a patient with a multifocal ICC, largest lesion measuring >5 cm, one suspicious malignant lymph node, and vascular invasion on imaging, the 1-year predicted survival was 52% according to the nomogram. Based on the therapeutic decision of the regret-DCA, 60% of physicians would advise against an operation for this scenario. Conversely, all physicians recommended an operation to a patient with an early ICC (single nodule measuring 3 cm, no suspicious lymph nodes, and no vascular invasion at imaging). By integrating a nomogram based on preoperative variables and a regret-based DCA, we were able to define the elements of how decisions rely on medical knowledge (postoperative survival predicted by a nomogram, disease severity assessment) and physician attitudes (regret of commission and omission). Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Menguy, Theotime
Because of its critical nature, the avionics industry is bound by numerous constraints such as security standards and certifications, while having to fulfill clients' desires for personalization. In this context, variability management is a very important issue for re-engineering projects of avionic software. In this thesis, we propose a new approach, based on formal concept analysis and the semantic web, to support variability management. The first goal of this research is to identify characteristic behaviors and interactions of configuration variables in a dynamically configured system. To identify such elements, we used formal concept analysis on different levels of abstraction in the system and defined new metrics. Then, we built a classification of the configuration variables and their relations in order to enable quick identification of a variable's behavior in the system. This classification could help in finding a systematic approach to processing variables during a re-engineering operation, depending on their category. To gain a better understanding of the system, we also studied the code controls shared between configuration variables. A second objective of this research is to build a knowledge platform to gather the results of all the analyses performed, and to store any additional element relevant to the variability management context, for instance new results helping to define re-engineering processes for each of the categories. To address this goal, we built a solution based on the semantic web, defining a new, very extensive ontology that enables inferences related to the evolution processes. The approach presented here is, to the best of our knowledge, the first classification of configuration variables of dynamically configured software and an original use of documentation and variability management techniques based on the semantic web in the aeronautics field. The analyses performed and the final results show that formal concept analysis is a way to identify specific properties and behaviors and that the semantic web is a good solution for storing and exploring the results. However, the use of formal concept analysis with new boolean relations, such as the link between configuration variables and files, and the definition of new inferences may be a way to draw better conclusions. Applying the same methodology to other systems would enable validation of the approach in other contexts.
NASA Technical Reports Server (NTRS)
Moss, Thomas; Nurge, Mark; Perusich, Stephen
2011-01-01
The In-Situ Resource Utilization (ISRU) Regolith & Environmental Science and Oxygen & Lunar Volatiles Extraction (RESOLVE) software provides operation of the physical plant from a remote location with a high-level interface that can access and control the data from external software applications of other subsystems. This software allows autonomous control over the entire system with manual computer control of individual system/process components. It gives non-programmer operators the capability to easily modify the high-level autonomous sequencing while the software is in operation, as well as the ability to modify the low-level, file-based sequences prior to the system operation. Local automated control in a distributed system is also enabled, where component control is maintained during the loss of network connectivity with the remote workstation. This innovation also minimizes network traffic. The software architecture commands and controls the latest generation of RESOLVE processes used to obtain, process, and quantify lunar regolith. The system is grouped into six sub-processes: Drill, Crush, Reactor, Lunar Water Resource Demonstration (LWRD), Regolith Volatiles Characterization (RVC), and Regolith Oxygen Extraction (ROE). Some processes are independent, some are dependent on other processes, and some are independent but run concurrently with other processes. The first goal is to analyze the volatiles emanating from lunar regolith, such as water, carbon monoxide, carbon dioxide, ammonia, hydrogen, and others. This is done by heating the soil and analyzing and capturing the volatilized product. The second goal is to produce water by reducing the soil at high temperatures with hydrogen. This is done by raising the reactor temperature to the range of 800 to 900 °C, adding hydrogen to drive the reaction, and then capturing the water product in a desiccant bed. The software needs to run the entire unit and all sub-processes; however, throughout testing, many variables and parameters need to be changed as more is learned about the system operation. The Master Events Controller (MEC) is run on a standard laptop PC using Windows XP. This PC runs in parallel to another laptop that monitors the GC, and a third PC that monitors the drilling/crushing operation. These three PCs interface to the process through a CompactRIO, OPC Servers, and modems.
NASA Astrophysics Data System (ADS)
Privette, J. L.; Schaaf, C. B.; Saleous, N.; Liang, S.
2004-12-01
Shortwave broadband albedo is the fundamental surface variable that partitions solar irradiance into energy available to the land biophysical system and energy reflected back into the atmosphere. Albedo varies with land cover, vegetation phenological stage, surface wetness, solar angle, and atmospheric condition, among other variables. For these reasons, a consistent and normalized albedo time series is needed to accurately model weather, climate and ecological trends. Although an empirically derived coarse-scale albedo from the 20-year NOAA AVHRR record (Sellers et al., 1996) is available, an operational moderate resolution global product first became available from NASA's MODIS sensor. The validated MODIS product now provides the benchmark against which to compare albedo generated through 1) reprocessing of the historic AVHRR record and 2) operational processing of data from the future National Polar-Orbiting Environmental Satellite System's (NPOESS) Visible/Infrared Imager Radiometer Suite (VIIRS). Unfortunately, different instrument characteristics (e.g., spectral bands, spatial resolution), processing approaches (e.g., latency requirements, ancillary data availability) and even product definitions (black-sky albedo, white-sky albedo, actual or blue-sky albedo) complicate the development of the desired multi-mission (AVHRR to MODIS to VIIRS) albedo time series, a so-called Climate Data Record. This presentation will describe the different albedo algorithms used with AVHRR, MODIS and VIIRS, and compare their results against field measurements collected over two semi-arid sites in southern Africa. We also describe the MODIS-derived VIIRS proxy data we developed to predict NPOESS albedo characteristics. We conclude with a strategy to develop a seamless Climate Data Record from 1982 to 2020.
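For reference, the blue-sky (actual) albedo mentioned in the product definitions is commonly approximated as a diffuse-fraction blend of the black-sky and white-sky quantities:

```latex
% Common blending approximation; f_diff is the diffuse skylight fraction
% for solar zenith angle theta_s and atmospheric optical depth tau.
\alpha_{\text{blue}}(\theta_s, \tau) \approx
  \bigl[1 - f_{\text{diff}}(\theta_s, \tau)\bigr]\,\alpha_{\text{bs}}(\theta_s)
  + f_{\text{diff}}(\theta_s, \tau)\,\alpha_{\text{ws}} .
```

Because the black-sky term depends on solar angle while the white-sky term does not, sensors with different overpass times and atmospheric correction schemes can report systematically different "albedo" even over identical surfaces, which is one source of the cross-mission inconsistency the presentation addresses.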
Development of an inpatient operational pharmacy productivity model.
Naseman, Ryan W; Lopez, Ben R; Forrey, Ryan A; Weber, Robert J; Kipp, Kris M
2015-02-01
An innovative model for measuring the operational productivity of medication order management in inpatient settings is described. Order verification within a computerized prescriber order-entry system was chosen as the pharmacy workload driver. To account for inherent variability in the tasks involved in processing different types of orders, pharmaceutical products were grouped by class, and each class was assigned a time standard, or "medication complexity weight," reflecting the intensity of pharmacist and technician activities (verification of drug indication, verification of appropriate dosing, adverse-event prevention and monitoring, medication preparation, product checking, product delivery, returns processing, nurse/provider education, and problem-order resolution). The resulting "weighted verifications" (WV) model allows productivity monitoring by job function (pharmacist versus technician) to guide hiring and staffing decisions. A 9-month historical sample of verified medication orders was analyzed using the WV model, and the calculations were compared with values derived from two established models: one based on the Case Mix Index (CMI) and the other based on the proprietary Pharmacy Intensity Score (PIS). Evaluation of Pearson correlation coefficients indicated that values calculated using the WV model were highly correlated with those derived from the CMI- and PIS-based models (r = 0.845 and 0.886, respectively). Relative to the comparator models, the WV model offered the advantage of less period-to-period variability. The WV model yielded productivity data that correlated closely with values calculated using two validated workload management models. The model may be used as an alternative measure of pharmacy operational productivity. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
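The core of the weighted-verifications calculation is simple enough to sketch: each verified order contributes its class's complexity weight, and productivity is weighted output per paid hour. The classes, weights and counts below are hypothetical, not the published time standards.

```python
# Minimal sketch of the weighted-verifications (WV) productivity idea;
# all class names, weights and counts are invented for illustration.
complexity_weight = {"oral_solid": 1.0, "iv_admixture": 2.6,
                     "chemotherapy": 4.1, "tpn": 5.0}

def weighted_verifications(order_counts):
    """Sum of complexity-weighted verified orders."""
    return sum(complexity_weight[c] * n for c, n in order_counts.items())

shift_orders = {"oral_solid": 310, "iv_admixture": 95,
                "chemotherapy": 12, "tpn": 6}
wv = weighted_verifications(shift_orders)
paid_hours = 4 * 8.0                     # e.g., four pharmacists on shift
print(f"{wv:.0f} WV, productivity {wv / paid_hours:.1f} WV per paid hour")
```

Weighting by class is what dampens the period-to-period variability noted above: a shift heavy in complex orders no longer looks artificially unproductive compared with one processing many simple orders.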
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Benton, Nathanael; Burns, Patrick
Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: a high-efficiency/variable speed drive (VSD) compressor replacing a modulating, load/unload, or constant-speed compressor; and a compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.
Islam, Rafiqul
2013-07-01
Today's bioanalytical CROs face increasing global competition, highly variable demand, high fixed costs, pricing pressure, and increasing demand for quality and speed. Most bioanalytical laboratories have responded to these challenges by implementing automation and process improvement methodologies (e.g., Six Sigma). These solutions have not resulted in a significant improvement in productivity and profitability, since none of them is able to predict upturns or downturns in demand. High volatility of demand causes long lead times and high costs during peak demand and poor productivity during trough demand. Most bioanalytical laboratories lack the tools to align supply efficiently with changing demand. In this paper, sales and operations planning (S&OP) is investigated as a tool to balance supply and demand. The S&OP process, when executed effectively, can be the single greatest determinant of profitability for a bioanalytical business.
Grassmann phase space methods for fermions. I. Mode theory
NASA Astrophysics Data System (ADS)
Dalton, B. J.; Jeffers, J.; Barnett, S. M.
2016-07-01
In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. The theory of Grassmann phase space methods for fermions based on separate modes is developed, showing how the distribution function is defined and used to determine quantum correlation functions, Fock state populations and coherences via Grassmann phase space integrals, and how the Fokker-Planck equations are obtained and then converted into equivalent Ito equations for stochastic Grassmann variables. The fermion distribution function is an even Grassmann function, and is unique. The number of c-number Wiener increments involved is 2n², if there are n modes. The situation differs from the bosonic c-number case, where only 2n Wiener increments are involved; in addition, the sign of the drift term in the Ito equation is reversed, and the diffusion matrix in the Fokker-Planck equation is anti-symmetric rather than symmetric. The un-normalised B distribution is of particular importance for determining Fock state populations and coherences, and as pointed out by Plimak, Collett and Olsen, the drift vector in its Fokker-Planck equation depends only linearly on the Grassmann variables. Using this key feature we show how the Ito stochastic equations can be solved numerically for finite times in terms of c-number stochastic quantities. Averages of products of Grassmann stochastic variables at the initial time are also involved, but these are determined from the initial conditions for the quantum state. The detailed approach to the numerics is outlined, showing that (apart from standard issues in such numerics) numerical calculations for Grassmann phase space theories of fermion systems could be carried out without needing to represent Grassmann phase space variables on the computer, involving only c-number processes. We compare our approach to that of Plimak, Collett and Olsen and show that the two approaches differ. As a simple test case we apply the B distribution theory and solve the Ito stochastic equations to demonstrate coupling between degenerate Cooper pairs in a four-mode fermionic system involving spin-conserving interactions between spin-1/2 fermions, where modes with momenta −k, +k, each associated with spin-up and spin-down states, are involved.
40 CFR 63.1207 - What are the performance testing requirements?
Code of Federal Regulations, 2010 CFR
2010-07-01
... operating conditions that are most likely to reflect daily maximum operating variability, similar to a... operating variability, similar to a dioxin/furan compliance test; (B) You have not changed the design or... document the temperature location measurement in the comprehensive performance test plan, as required by...
NASA Astrophysics Data System (ADS)
Shen, Chien-wen
2009-01-01
During the processes of TFT-LCD manufacturing, steps like visual inspection of panel surface defects still rely heavily on manual operations. As the manual inspection time in TFT-LCD manufacturing can range from 4 hours to 1 day, reliable time forecasting is important for production planning, scheduling, and customer response. This study proposes a practical and easy-to-implement prediction model, based on Bayesian networks, for time estimation of manually operated procedures in TFT-LCD manufacturing. Given the lack of prior knowledge about manual operation time, algorithms of necessary path condition and expectation-maximization are used for structural learning and for estimation of conditional probability distributions, respectively. This study also applied Bayesian inference to evaluate the relationships between explanatory variables and manual operation time. In empirical applications of the proposed forecasting model, the Bayesian network approach demonstrates its practicability and the accountability of its predictions.
Cardiopulmonary data-acquisition system
NASA Technical Reports Server (NTRS)
Crosier, W. G.; Reed, R. A.
1981-01-01
Computerized system controls and monitors bicycle and treadmill cardiovascular stress tests. It acquires and reduces stress data and displays heart rate, blood pressure, workload, respiratory rate, exhaled-gas composition, and other variables. Data are printed on hard-copy terminal every 30 seconds for quick operator response to patient. Ergometer workload is controlled in real time according to experimental protocol. Collected data are stored directly on tape in analog form and on floppy disks in digital form for later processing.
2008-12-01
manufacturing variability and thermal effects can be easily compensated for electronically during operation by adjusting PZT amplitudes and phases... thermal and optical processes in the PEM bar and PZT array. An interface between COMSOL and the Trilinos solvers running in parallel on the cluster was... contaminants of low vapor pressure and/or low intrinsic fluorescence. Thermal luminescence (TL) is a technology aimed at solving the standoff
Board task performance: An exploration of micro- and macro-level determinants of board effectiveness
Minichilli, Alessandro; Zattoni, Alessandro; Nielsen, Sabina; Huse, Morten
2012-01-01
This paper addresses recent calls to narrow the micro–macro gap in management research (Bamberger, 2008), by incorporating a macro-level context variable (country) in exploring micro-level determinants of board effectiveness. Following the integrated model proposed by Forbes and Milliken (1999), we identify three board processes as micro-level determinants of board effectiveness. Specifically, we focus on effort norms, cognitive conflicts and the use of knowledge and skills as determinants of board control and advisory task performance. Further, we consider how two different institutional settings influence board tasks, and how the context moderates the relationship between processes and tasks. Our hypotheses are tested on a survey-based dataset of 535 medium-sized and large industrial firms in Italy and Norway, which are considered to substantially differ along legal and cultural dimensions. The findings show that: (i) Board processes have a larger potential than demographic variables to explain board task performance; (ii) board task performance differs significantly between boards operating in different contexts; and (iii) national context moderates the relationships between board processes and board task performance. Copyright © 2010 John Wiley & Sons, Ltd. PMID:23365485
Intelligent Performance Analysis with a Natural Language Interface
NASA Astrophysics Data System (ADS)
Juuso, Esko K.
2017-09-01
Performance improvement is taken as the primary goal in asset management. Advanced data analysis is needed to efficiently integrate condition monitoring data into operation and maintenance. Intelligent stress and condition indices have been developed for control and condition monitoring by combining generalized norms with efficient nonlinear scaling. These nonlinear scaling methodologies can also be used to handle performance measures used for management, since management-oriented indicators can be presented on the same scale as intelligent condition and stress indices. Performance indicators are responses of the process, machine or system to the stress contributions analyzed from process and condition monitoring data. Scaled values are used directly in intelligent temporal analysis to calculate fluctuations and trends. All these methodologies can be used in prognostics and fatigue prediction. The meanings of the variables are beneficial in extracting expert knowledge and representing information in natural language. The idea of dividing the problems into variable-specific meanings and directions of interactions provides various improvements for performance monitoring and decision making. The integrated temporal analysis and uncertainty processing facilitates the efficient use of domain expertise. Measurements can be monitored with generalized statistical process control (GSPC) based on the same scaling functions.
Blazar Variability from Turbulence in Jets Launched by Magnetically Arrested Accretion Flows
NASA Astrophysics Data System (ADS)
O'Riordan, Michael; Pe'er, Asaf; McKinney, Jonathan C.
2017-07-01
Blazars show variability on timescales ranging from minutes to years, the former being comparable to, and in some cases even shorter than, the light-crossing time of the central black hole. The observed γ-ray light curves can be described by a power-law power density spectrum (PDS), with a similar index for both BL Lacs and flat-spectrum radio quasars. We show that this variability can be produced by turbulence in relativistic jets launched by magnetically arrested accretion flows (MADs). We perform radiative transport calculations on the turbulent, highly magnetized jet launching region of a MAD with a rapidly rotating supermassive black hole. The resulting synchrotron and synchrotron self-Compton emission, originating from close to the black hole horizon, is highly variable. This variability is characterized by a PDS that is remarkably similar to the observed power-law spectrum at frequencies less than a few per day. Furthermore, turbulence in the jet launching region naturally produces fluctuations in the plasma on scales much smaller than the horizon radius. We speculate that similar turbulent processes, operating in the jet at large radii (and therefore at high bulk Lorentz factor), are responsible for blazar variability over many decades in frequency, including on minute timescales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, William H., E-mail: millerwh@berkeley.edu; Cotton, Stephen J., E-mail: StephenJCotton47@gmail.com
It is pointed out that the classical phase space distribution in action-angle (a-a) variables obtained from a Wigner function depends on how the calculation is carried out: if one computes the standard Wigner function in Cartesian variables (p, x), and then replaces p and x by their expressions in terms of a-a variables, one obtains a different result than if the Wigner function is computed directly in terms of the a-a variables. Furthermore, the latter procedure gives a result more consistent with classical and semiclassical theory—e.g., by incorporating the Bohr-Sommerfeld quantization condition (quantum states defined by integer values of the action variable) as well as the Heisenberg correspondence principle for matrix elements of an operator between such states—and has also been shown to be more accurate when applied to electronically non-adiabatic applications as implemented within the recently developed symmetrical quasi-classical (SQC) Meyer-Miller (MM) approach. Moreover, use of the Wigner function (obtained directly) in a-a variables shows how our standard SQC/MM approach can be used to obtain off-diagonal elements of the electronic density matrix by processing in a different way the same set of trajectories already used (in the SQC/MM methodology) to obtain the diagonal elements.
CRIB; the mineral resources data bank of the U.S. Geological Survey
Calkins, James Alfred; Kays, Olaf; Keefer, Eleanor K.
1973-01-01
The recently established Computerized Resources Information Bank (CRIB) of the U.S. Geological Survey is expected to play an increasingly important role in the study of United States' mineral resources. CRIB provides a rapid means for organizing and summarizing information on mineral resources and for displaying the results. CRIB consists of a set of variable-length records containing the basic information needed to characterize one or more mineral commodities, a mineral deposit, or several related deposits. The information consists of text, numeric data, and codes. Some topics covered are: name, location, commodity information, geology, production, reserves, potential resources, and references. The data are processed by the GIPSY program, which performs all the processing tasks needed to build, operate, and maintain the CRIB file. The sophisticated retrieval program allows the user to make highly selective searches of the files for words, parts of words, phrases, numeric data, word ranges, numeric ranges, and others, and to interrelate variables by logic statements to any degree of refinement desired. Three print options are available, or the retrieved data can be passed to another program for further processing.
NASA Technical Reports Server (NTRS)
Hung, R. J.; Lee, C. C.; Liu, J. W.
1990-01-01
Significant advantages of the Variable Polarity Plasma Arc (VPPA) Welding Process include faster welding, fewer repairs, less joint preparation, reduced weldment distortion, and absence of porosity. The flow profiles and power distribution of the argon plasma gas, used as the working fluid to produce the plasma arc jet in the VPPA welding process, were analyzed. The major heat-transfer loss is convective for flow through the nozzle, radiative for the plasma jet between the nozzle outlet and the workpiece, and convective for flow through the keyhole of the workpiece. The majority of the power absorbed by the keyhole of the workpiece is used to melt the solid metal workpiece into a molten metallic puddle. The crown and root widths and the crown and root heights can be predicted. An algorithm supporting automatic control of the flow parameters and of the final product dimensions to the welding specification, for the VPPA Welding System operated at MSFC, is provided.
Start-up and operating costs for artisan cheese companies.
Bouma, Andrea; Durham, Catherine A; Meunier-Goddik, Lisbeth
2014-01-01
Lack of valid economic data for artisan cheese making is a serious impediment to developing a realistic business plan and obtaining financing. The objective of this study was to determine approximate start-up and operating costs for an artisan cheese company. In addition, values are provided for the required size of processing and aging facilities associated with specific production volumes. Following in-depth interviews with existing artisan cheese makers, an economic model was developed to predict costs based on input variables such as production volume, production frequency, cheese types, milk types and cost, labor expenses, and financing. Estimated values for start-up cost for processing and aging facility ranged from $267,248 to $623,874 for annual production volumes of 3,402 kg (7,500 lb) and 27,216 kg (60,000 lb), respectively. First-year production costs ranged from $65,245 to $620,094 for the above-mentioned production volumes. It is likely that high start-up and operating costs remain a significant entry barrier for artisan cheese entrepreneurs. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Environmental health impact in the hospital laundry.
Byrns, G E; Bland, L A
1980-01-01
The task of surveying the hospital laundry is often dismissed by public health officials as unnecessary because the laundry cycle is generally considered to be capable of destroying all pathogens. Even though a properly operated laundry can produce a relatively bacteria free product, there are a number of variables that have an impact on the bacterial quality of the linen before it reaches the patient. It is vital that surveillance personnel understand these factors during processing, transporting, or sorting linen so that the final product is aesthetically, chemically, and bacteriologically acceptable for patient use. At the U. S. Public Health Service Hospital laundry in New Orleans, surveillance by the Environmental Health Department has identified potential problem areas. Operational improvements have been instituted at this laundry that would not have been possible without a thorough understanding of the laundry cycle. The authors describe the laundry cycle, including potential problem areas; identify useful microbial and chemical surveillance methods; and discuss process control procedures. This information will help the environmental health worker in discussions with laundry personnel regarding contamination control and operational efficiency.
Ground-level climate at a peatland wind farm in Scotland is affected by wind turbine operation
NASA Astrophysics Data System (ADS)
Armstrong, Alona; Burton, Ralph R.; Lee, Susan E.; Mobbs, Stephen; Ostle, Nicholas; Smith, Victoria; Waldron, Susan; Whitaker, Jeanette
2016-04-01
The global drive to produce low-carbon energy has resulted in an unprecedented deployment of onshore wind turbines, representing a significant land use change for wind energy generation with uncertain consequences for local climatic conditions and the regulation of ecosystem processes. Here, we present high-resolution data from a wind farm collected during operational and idle periods that show the wind farm affected several measures of ground-level climate. Specifically, we discovered that operational wind turbines raised air temperature by 0.18 °C and absolute humidity (AH) by 0.03 g m⁻³ during the night, and increased the variability in air, surface and soil temperature throughout the diurnal cycle. Further, the microclimatic influence of turbines on air temperature and AH decreased logarithmically with distance from the nearest turbine. These effects on ground-level microclimate, including soil temperature, have uncertain implications for biogeochemical processes and ecosystem carbon cycling, including soil carbon stocks. Consequently, understanding needs to be improved to determine the overall carbon balance of wind energy.
Variability in hand-arm vibration during grinding operations.
Liljelind, Ingrid; Wahlström, Jens; Nilsson, Leif; Toomingas, Allan; Burström, Lage
2011-04-01
Measurements of exposure to vibrations from hand-held tools are often conducted on a single occasion. However, repeated measurements may be crucial for estimating the actual dose with good precision. In addition, knowledge of determinants of exposure could be used to improve working conditions. The aim of this study was to assess hand-arm vibration (HAV) exposure during different grinding operations, in order to obtain estimates of the variance components and to evaluate the effect of work postures. Ten experienced operators used two compressed air-driven angle grinders of the same make in a simulated work task at a workplace. One part of the study consisted of using a grinder while assuming two different working postures: at a standard work bench (low) and on a wall with arms elevated and the work area adjusted to each operator's height (high). The workers repeated the task three times. In another part of the study, investigating the wheel wear, for each grinder, the operators used two new grinding wheels and with each wheel the operator performed two consecutive 1-min grinding tasks. Both grinding tasks were conducted on weld puddles of mild steel on a piece of mild steel. Measurements were taken according to ISO-standard 5349 [the equivalent hand-arm-weighted acceleration (m s⁻²) averaged over 1 min]. Mixed- and random-effects models were used to investigate the influence of the fixed variables and to estimate variance components. The equivalent hand-arm-weighted acceleration assessed when the task was performed on the bench and at the wall was 3.2 and 3.3 m s⁻², respectively. In the mixed-effects model, work posture was not a significant variable. The variables 'operator' and 'grinder' together explained only 12% of the exposure variability and 'grinding wheel' explained 47%; the residual variability of 41% remained unexplained. When the effect of grinding wheel wear was investigated in the random-effects model, 37% of the variability was associated with the wheel while minimal variability was associated with the operator or the grinder and 37% was unexplained. The interaction effect of grinder and operator explained 18% of the variability. In the wheel wear test, the equivalent hand-arm-weighted accelerations for Grinder 1 during the first and second grinding minutes were 3.4 and 2.9 m s⁻², respectively, and for Grinder 2, they were 3.1 and 2.9 m s⁻², respectively. For Grinder 1, the equivalent hand-arm-weighted acceleration during the first grinding minute was significantly higher (P = 0.04) than during the second minute. Work posture during grinding operations does not appear to affect the level of HAV. Grinding wheels explained much of the variability in this study, but almost 40% of the variance remained unexplained. The considerable variability in the equivalent hand-arm-weighted acceleration has an impact on the risk assessment at both the group and the individual level.
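Variance-components estimates of this kind are typically obtained from mixed- and random-effects models; the sketch below fits such a model with statsmodels on synthetic data, so the factor structure and all values are hypothetical rather than the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_op, n_wheel, reps = 10, 4, 3

# Synthetic HAV data: operator and wheel random effects plus residual noise
op_eff = rng.normal(0, 0.1, n_op)
wheel_eff = rng.normal(0, 0.4, n_wheel)
rows = []
for op in range(n_op):
    for g in range(2):
        for w in range(n_wheel):
            for _ in range(reps):
                accel = 3.2 + op_eff[op] + 0.05 * g + wheel_eff[w] + rng.normal(0, 0.3)
                rows.append({"accel": accel, "operator": op, "grinder": g, "wheel": w})
df = pd.DataFrame(rows)

# Random intercepts by operator, with grinding wheel as an extra variance component
model = smf.mixedlm(
    "accel ~ C(grinder)", df,
    groups="operator",
    vc_formula={"wheel": "0 + C(wheel)"},
)
fit = model.fit()
print(fit.summary())  # variance components appear in the random-effects table
```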
Real Time Land-Surface Hydrologic Modeling Over Continental US
NASA Technical Reports Server (NTRS)
Houser, Paul R.
1998-01-01
The land surface component of the hydrological cycle is fundamental to the overall functioning of atmospheric and climate processes. Spatially and temporally variable rainfall and available energy, combined with land surface heterogeneity, cause complex variations in all processes related to surface hydrology. The characterization of the spatial and temporal variability of water and energy cycles is critical to improving our understanding of land surface-atmosphere interaction and the impact of land surface processes on climate extremes. Because accurate knowledge of these processes and their variability is important for climate predictions, most Numerical Weather Prediction (NWP) centers have incorporated land surface schemes in their models. However, errors in the NWP forcing accumulate in the surface water and energy stores, leading to incorrect surface water and energy partitioning and related processes. This has motivated NWP centers to impose ad hoc corrections on the land surface states to prevent this drift. A proposed methodology is to develop Land Data Assimilation Schemes (LDAS), which are uncoupled models forced with observations and thus not affected by NWP forcing biases. The proposed research is being implemented as a real-time operation using an existing Surface Vegetation Atmosphere Transfer Scheme (SVATS) model at a 40-km resolution across the United States to evaluate these critical science questions. The model will be forced with real-time output from numerical prediction models, satellite data, and radar precipitation measurements. Model parameters will be derived from the existing GIS vegetation and soil coverages. The model results will be aggregated to various scales to assess water and energy balances, and these will be validated with various in-situ observations.
Virtual Sensors for On-line Wheel Wear and Part Roughness Measurement in the Grinding Process
Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A.; Cabanes, Itziar; Pombo, Iñigo
2014-01-01
Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations. PMID:24854055
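As a rough illustration of the virtual-sensor idea, the sketch below trains a recurrent regressor that maps an easily measured spindle-power sequence to the hard-to-measure outputs; it is a generic PyTorch stand-in on synthetic data, not the authors' Layer-Recurrent network or calibration methodology.

```python
import torch
import torch.nn as nn

class VirtualSensor(nn.Module):
    """Recurrent regressor: spindle-power sequence -> (wheel wear, Ra roughness)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # two outputs: wear and roughness

    def forward(self, power_seq):
        out, _ = self.rnn(power_seq)      # (batch, time, hidden)
        return self.head(out[:, -1, :])   # predict from the last time step

# Synthetic training data: targets loosely follow integrated/average power
torch.manual_seed(0)
power = torch.rand(256, 50, 1)                         # hypothetical power traces
target = torch.stack([power.sum(1).squeeze(-1) * 0.6,  # stand-in for "wear"
                      power.mean(1).squeeze(-1) * 1.2  # stand-in for "roughness"
                      ], dim=1)

model = VirtualSensor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(power), target)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```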
Operational Impacts of Operating Reserve Demand Curves on Production Cost and Reliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krad, Ibrahim; Ibanez, Eduardo; Ela, Erik
The electric power industry landscape is continually evolving. As emerging technologies such as wind, solar, electric vehicles, and energy storage systems become more cost-effective and present in the system, traditional power system operating strategies will need to be reevaluated. The presence of wind and solar generation (commonly referred to as variable generation) may result in an increase in the variability and uncertainty of the net load profile. One mechanism to mitigate this is to schedule and dispatch additional operating reserves. These operating reserves aim to ensure that there is enough capacity online in the system to account for the increased variability and uncertainty occurring at finer temporal resolutions. A new operating reserve strategy, referred to as flexibility reserve, has been introduced in some regions. A similar implementation is explored in this paper, and its implications on power system operations are analyzed.
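One simple way such a flexibility reserve can be sized is to key the requirement to recent net-load variability; the sketch below is a generic illustration on synthetic data, and the coverage multiplier and ramp scaling are assumptions, not the strategy analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(288)                                    # 5-minute intervals, one day
load = 1000 + 200 * np.sin(2 * np.pi * t / 288)       # hypothetical load (MW)
wind = 150 + 3 * rng.standard_normal(t.size).cumsum() # hypothetical wind (MW)
net_load = load - wind

# Flexibility reserve: cover the expected net-load ramp over the next hour,
# estimated from the rolling standard deviation of recent 5-minute ramps.
ramps = np.diff(net_load)
window = 24                                           # two hours of 5-minute ramps
rolling_std = np.array([ramps[max(0, i - window):i + 1].std()
                        for i in range(ramps.size)])
k = 3.0                                               # coverage multiplier (assumed)
flex_reserve = k * rolling_std * np.sqrt(12)          # scale 5-min sigma to 1 hour
print(f"mean flexibility reserve: {flex_reserve.mean():.1f} MW")
```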
Quantum information processing in phase space: A modular variables approach
NASA Astrophysics Data System (ADS)
Ketterer, A.; Keller, A.; Walborn, S. P.; Coudreau, T.; Milman, P.
2016-08-01
Binary quantum information can be fault-tolerantly encoded in states defined in infinite-dimensional Hilbert spaces. Such states define a computational basis, and permit a perfect equivalence between continuous and discrete universal operations. The drawback of this encoding is that the corresponding logical states are unphysical, meaning infinitely localized in phase space. We use the modular variables formalism to show that, in a number of protocols relevant for quantum information and for the realization of fundamental tests of quantum mechanics, it is possible to loosen the requirements on the logical subspace without jeopardizing their usefulness or their successful implementation. Such protocols involve measurements of appropriately chosen modular variables that permit the readout of the encoded discrete quantum information from the corresponding logical states. Finally, we demonstrate the experimental feasibility of our approach by applying it to the transverse degrees of freedom of single photons.
Artificial immune system via Euclidean Distance Minimization for anomaly detection in bearings
NASA Astrophysics Data System (ADS)
Montechiesi, L.; Cocconcelli, M.; Rubini, R.
2016-08-01
In recent years, new diagnostic methodologies have emerged, with particular interest in machinery operating in non-stationary conditions. Continuous speed changes and variable loads make spectrum analysis non-trivial: a variable speed means a variable characteristic fault frequency, so the damage signature is no longer recognizable in the spectrum. To overcome this problem, the scientific community has proposed approaches in two main categories: model-based approaches and expert systems. In this context, the paper presents a simple expert system derived from the mechanisms of the immune system, called Euclidean Distance Minimization, and its application to a real case of bearing fault recognition. The proposed method is a simplification of the original process, adapted from the class of Artificial Immune Systems, which has proved useful and promising in different application fields. Comparative results are provided, with a complete explanation of the algorithm and its functioning.
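At its core, a classifier of this kind assigns a new observation to the nearest stored exemplar (the "antibody") in feature space; a minimal nearest-exemplar sketch on hypothetical bearing features follows.

```python
import numpy as np

# Hypothetical feature vectors (e.g., RMS, kurtosis, crest factor) learned
# from labelled vibration records -- the "antibodies" of the immune system.
antibodies = {
    "healthy":    np.array([0.8, 3.0, 3.5]),
    "outer_race": np.array([1.6, 6.5, 5.0]),
    "inner_race": np.array([1.9, 8.0, 6.2]),
}

def classify(sample: np.ndarray) -> str:
    """Assign the label of the antibody at minimal Euclidean distance."""
    return min(antibodies, key=lambda k: np.linalg.norm(sample - antibodies[k]))

print(classify(np.array([1.7, 7.1, 5.4])))  # -> "outer_race" for this sample
```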
Agustin, Alyssa E; Merrifield, Mark A; Potemra, James T; Morishige, Carey
2015-12-15
A twenty-two year record of marine debris collected on Tern Island is used to characterize the temporal variability of debris deposition at a coral atoll in the Northwestern Hawaiian Islands. Debris deposition tends to be episodic, without a significant relationship to local forcing processes associated with winds, sea level, waves, and proximity to the Subtropical Convergence Zone. The General NOAA Operational Modeling Environment is used to estimate likely debris pathways for Tern Island. The majority of modeled arrivals come from the northeast following prevailing trade winds and surface currents, with trajectories indicating the importance of the convergence zone, or garbage patch, in the North Pacific High region. Although debris deposition does not generally exhibit a significant seasonal cycle, some debris types contain considerable 3 cycle/yr variability that is coherent with wind and surface pressure over a broad region north of Tern. Copyright © 2015 Elsevier Ltd. All rights reserved.
Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Kleb, William L.
2005-01-01
A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
Remote creation of hybrid entanglement between particle-like and wave-like optical qubits
NASA Astrophysics Data System (ADS)
Morin, Olivier; Huang, Kun; Liu, Jianli; Le Jeannic, Hanna; Fabre, Claude; Laurat, Julien
2014-07-01
The wave-particle duality of light has led to two different encodings for optical quantum information processing. Several approaches have emerged based either on particle-like discrete-variable states (that is, finite-dimensional quantum systems) or on wave-like continuous-variable states (that is, infinite-dimensional systems). Here, we demonstrate the generation of entanglement between optical qubits of these different types, located at distant places and connected by a lossy channel. Such hybrid entanglement, which is a key resource for a variety of recently proposed schemes, including quantum cryptography and computing, enables information to be converted from one Hilbert space to the other via teleportation and therefore the connection of remote quantum processors based upon different encodings. Beyond its fundamental significance for the exploration of entanglement and its possible instantiations, our optical circuit holds promise for implementations of heterogeneous network, where discrete- and continuous-variable operations and techniques can be efficiently combined.
Funkenbusch, Paul D; Rotella, Mario; Ercoli, Carlo
2015-04-01
Laboratory studies of tooth preparation are often performed under a limited range of conditions involving single values for all variables other than the 1 being tested. In contrast, in clinical settings not all variables can be tightly controlled. For example, a new dental rotary cutting instrument may be tested in the laboratory by making a specific cut with a fixed force, but in clinical practice, the instrument must make different cuts with individual dentists applying a range of different forces. Therefore, the broad applicability of laboratory results to diverse clinical conditions is uncertain and the comparison of effects across studies is difficult. The purpose of this study was to examine the effect of 9 process variables on dental cutting in a single experiment, allowing each variable to be robustly tested over a range of values for the other 8 and permitting a direct comparison of the relative importance of each on the cutting process. The effects of 9 key process variables on the efficiency of a simulated dental cutting operation were measured. A fractional factorial experiment was conducted by using a computer-controlled, dedicated testing apparatus to simulate dental cutting procedures and Macor blocks as the cutting substrate. Analysis of Variance (ANOVA) was used to judge the statistical significance (α=.05). Five variables consistently produced large, statistically significant effects (target applied load, cut length, starting rpm, diamond grit size, and cut type), while 4 variables produced relatively small, statistically insignificant effects (number of cooling ports, rotary cutting instrument diameter, disposability, and water flow rate). The control exerted by the dentist, simulated in this study by targeting a specific level of applied force, was the single most important factor affecting cutting efficiency. Cutting efficiency was also significantly affected by factors simulating patient/clinical circumstances as well as hardware choices. These results highlight the importance of local clinical conditions (procedure, dentist) in understanding dental cutting procedures and in designing adequate experimental methodologies for future studies. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Volcanic Ash Data Assimilation System for Atmospheric Transport Model
NASA Astrophysics Data System (ADS)
Ishii, K.; Shimbori, T.; Sato, E.; Tokumoto, T.; Hayashi, Y.; Hashimoto, A.
2017-12-01
The Japan Meteorological Agency (JMA) operates two volcanic ash forecast products: the Volcanic Ash Fall Forecast (VAFF) and the Volcanic Ash Advisory (VAA). In these operations, the forecasts are calculated by atmospheric transport models including the advection process, the turbulent diffusion process, the gravitational fall process and the deposition process (wet/dry). The initial distribution of volcanic ash in the models is the most important but most uncertain factor. In operations, the model of Suzuki (1983), with many empirical assumptions, is adopted for the initial distribution. This adversely affects the reconstruction of actual eruption plumes. We are developing a volcanic ash data assimilation system using weather radars and meteorological satellite observations, in order to improve the initial distribution in the atmospheric transport models. Our data assimilation system is based on the three-dimensional variational data assimilation method (3D-Var). Analysis variables are ash concentration and size distribution parameters, which are mutually independent. The radar observations are expected to provide three-dimensional parameters such as ash concentration and parameters of the ash particle size distribution. On the other hand, the satellite observations are anticipated to provide two-dimensional parameters of ash clouds such as mass loading, top height and particle effective radius. In this study, we estimate the thickness of ash clouds using the vertical wind shear from JMA numerical weather prediction, and apply it in the volcanic ash data assimilation system.
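The 3D-Var analysis underlying such a system minimizes the standard background-plus-observation cost function; a toy minimization with hypothetical covariances B and R and a linear observation operator H is sketched below.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 3D-Var: the analysis state x minimizes
#   J(x) = (x - xb)^T B^{-1} (x - xb) + (H x - y)^T R^{-1} (H x - y)
xb = np.array([1.0, 0.5, 0.2])   # background state (e.g., a Suzuki-type first guess)
B = np.diag([0.5, 0.5, 0.1])     # background-error covariance (hypothetical)
H = np.array([[1.0, 1.0, 0.0],   # linear observation operator (hypothetical)
              [0.0, 0.0, 1.0]])
y = np.array([2.0, 0.35])        # observations (e.g., radar, satellite retrievals)
R = np.diag([0.1, 0.05])         # observation-error covariance (hypothetical)

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def J(x):
    db, do = x - xb, H @ x - y
    return db @ Binv @ db + do @ Rinv @ do

xa = minimize(J, xb).x           # analysis state
print("analysis:", xa.round(3))
```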
Zhang, Yeqing; Wang, Meiling; Li, Yafeng
2018-01-01
For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301
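The circular correlation at the heart of acquisition is typically computed with FFTs, and the detection metric described above is the ratio of the highest to the second-highest correlation peak; a simplified single-Doppler-bin sketch follows (a real receiver also searches carrier-frequency bins, and the random code here is a stand-in, not an actual GPS L2C code).

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1023
code = rng.choice([-1.0, 1.0], size=N)   # hypothetical PRN-like spreading code
true_shift = 317
received = np.roll(code, true_shift) + 0.8 * rng.standard_normal(N)

# Circular correlation via FFT: corr[k] = sum_n received[n] * code[(n - k) mod N]
corr = np.abs(np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))))

peak = int(np.argmax(corr))
first = corr[peak]
second = np.partition(corr, -2)[-2]      # second-highest correlation value
ratio = first / second                    # acquisition metric described above
print(f"estimated code phase: {peak}, peak ratio: {ratio:.2f}")
```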
Unified Deep Learning Architecture for Modeling Biology Sequence.
Wu, Hongjie; Cao, Chengyuan; Xia, Xiaoyan; Lu, Qiang
2017-10-09
Prediction of the spatial structure or function of biological macromolecules based on their sequence remains an important challenge in bioinformatics. When biological sequences are modeled with traditional sequence models, characteristics such as long-range interactions between basic units, the complicated and variable output of labeled structures, and the variable length of biological sequences usually lead to different solutions on a case-by-case basis. This study proposed bidirectional recurrent neural networks based on long short-term memory or gated recurrent units to capture long-range interactions, designing an optional reshape operator to adapt to the diversity of the output labels and implementing a training algorithm that supports sequence models capable of processing variable-length sequences. Additionally, the merge and pooling operators enhanced the ability to capture short-range interactions between basic units of biological sequences. The proposed deep-learning model and its training algorithm might be capable of solving currently known biological sequence-modeling problems through the use of a unified framework. We validated our model on one of the most difficult biological sequence-modeling problems currently known, with the results indicating that the model obtains predictions of protein residue interactions that exceed the accuracy of currently popular approaches by 10% on multiple benchmarks.
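Training on variable-length sequences, as described above, is what the padding and packing utilities of standard frameworks provide; the minimal bidirectional-GRU sketch below illustrates the mechanics in PyTorch, with shapes and the per-position head chosen purely for illustration.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

# Three hypothetical feature sequences of different lengths, 4 features per position
seqs = [torch.randn(7, 4), torch.randn(5, 4), torch.randn(3, 4)]
lengths = torch.tensor([len(s) for s in seqs])

padded = pad_sequence(seqs, batch_first=True)  # shape (3, 7, 4), zero-padded
packed = pack_padded_sequence(padded, lengths, batch_first=True, enforce_sorted=False)

rnn = nn.GRU(input_size=4, hidden_size=8, batch_first=True, bidirectional=True)
out_packed, _ = rnn(packed)                    # padding positions are skipped
out, out_lens = pad_packed_sequence(out_packed, batch_first=True)

# Per-position labels from the concatenated forward/backward hidden states
head = nn.Linear(2 * 8, 2)
logits = head(out)                             # (3, 7, 2); mask with out_lens downstream
print(logits.shape, out_lens.tolist())
```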
Multiple and variable speed electrical generator systems for large wind turbines
NASA Technical Reports Server (NTRS)
Andersen, T. S.; Hughes, P. S.; Kirschbaum, H. S.; Mutone, G. A.
1982-01-01
A cost effective method to achieve increased wind turbine generator energy conversion and other operational benefits through variable speed operation is presented. Earlier studies of multiple and variable speed generators in wind turbines were extended for evaluation in the context of a specific large sized conceptual design. System design and simulation have defined the costs and performance benefits which can be expected from both two speed and variable speed configurations.
Patel, Darshan C; Lyu, Yaqi Fara; Gandarilla, Jorge; Doherty, Steve
2018-04-03
In-process sampling and analysis is an important aspect of monitoring kinetic profiles and impurity formation or rejection, both in development and during commercial manufacturing. In pharmaceutical process development, the technology of choice for a substantial portion of this analysis is high-performance liquid chromatography (HPLC). Traditionally, the sample extraction and preparation for reaction characterization have been performed manually. This can be time consuming, laborious, and impractical for long processes. Depending on the complexity of the sample preparation, there can be variability introduced by different analysts, and in some cases, the integrity of the sample can be compromised during handling. While there are commercial instruments available for on-line monitoring with HPLC, they lack capabilities in many key areas. Some do not provide integration of the sampling and analysis, while others afford limited flexibility in sample preparation. The current offerings provide a limited number of unit operations available for sample processing and no option for workflow customizability. This work describes development of a microfluidic automated program (MAP) which fully automates the sample extraction, manipulation, and on-line LC analysis. The flexible system is controlled using an intuitive Microsoft Excel based user interface. The autonomous system is capable of unattended reaction monitoring that allows flexible unit operations and workflow customization to enable complex operations and on-line sample preparation. The automated system is shown to offer advantages over manual approaches in key areas while providing consistent and reproducible in-process data. Copyright © 2017 Elsevier B.V. All rights reserved.
A Semi-Preemptive Garbage Collector for Solid State Drives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Junghee; Kim, Youngjae; Shipman, Galen M
2011-01-01
NAND flash memory is a preferred storage medium for various platforms ranging from embedded systems to enterprise-scale systems. Flash devices do not have any mechanical moving parts and provide low-latency access. They also require less power compared to rotating media. Unlike hard disks, flash devices use out-of-place update operations, and they require a garbage collection (GC) process to reclaim invalid pages and create free blocks. This GC process is a major cause of performance degradation when running concurrently with other I/O operations, as internal bandwidth is consumed to reclaim these invalid pages. The invocation of the GC process is generally governed by a low watermark on free blocks and other internal device metrics that different workloads meet at different intervals. This results in I/O performance that is highly dependent on workload characteristics. In this paper, we examine the GC process and propose a semi-preemptive GC scheme that can preempt on-going GC processing and service pending I/O requests in the queue. Moreover, we further enhance flash performance by pipelining internal GC operations and merging them with pending I/O requests whenever possible. Our experimental evaluation of this semi-preemptive GC scheme with realistic workloads demonstrates both improved performance and reduced performance variability. Write-dominant workloads show up to a 66.56% improvement in average response time with an 83.30% reduction in response-time variance compared to the non-preemptive GC scheme.
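The semi-preemptive idea, checking the pending I/O queue between page moves rather than finishing the whole reclamation first, can be sketched with a simple event loop; everything below is a toy model of the scheduling logic, not the authors' FTL implementation.

```python
from collections import deque

io_queue = deque(["read A", "write B", "write C"])  # pending host I/O (hypothetical)

def garbage_collect(valid_pages):
    """Copy valid pages out of a victim block, yielding between page moves."""
    for page in valid_pages:
        # ... the page would be copied to a free block here ...
        yield page  # preemption point after each page move

gc = garbage_collect([f"p{i}" for i in range(4)])
for moved in gc:
    print(f"GC moved {moved}")
    if io_queue:  # semi-preemptive: service a pending request before resuming GC
        print(f"  serviced '{io_queue.popleft()}' during GC")
print("GC finished; victim block can now be erased")
```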
Ittenbach, Richard F; Baker, Cynthia L; Corsmo, Jeremy J
2014-05-01
Standard operating procedures (SOPs) were once considered the province of the pharmaceutical industry but are now viewed as a key component of quality assurance programs. To address variability and increase the rigor of clinical data management (CDM) operations, the Cincinnati Children's Hospital Medical Center (CCHMC) decided to create CDM SOPs. In response to this challenge, and as part of a broader institutional initiative, the CCHMC leadership established an executive steering committee to oversee the development and implementation of CDM SOPs. This resulted in the creation of a quality assurance review process with three review panels: an SOP development team (16 clinical data managers and technical staff members), a faculty review panel (8 senior faculty and administrators), and an expert advisory panel (3 national CDM experts). This innovative, tiered review process helped ensure that the new SOPs would be created and implemented in accord with good CDM practices and standards. Twelve fully vetted, institutionally endorsed SOPs and one CDM template resulted from the intensive, iterative 10-month process (December 2011 to early October 2012). Phased implementation, which incorporated the CDM SOPs into the existing audit process for certain types of clinical research studies, was on schedule at the time of this writing. Once CCHMC researchers have had the opportunity to use the SOPs over time and across a broad range of research settings and conditions, the SOPs will be revisited and revalidated.
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economical incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: First, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation. Second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
NASA Astrophysics Data System (ADS)
Sonam; Jain, Vikrant
2018-03-01
Long profiles of rivers provide a platform to analyse the interaction between geological and geomorphic processes operating at different time scales. Identification of an appropriate model for a river long profile is therefore important in order to establish a quantitative relationship between the profile shape, its geomorphic effectiveness, and the inherent geological characteristics. This work highlights the variability in the long profile shapes of the Ganga River and its major tributaries, its impact on the stream power distribution pattern, and the role of geological controls. Long profile shapes are represented by the sum of two exponential functions obtained through curve fitting. We show that the coefficients of the long profile equations are governed by the geological characteristics of the subbasins. These equations further define the spatial distribution pattern of stream power and help explain stream power variability in different geological terrains. The spatial distribution of stream power in different geological terrains successfully explains the spatial variability of geomorphic processes within the Himalayan hinterland area. In general, the stream power peaks of larger rivers lie in the Higher Himalaya, and rivers in the eastern hinterland area are characterised by the highest magnitude of stream power.
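Representing a long profile as the sum of two exponentials is a routine curve-fitting exercise; the sketch below uses scipy.optimize.curve_fit on a synthetic profile, with all coefficients hypothetical rather than fitted Ganga values.

```python
import numpy as np
from scipy.optimize import curve_fit

def profile(x, a, b, c, d):
    """Elevation as a sum of two exponential decays of downstream distance x."""
    return a * np.exp(-b * x) + c * np.exp(-d * x)

x = np.linspace(0, 2000, 200)                 # downstream distance (km)
true = profile(x, 4000, 0.01, 800, 0.001)     # hypothetical "observed" profile (m)
z = true + np.random.default_rng(3).normal(0, 20, x.size)

popt, _ = curve_fit(profile, x, z, p0=[3000, 0.005, 500, 0.0005])
print("fitted coefficients:", popt.round(5))
```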
Method for assessing motor insulation on operating motors
Kueck, John D.; Otaduy, Pedro J.
1997-01-01
A method for monitoring the condition of electrical-motor-driven devices. The method is achieved by monitoring electrical variables associated with the functioning of an operating motor, applying these electrical variables to a three phase equivalent circuit and determining non-symmetrical faults in the operating motor based upon symmetrical components analysis techniques.
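Symmetrical-components analysis decomposes the three phase currents into zero-, positive-, and negative-sequence parts; the textbook computation is sketched below with hypothetical phasors, where a growing negative-sequence fraction flags an asymmetry of the kind the method monitors.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)                  # 120-degree rotation operator
A_inv = (1 / 3) * np.array([[1, 1, 1],
                            [1, a, a**2],
                            [1, a**2, a]])

# Hypothetical phase-current phasors from an operating motor (amps)
Ia, Ib, Ic = 10.0, 9.5 * a**2, 10.4 * a     # slightly unbalanced three-phase set
I0, I1, I2 = A_inv @ np.array([Ia, Ib, Ic])

# A healthy, balanced machine has I0 and I2 near zero; growth in |I2|/|I1|
# indicates asymmetry such as an insulation or winding fault.
print(f"|I1| = {abs(I1):.2f} A, |I2|/|I1| = {abs(I2) / abs(I1):.3%}")
```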
A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models
NASA Astrophysics Data System (ADS)
Brugnach, M.; Neilson, R.; Bolte, J.
2001-12-01
The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized, in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the relationship input-output. Since, these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis to be applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence in the output are identified, the causes of its variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results and it provides information that allows exploration of uncertainty at the process level, and how it might affect model output. We present an example using the vegetation model BIOME-BGC.
Orlandini, S; Pasquini, B; Stocchero, M; Pinzauti, S; Furlanetto, S
2014-04-25
The development of a capillary electrophoresis (CE) method for the assay of almotriptan (ALM) and its main impurities using an integrated Quality by Design and mixture-process variable (MPV) approach is described. A scouting phase was initially carried out by evaluating different CE operative modes, including the addition of pseudostationary phases and additives to the background electrolyte, in order to approach the analytical target profile. This step made it possible to select normal-polarity microemulsion electrokinetic chromatography (MEEKC) as the operative mode, which achieved good selectivity in a short analysis time. On the basis of a general Ishikawa diagram for MEEKC methods, an asymmetric screening matrix was applied to screen the effects of the process variables (PVs) voltage, temperature, buffer concentration and buffer pH on the critical quality attributes (CQAs), represented by critical separation values and analysis time. A response surface study was then carried out considering all the critical process parameters, including both the PVs and the mixture components (MCs) of the microemulsion (borate buffer, n-heptane as oil, sodium dodecyl sulphate/n-butanol as surfactant/cosurfactant). The values of the PVs and MCs were changed simultaneously in an MPV study, making it possible to find significant interaction effects. The design space (DS) was defined as the multidimensional combination of PVs and MCs where the probability for each of the considered CQAs to be acceptable was higher than a quality level π = 90%. The DS was identified by risk-of-failure maps, drawn on the basis of Monte Carlo simulations, and verification points spanning the design space were tested. Robustness testing of the method, performed by a D-optimal design, and system suitability criteria allowed a control strategy to be designed. The optimized method was validated following ICH Guideline Q2(R1) and was applied to a real sample of ALM coated tablets.
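The risk-of-failure mapping step admits a compact illustration. The sketch below uses an invented response surface for a single CQA (critical resolution) over two coded process variables; the coefficients and their uncertainties are assumptions, and the Monte Carlo probability-of-acceptability rule mirrors the π = 90% quality level described above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fitted response surface for one CQA as a function of two
# coded process variables (e.g., voltage x1 and buffer pH x2). The
# coefficients and standard errors are invented; in practice they would
# come from the regression on the MPV design.
beta = np.array([2.1, 0.35, -0.20, -0.15])   # intercept, x1, x2, x1*x2
se = np.array([0.10, 0.05, 0.05, 0.04])      # coefficient uncertainty

def cqa(x1, x2, b):
    return b[0] + b[1] * x1 + b[2] * x2 + b[3] * x1 * x2

def prob_acceptable(x1, x2, threshold=1.5, n_sim=10_000):
    # Propagate coefficient uncertainty by Monte Carlo simulation and
    # estimate the probability that the CQA meets its acceptance limit.
    b_draws = rng.normal(beta, se, size=(n_sim, beta.size))
    return np.mean(cqa(x1, x2, b_draws.T) >= threshold)

# A grid point belongs to the design space if the estimated probability
# of acceptability exceeds the chosen quality level (90% here).
for x1 in (-1.0, 0.0, 1.0):
    for x2 in (-1.0, 0.0, 1.0):
        p = prob_acceptable(x1, x2)
        print(f"x1={x1:+.0f} x2={x2:+.0f}  P(CQA ok)={p:.2f}"
              f"  {'in DS' if p >= 0.90 else 'out'}")
```

With several CQAs, the same rule is applied jointly (all acceptance criteria met in a given draw), and the boundary of the region where the joint probability exceeds 90% traces the risk-of-failure map.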
NASA Astrophysics Data System (ADS)
Collins, P. C.; Koduri, S.; Dixit, V.; Fraser, H. L.
2018-03-01
The fracture toughness of a material depends upon the material's composition and microstructure, as well as other material properties operating at the continuum level. The interrelationships between these variables are complex, and thus difficult to interpret, especially in multi-component, multi-phase ductile engineering alloys such as α/β-processed Ti-6Al-4V (nominal composition, wt pct). Neural networks have been used to elucidate how variables such as composition and microstructure influence the fracture toughness directly (i.e., via a crack initiation or propagation mechanism), independent of the influence of the same variables on the yield strength and plasticity of the material. The variables included in the models and analysis are: (i) alloy composition, specifically Al, V, O, and Fe; (ii) material microstructure, including phase fractions and average sizes of key microstructural features; (iii) the yield strength and reduction in area obtained from uniaxial tensile tests; and (iv) an assessment of the degree to which plane-strain conditions were satisfied, captured by a factor related to the plane-strain thickness. Once trained, the networks were used to conduct virtual experiments that determine the functional dependence of the resulting fracture toughness on each variable. Given that the database includes both K1C and KQ values, as well as the in-plane component of the stress state at the crack tip, it is possible to quantitatively assess the effect of sample thickness on KQ and the degree to which the KQ and K1C values may vary. The interpretations drawn by comparing multiple neural networks have a significant impact on the general understanding of how microstructure influences fracture toughness in ductile materials, and provide the ability to predict the fracture toughness of α/β-processed Ti-6Al-4V.
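The virtual-experiment idea can be sketched with a small multilayer perceptron. The data below are synthetic (the generating relation is invented so that the example runs) and the feature list only mimics the categories of variables described above; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the database: composition (Al, V, O, Fe), one
# microstructural feature, yield strength, and a plane-strain factor.
n = 400
X = np.column_stack([
    rng.normal(6.0, 0.3, n),    # Al, wt pct
    rng.normal(4.0, 0.3, n),    # V, wt pct
    rng.normal(0.18, 0.03, n),  # O, wt pct
    rng.normal(0.20, 0.05, n),  # Fe, wt pct
    rng.normal(1.5, 0.4, n),    # alpha-lath thickness, um
    rng.normal(900, 50, n),     # yield strength, MPa
    rng.uniform(0.5, 1.0, n),   # plane-strain constraint factor
])
K = (60 - 100 * (X[:, 2] - 0.18) - 0.03 * (X[:, 5] - 900)
     + 5 * X[:, 4] - 10 * X[:, 6] + rng.normal(0, 2, n))  # invented relation

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
model.fit(X, K)

# "Virtual experiment": sweep one variable (oxygen) with the others held
# at their means, to read off its functional effect on toughness.
base = X.mean(axis=0)
for o in (0.12, 0.18, 0.24):
    x = base.copy()
    x[2] = o
    print(f"O = {o:.2f} wt pct -> predicted K = {model.predict([x])[0]:.1f}")
```

Repeating such one-variable sweeps across independently trained networks, as the paper describes, separates robust functional dependencies from artifacts of any single fit.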
Optimisation of Noosa BNR plant to improve performance and reduce operating costs.
Thomas, M; Wright, P; Blackall, L; Urbain, V; Keller, J
2003-01-01
Noosa WWTP is publicly owned and privately operated by Australian Water Services. The process includes primary sedimentation, raw sludge fermentation, biological nutrient removal (BNR), sand filtration and ultraviolet (UV) disinfection. An innovative feature of the plant is the supplementary carbon dosing facility, which avoids the use of metal salts (alum or ferric) for phosphorus removal. The average flow treated during 2000 was 9.0 ML/d. The annual 50th percentile effluent quality requirements for nutrients are total N < 5 mg/L and total P < 1 mg/L. The objectives of this project were to determine the cause of variability in phosphorus removal, develop a strategy to control that variability, and minimise the operating cost of supplementary carbon dosing while achieving the effluent quality requirements. An investigation of chemical and microbiological parameters concluded that several factors, rather than a single cause, were responsible for the variability in phosphorus removal. The following four major causes were identified, and the control strategies adopted resulted in the plant achieving annual 50th percentile effluent total P = 0.37 mg/L and total N = 3.0 mg/L during 2001. First, phosphorus removal was limited by the available VFA supply, both because organisms competing with phosphate accumulating organisms (PAO) consumed VFA and because of diurnal variations in the sewage VFA and phosphate concentrations. Supplementary carbon dosing was therefore essential to allow for the competing reactions. Second, increasing the fermenter VFA yield via supplementary carbon dosing with molasses was found to be an effective and economic way of ensuring reliable phosphorus removal. Third, nitrate in the RAS resulted in consumption of VFA by denitrifying bacteria, particularly in process configurations where the RAS was recycled directly into the anaerobic zone; incorporating a RAS denitrification zone into the process rectified this problem. Finally, glycogen accumulating organisms (GAO) were observed in BNR sludge samples, and consumption of VFA by GAO appeared to reduce phosphorus removal. Better phosphorus removal was obtained using VFA derived from the fermenter than by dosing an equivalent amount of acetic acid. It was hypothesized that GAO have a competitive advantage in using acetate, whereas PAO have a competitive advantage in using propionate, butyrate or some other soluble COD compound in the fermenter effluent. Contrary to popular belief, acetate may not be the optimum VFA for biological phosphorus removal. The competition between PAO and GAO for different VFA species under anaerobic conditions requires further investigation in order to control the growth of GAO and thereby improve the reliability of biological phosphorus removal processes.
High-Dimensional Bayesian Geostatistics
Banerjee, Sudipto
2017-01-01
With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations whose complexity increases as the cube of the number of spatial locations and time points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined, highly scalable spatiotemporal stochastic processes. Both of these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating-point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920
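The first (low-rank) construction is easy to illustrate. The sketch below builds a predictive-process-style rank-m approximation to an exponential covariance using m knot locations; the covariance parameters and locations are invented, and the sketch shows only the linear-algebra shortcut, not the full hierarchical Bayesian model.

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_cov(s1, s2, sigma2=1.0, phi=3.0):
    # Exponential covariance between two sets of 2-D locations.
    d = np.linalg.norm(s1[:, None, :] - s2[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

n, m = 2000, 50
locs = rng.uniform(0, 1, size=(n, 2))   # observed spatial locations
knots = rng.uniform(0, 1, size=(m, 2))  # knot locations, m << n

C_nm = exp_cov(locs, knots)             # n x m cross-covariance
C_mm = exp_cov(knots, knots)            # m x m knot covariance

# Low-rank approximation C ~ C_nm C_mm^{-1} C_mn: only the m x m matrix
# is factorized, cutting the O(n^3) cost to O(n m^2).
L = np.linalg.cholesky(C_mm + 1e-8 * np.eye(m))
W = np.linalg.solve(L, C_nm.T)          # W.T @ W == C_nm C_mm^{-1} C_mn

# Compare a few entries of the approximation against the exact covariance.
C_exact = exp_cov(locs[:5], locs[:5])
C_approx = W.T[:5] @ W[:, :5]
print(np.round(C_exact, 2))
print(np.round(C_approx, 2))
```

The NNGP alternative achieves scalability differently, by conditioning each location on a small set of nearest neighbors so the precision matrix, rather than the covariance, is sparse.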
Operations management tools to be applied for textile
NASA Astrophysics Data System (ADS)
Maralcan, A.; Ilhan, I.
2017-10-01
In this paper, basic concepts of process analysis such as flow time, inventory, bottleneck, labour cost and utilization are illustrated first. The effect of a bottleneck on the results of a business is especially emphasized. In the next section, tools for productivity measurement are introduced and exemplified: the KPI (Key Performance Indicator) tree, OEE (Overall Equipment Effectiveness) and takt time. A KPI tree is a diagram on which all the variables of an operation that drive financial results through cost and profit can be visualized. OEE is a tool to measure the potential extra capacity of a piece of equipment or an employee. Takt time is a tool to set the process flow rate according to customer demand. The KPI tree is studied across the whole process, while OEE is exemplified for a stenter frame machine, which is the most important machine (and usually the bottleneck) and the most expensive investment in a finishing plant; takt time is exemplified for the quality control department, and a worked calculation of both metrics is sketched below. Finally, the quality tools six sigma, control charts and jidoka are introduced. Six sigma is a tool to measure process capability and thereby the probability of a defect. A control chart is a powerful tool to monitor the process. The idea of jidoka (detect, stop and alert) is about alerting people that there is a problem in the process.
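Both OEE and takt time reduce to one-line formulas. The sketch below uses invented shop-floor numbers for a stenter frame shift; the figures are assumptions for illustration, not data from the paper.

```python
# OEE and takt time, the two productivity metrics described above.

def oee(planned_time, downtime, ideal_cycle_time, units_produced, good_units):
    """Overall Equipment Effectiveness = availability x performance x quality."""
    run_time = planned_time - downtime
    availability = run_time / planned_time
    performance = (ideal_cycle_time * units_produced) / run_time
    quality = good_units / units_produced
    return availability * performance * quality

def takt_time(available_time, customer_demand):
    """Pace at which the process must produce to exactly meet demand."""
    return available_time / customer_demand

# Hypothetical shift: 480 planned minutes, 45 min of stops, ideal cycle
# of 0.9 min/roll, 430 rolls produced of which 410 passed inspection.
print(f"OEE  = {oee(480, 45, 0.9, 430, 410):.1%}")

# 27,600 s of available time per shift against a demand of 400 rolls.
print(f"takt = {takt_time(27600, 400):.0f} s per roll")
```

The gap between the computed OEE (roughly 77% here) and 100% quantifies the "potential extra capacity" the abstract refers to, split across availability, performance and quality losses.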