Guiding gate-etch process development using 3D surface reaction modeling for 7nm and beyond
NASA Astrophysics Data System (ADS)
Dunn, Derren; Sporre, John R.; Deshpande, Vaibhav; Oulmane, Mohamed; Gull, Ronald; Ventzek, Peter; Ranjan, Alok
2017-03-01
Increasingly, advanced process nodes such as 7nm (N7) are fundamentally 3D and require stringent control of critical dimensions over high aspect ratio features. Process integration in these nodes requires a deep understanding of complex physical mechanisms to control critical dimensions from lithography through final etch. Polysilicon gate etch processes are critical steps in several device architectures for advanced nodes that rely on self-aligned patterning approaches to gate definition. These processes are required to meet several key metrics: (a) vertical etch profiles over high aspect ratios; (b) clean gate sidewalls free of etch process residue; (c) minimal erosion of liner oxide films protecting key architectural elements such as fins; and (d) residue-free corners at gate interfaces with critical device elements. In this study, we explore how hybrid modeling approaches can be used to model a multi-step finFET polysilicon gate etch process. Initial parts of the patterning process through hardmask assembly are modeled using process emulation. Important aspects of gate definition are then modeled using a particle Monte Carlo (PMC) feature scale model that incorporates surface chemical reactions [1]. When necessary, species and energy flux inputs to the PMC model are derived from simulations of the etch chamber. The modeled polysilicon gate etch process consists of several steps including a hard mask breakthrough step (BT), main feature etch steps (ME), and over-etch steps (OE) that control gate profiles at the gate-fin interface. An additional constraint on this etch flow is that fin spacer oxides are left intact after final profile tuning steps. A natural optimization required from these processes is to maximize vertical gate profiles while minimizing erosion of fin spacer films [2].
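The feature-scale PMC model referenced above is not reproduced here; the toy Monte Carlo sketch below only illustrates the flux-and-yield bookkeeping such a model rests on. Everything in it is an assumption for illustration: the Steigerwald-type yield Y(E) = A(sqrt(E) - sqrt(E_th)), the normal ion-energy distributions, and the per-step energies and ion doses assigned to the BT/ME/OE steps.

```python
# Toy particle Monte Carlo sketch of ion-enhanced etching (not the authors' PMC model).
# Assumptions: yield Y(E) = A*(sqrt(E) - sqrt(E_th)), normal ion-energy distributions,
# and illustrative constants A, E_th, doses, and atoms-per-nm conversion.
import numpy as np

rng = np.random.default_rng(0)

def etched_depth(n_ions, e_mean_eV, e_sigma_eV, a=0.05, e_th_eV=20.0, atoms_per_nm=50.0):
    """Accumulate removed material over n_ions sampled ion energies."""
    energies = rng.normal(e_mean_eV, e_sigma_eV, n_ions)
    yields = np.clip(a * (np.sqrt(np.clip(energies, 0.0, None)) - np.sqrt(e_th_eV)), 0.0, None)
    return yields.sum() / atoms_per_nm  # crude conversion of sputtered atoms to nm of depth

# Breakthrough (BT), main etch (ME), and over-etch (OE) steps with different ion energies/doses
for step, (e_mean, n_ions) in {"BT": (300.0, 2_000), "ME": (150.0, 20_000), "OE": (60.0, 5_000)}.items():
    print(step, f"{etched_depth(n_ions, e_mean, 0.1 * e_mean):.1f} nm (toy numbers)")
```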
Teaching Statistics from the Operating Table: Minimally Invasive and Maximally Educational
ERIC Educational Resources Information Center
Nowacki, Amy S.
2015-01-01
Statistics courses that focus on data analysis in isolation, discounting the scientific inquiry process, may not motivate students to learn the subject. By involving students in other steps of the inquiry process, such as generating hypotheses and data, students may become more interested and vested in the analysis step. Additionally, such an…
A Selection Method That Succeeds!
ERIC Educational Resources Information Center
Weitman, Catheryn J.
Provided a structural selection method is carried out, it is possible to find quality early childhood personnel. The hiring process involves five definite steps, each of which establishes a base for the next. A needs assessment formulating basic minimal qualifications is the first step. The second step involves review of current job descriptions…
Direct write with microelectronic circuit fabrication
Drummond, T.; Ginley, D.
1988-05-31
In a process for deposition of material onto a substrate, for example, the deposition of metals or dielectrics onto a semiconductor laser, the material is deposited by providing a colloidal suspension of the material and directly writing the suspension onto the substrate surface by ink jet printing techniques. This procedure minimizes the handling requirements of the substrate during the deposition process and also minimizes the exchange of energy between the material to be deposited and the substrate at the interface. The deposited material is then resolved into a desired pattern, preferably by subjecting the deposit to a laser annealing step. The laser annealing step provides high resolution of the resultant pattern while minimizing the overall thermal load of the substrate and permitting precise control of interface chemistry and interdiffusion between the substrate and the deposit. 3 figs.
Direct write with microelectronic circuit fabrication
Drummond, Timothy; Ginley, David
1992-01-01
In a process for deposition of material onto a substrate, for example, the deposition of metals or dielectrics onto a semiconductor laser, the material is deposited by providing a colloidal suspension of the material and directly writing the suspension onto the substrate surface by ink jet printing techniques. This procedure minimizes the handling requirements of the substrate during the deposition process and also minimizes the exchange of energy between the material to be deposited and the substrate at the interface. The deposited material is then resolved into a desired pattern, preferably by subjecting the deposit to a laser annealing step. The laser annealing step provides high resolution of the resultant pattern while minimizing the overall thermal load of the substrate and permitting precise control of interface chemistry and interdiffusion between the substrate and the deposit.
Waste Management Decision-Making Process During a Homeland Security Incident Response
A step-by-step guide on how to make waste management-related decisions including how waste can be minimized, collected and treated, as well as where waste can be sent for staging, storage and final disposal.
Ötes, Ozan; Flato, Hendrik; Winderl, Johannes; Hubbuch, Jürgen; Capito, Florian
2017-10-10
The protein A capture step is the main cost-driver in downstream processing, with high attrition costs especially when protein A resin is not used to the end of its lifetime. Here we describe a feasibility study, transferring a batch downstream process to a hybrid process, aimed at replacing batch protein A capture chromatography with a continuous capture step, while leaving the polishing steps unchanged to minimize required process adaptations compared to a batch process. 35 g of antibody were purified using the hybrid approach, resulting in comparable product quality and step yield compared to the batch process. Productivity for the protein A step could be increased up to 420%, reducing buffer amounts by 30-40% and showing robustness for at least 48 h continuous run time. Additionally, to enable its potential application in a clinical trial manufacturing environment, cost of goods was compared for the protein A step between the hybrid process and the batch process, showing a 300% cost reduction, depending on processed volumes and batch cycles. Copyright © 2017 Elsevier B.V. All rights reserved.
Referent control and motor equivalence of reaching from standing
Tomita, Yosuke; Feldman, Anatol G.
2016-01-01
Motor actions may result from central changes in the referent body configuration, defined as the body posture at which muscles begin to be activated or deactivated. The actual body configuration deviates from the referent configuration, particularly because of body inertia and environmental forces. Within these constraints, the system tends to minimize the difference between these configurations. For pointing movement, this strategy can be expressed as the tendency to minimize the difference between the referent trajectory (RT) and actual trajectory (QT) of the effector (hand). This process may underlie motor equivalent behavior that maintains the pointing trajectory regardless of the number of body segments involved. We tested the hypothesis that the minimization process is used to produce pointing in standing subjects. With eyes closed, 10 subjects reached from a standing position to a remembered target located beyond arm length. In randomly chosen trials, hip flexion was unexpectedly prevented, forcing subjects to take a step during pointing to prevent falling. The task was repeated when subjects were instructed to intentionally take a step during pointing. In most cases, reaching accuracy and trajectory curvature were preserved due to adaptive condition-specific changes in interjoint coordination. Results suggest that referent control and the minimization process associated with it may underlie motor equivalence in pointing. NEW & NOTEWORTHY Motor actions may result from minimization of the deflection of the actual body configuration from the centrally specified referent body configuration, in the limits of neuromuscular and environmental constraints. The minimization process may maintain reaching trajectory and accuracy regardless of the number of body segments involved (motor equivalence), as confirmed in this study of reaching from standing in young healthy individuals. Results suggest that the referent control process may underlie motor equivalence in reaching. PMID:27784802
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Processing. 640.81 Section 640.81 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS ADDITIONAL...) Microbial contamination. All processing steps shall be conducted in a manner to minimize the risk of...
Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?
NASA Astrophysics Data System (ADS)
Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.
2018-01-01
Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.
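As a small companion to the abstract above, the sketch below generates a binary Markov process from its two transition probabilities and recovers them empirically; the parameter values are illustrative and the minimal-generator construction itself is not reproduced.

```python
# Sketch: generate a binary Markov chain from its two transition probabilities and
# recover them from the sample. p = P(X_t=1 | X_{t-1}=0), q = P(X_t=0 | X_{t-1}=1);
# the values used below are illustrative only.
import numpy as np

def generate_binary_markov(p, q, n, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(2)
    for t in range(1, n):
        u = rng.random()
        x[t] = (u < p) if x[t - 1] == 0 else (u >= q)
    return x

x = generate_binary_markov(p=0.3, q=0.4, n=100_000)
p_hat = np.mean(x[1:][x[:-1] == 0])        # empirical P(1|0)
q_hat = np.mean(x[1:][x[:-1] == 1] == 0)   # empirical P(0|1)
print(f"p_hat = {p_hat:.3f}, q_hat = {q_hat:.3f}")
```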
Speckle evolution with multiple steps of least-squares phase removal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Mingzhou; Dainty, Chris; Roux, Filippus S.
2011-08-15
We study numerically the evolution of speckle fields due to the annihilation of optical vortices after the least-squares phase has been removed. A process with multiple steps of least-squares phase removal is carried out to minimize both vortex density and scintillation index. Statistical results show that almost all the optical vortices can be removed from a speckle field, which finally decays into a quasiplane wave after such an iterative process.
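The multi-step procedure itself is not given in the abstract; the sketch below shows one plausible building block, a standard DCT-based (Ghiglia/Romero-style) least-squares phase reconstruction from wrapped phase gradients, whose removal from a synthetic speckle field leaves the rotational (vortex-carrying) part. Field size, filtering, and all parameters are assumptions for illustration.

```python
# Sketch of one least-squares phase-removal step on a synthetic speckle field.
# Not the authors' implementation; the DCT Poisson solver is the standard
# least-squares (irrotational) phase reconstruction from wrapped gradients.
import numpy as np
from scipy.fft import dctn, idctn

def wrap(a):
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def least_squares_phase(psi):
    """Least-squares phase consistent with the wrapped gradients of psi (Neumann boundaries)."""
    m, n = psi.shape
    dx = np.zeros_like(psi)
    dy = np.zeros_like(psi)
    dx[:, :-1] = wrap(np.diff(psi, axis=1))
    dy[:-1, :] = wrap(np.diff(psi, axis=0))
    rho = (dx - np.roll(dx, 1, axis=1)) + (dy - np.roll(dy, 1, axis=0))  # divergence of gradients
    rho_hat = dctn(rho, norm="ortho")
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    denom = 2.0 * np.cos(np.pi * i / m) + 2.0 * np.cos(np.pi * j / n) - 4.0
    denom[0, 0] = 1.0                      # avoid division by zero; the mean phase is arbitrary
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0
    return idctn(phi_hat, norm="ortho")

# Band-limited complex Gaussian field as a stand-in speckle pattern
rng = np.random.default_rng(1)
spec = rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128))
fx, fy = np.meshgrid(np.fft.fftfreq(128), np.fft.fftfreq(128))
field = np.fft.ifft2(spec * (np.hypot(fx, fy) < 0.1))

# Removing the least-squares phase leaves the vortex-carrying residual field
residual = field * np.exp(-1j * least_squares_phase(np.angle(field)))
```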
The Touro 12-Step: a systematic guide to optimizing survey research with online discussion boards.
Ip, Eric J; Barnett, Mitchell J; Tenerowicz, Michael J; Perry, Paul J
2010-05-27
The Internet, in particular discussion boards, can provide a unique opportunity for recruiting participants in online research surveys. Despite its outreach potential, there are significant barriers which can limit its success. Trust, participation, and visibility issues can all hinder the recruitment process; the Touro 12-Step was developed to address these potential hurdles. By following this step-by-step approach, researchers will be able to minimize these pitfalls and maximize their recruitment potential via online discussion boards.
Procedure for minimizing the cost per watt of photovoltaic systems
NASA Technical Reports Server (NTRS)
Redfield, D.
1977-01-01
A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces that same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
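A one-line worked example of the closing result, under made-up cost and power numbers: a fractional power loss introduced at a cell- or array-fabrication step raises the array's cost per watt by (to first order) the same fraction.

```python
# Worked example of the stated result: a fractional power loss at any cell- or
# array-fabrication step raises the cost per watt of the finished array by the same
# fraction (exactly loss/(1-loss)). The cost and power numbers are invented.
array_cost = 1000.0      # $ for the finished array
rated_power = 500.0      # W with no added loss

for loss in (0.00, 0.02, 0.05, 0.10):
    cpw = array_cost / (rated_power * (1.0 - loss))
    increase = cpw / (array_cost / rated_power) - 1.0
    print(f"loss={loss:4.0%}  cost/W=${cpw:.3f}  increase={increase:5.1%}")
# For small losses the increase matches the loss (2% -> ~2.0%, 5% -> ~5.3%).
```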
Patterning control strategies for minimum edge placement error in logic devices
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Hanna, Michael; Slachter, Bram; Tel, Wim; Kubis, Michael; Maslow, Mark; Spence, Chris; Timoshkov, Vadim
2017-03-01
In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. We describe the fidelity of the final pattern in terms of EPE, which is defined as the relative displacement of the edges of two features from their intended target position. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow as developed by IMEC. This patterning process is based on Self-Aligned-Quadruple-Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It will be shown that ArF to EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features, are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle resolved scatterometry, scanner actuator control to enable high order overlay corrections and computational lithography optimization to minimize imaging induced pattern placement errors of devices and metrology targets.
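The paper's own budget breakdown is more detailed than the abstract states; the sketch below only illustrates the common simplification of combining the named contributors (overlay, CDU, local CD/placement) as a root-sum-square, with invented 3-sigma values.

```python
# Simplified EPE budget sketch: combine overlay, CDU, and local CD/placement terms as a
# root-sum-square. Contributor names and 3-sigma values are illustrative, not the paper's
# actual budget.
import math

contributors_nm = {
    "ArF-to-EUV overlay": 2.5,
    "SAQP CDU": 1.5,
    "EUV cut CDU": 1.8,
    "local CD / placement (stochastics)": 2.0,
}

epe = math.sqrt(sum(v ** 2 for v in contributors_nm.values()))
for name, v in contributors_nm.items():
    print(f"{name:38s} {v:4.1f} nm ({(v / epe) ** 2:5.1%} of variance)")
print(f"{'combined EPE (RSS)':38s} {epe:4.1f} nm")
```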
Uyttendaele, M; Neyts, K; Vanderswalmen, H; Notebaert, E; Debevere, J
2004-02-01
Aeromonas is an opportunistic pathogen, which, although in low numbers, may be present on minimally processed vegetables. Although the intrinsic and extrinsic factors of minimally processed prepacked vegetable mixes are not inhibitory to the growth of Aeromonas species, multiplication to high numbers during processing and storage of naturally contaminated grated carrots, mixed lettuce, and chopped bell peppers was not observed. Aeromonas was shown to be resistant towards chlorination of water, but was susceptible to 1% and 2% lactic acid and 0.5% and 1.0% thyme essential oil treatment, although the latter provoked adverse sensory properties when applied for decontamination of chopped bell peppers. Integration of a decontamination step with 2% lactic acid in the processing line of grated carrots was shown to have the potential to control the overall microbial quality of the grated carrots and was particularly effective towards Aeromonas.
Advanced imaging programs: maximizing a multislice CT investment.
Falk, Robert
2008-01-01
Advanced image processing has moved from a luxury to a necessity in the practice of medicine. A hospital's adoption of sophisticated 3D imaging entails several important steps with many factors to consider in order to be successful. Like any new hospital program, 3D post-processing should be introduced through a strategic planning process that includes administrators, physicians, and technologists to design, implement, and market a program that is scalable, one that minimizes up-front costs while providing top-level service. This article outlines the steps for planning, implementation, and growth of an advanced imaging program.
The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns
NASA Astrophysics Data System (ADS)
Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo
Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.
Neural networks for vertical microcode compaction
NASA Astrophysics Data System (ADS)
Chu, Pong P.
1992-09-01
Neural networks provide an alternative way to solve complex optimization problems. Instead of performing a program of instructions sequentially as in a traditional computer, a neural network model explores many competing hypotheses simultaneously using its massively parallel net. The paper shows how to use the neural network approach to perform vertical microcode compaction for a micro-programmed control unit. The compaction procedure includes two basic steps. The first step determines the compatibility classes and the second step selects a minimal subset to cover the control signals. Since the selection process is an NP-complete problem, finding an optimal solution is impractical. In this study, we employ a customized neural network to obtain the minimal subset. We first formalize this problem, and then define an "energy function" and map it to a two-layer fully connected neural network. The modified network has two types of neurons and can always obtain a valid solution.
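The second step described above, selecting a minimal subset of compatibility classes that covers all control signals, is an instance of set cover. As a simple stand-in for the paper's customized Hopfield-style network (which is not reproduced here), the sketch uses the classical greedy approximation on an invented example.

```python
# The second compaction step is a set-cover problem: choose a minimal subset of
# compatibility classes covering every control signal. Greedy approximation shown here
# as a stand-in for the paper's neural-network formulation; data is made up.
def greedy_cover(universe, classes):
    """classes: dict name -> set of covered control signals."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(classes, key=lambda c: len(classes[c] & uncovered))
        if not classes[best] & uncovered:
            raise ValueError("some control signals cannot be covered")
        chosen.append(best)
        uncovered -= classes[best]
    return chosen

signals = {"s1", "s2", "s3", "s4", "s5"}
compat_classes = {"A": {"s1", "s2"}, "B": {"s2", "s3", "s4"}, "C": {"s4", "s5"}, "D": {"s1", "s5"}}
print(greedy_cover(signals, compat_classes))   # e.g. ['B', 'D']
```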
An optimal open/closed-loop control method with application to a pre-stressed thin duralumin plate
NASA Astrophysics Data System (ADS)
Nadimpalli, Sruthi Raju
The suppression of excessive vibrations of a pre-stressed duralumin plate by a combination of open-loop and closed-loop controls, also known as open/closed-loop control, is studied in this thesis. The two primary steps involved in this process are: Step (I), under the assumption that the closed-loop control law is proportional, obtain the optimal open-loop control by direct minimization of the performance measure, consisting of the energy at terminal time and a penalty on the open-loop control force, via calculus of variations; if the performance measure also involves a penalty on closed-loop control effort, then a Fourier-based method is utilized. Step (II), the energy at terminal time is minimized numerically to obtain optimal values of the feedback gains. The optimal closed-loop control gains obtained are used to describe the displacement and velocity of the open-loop, closed-loop, and open/closed-loop controlled duralumin plate.
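A plausible way to write the two-step objective described above, with notation assumed rather than taken from the thesis (E is the plate energy at terminal time t_f, u_o the open-loop control, u_c the proportional closed-loop control with gains G, and r, s penalty weights):

```latex
% Step I: open-loop control from calculus of variations (or a Fourier-based method when s > 0)
\min_{u_o(\cdot)} \; J(u_o; G) \;=\; E(t_f) \;+\; \frac{r}{2}\int_{0}^{t_f} u_o^{2}(t)\,dt
\;\;\Bigl[\; +\, \frac{s}{2}\int_{0}^{t_f} u_c^{2}(t)\,dt \;\Bigr],
\qquad
% Step II: numerical minimization of the terminal energy over the feedback gains
\min_{G} \; E(t_f).
```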
Rushford, Michael C.
2002-01-01
An optical monitoring instrument monitors etch depth and etch rate for controlling a wet-etching process. The instrument provides means for viewing through the back side of a thick optic onto a nearly index-matched interface. Optical baffling and the application of a photoresist mask minimize spurious reflections to allow for monitoring with extremely weak signals. A Wollaston prism enables linear translation for phase stepping.
2002-01-01
The National Elevation Dataset (NED) is a new raster product assembled by the U.S. Geological Survey. NED is designed to provide National elevation data in a seamless form with a consistent datum, elevation unit, and projection. Data corrections were made in the NED assembly process to minimize artifacts, perform edge matching, and fill sliver areas of missing data. NED has a resolution of one arc-second (approximately 30 meters) for the conterminous United States, Hawaii, Puerto Rico and the island territories and a resolution of two arc-seconds for Alaska. NED data sources have a variety of elevation units, horizontal datums, and map projections. In the NED assembly process the elevation values are converted to decimal meters as a consistent unit of measure, NAD83 is consistently used as horizontal datum, and all the data are recast in a geographic projection. Older DEM's produced by methods that are now obsolete have been filtered during the NED assembly process to minimize artifacts that are commonly found in data produced by these methods. Artifact removal greatly improves the quality of the slope, shaded-relief, and synthetic drainage information that can be derived from the elevation data. Figure 2 illustrates the results of this artifact removal filtering. NED processing also includes steps to adjust values where adjacent DEM's do not match well, and to fill sliver areas of missing data between DEM's. These processing steps ensure that NED has no void areas and artificial discontinuities have been minimized. The artifact removal filtering process does not eliminate all of the artifacts. In areas where the only available DEM is produced by older methods, then "striping" may still occur.
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
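A one-dimensional toy version of the artifact is easy to reproduce: when both signals live on the same pixel grid, integer-lag correlation followed by parabolic peak interpolation is biased toward integer offsets, producing the stair-step relation between true and estimated shift. The signal, lags, and interpolation method below are illustrative, not the GOES-R/Landsat processing chain.

```python
# 1-D illustration of the stair-step artifact: with both signals on the same pixel grid,
# parabolic interpolation of the correlation peak pulls estimates toward integer offsets.
# Synthetic data only.
import numpy as np

def estimate_shift(ref, img):
    """Integer-lag cross-correlation plus 3-point parabolic peak interpolation."""
    lags = np.arange(-5, 6)
    corr = np.array([np.dot(ref, np.roll(img, -k)) for k in lags])
    k = int(np.argmax(corr))
    if 0 < k < len(lags) - 1:
        num = corr[k - 1] - corr[k + 1]
        den = 2.0 * (corr[k - 1] - 2.0 * corr[k] + corr[k + 1])
        return lags[k] + num / den
    return float(lags[k])

x = np.arange(512)
ref = np.exp(-0.5 * ((x - 256) / 20.0) ** 2)          # smooth feature on the reference grid

for true_shift in np.linspace(0.0, 2.0, 9):
    # shift by a subpixel amount in the Fourier domain, resampled on the same pixel grid
    shifted = np.fft.ifft(np.fft.fft(ref) *
                          np.exp(-2j * np.pi * np.fft.fftfreq(x.size) * true_shift)).real
    print(f"true {true_shift:4.2f} -> estimated {estimate_shift(ref, shifted):5.3f}")
```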
Bacterial Stressors in Minimally Processed Food
Capozzi, Vittorio; Fiocco, Daniela; Amodio, Maria Luisa; Gallone, Anna; Spano, Giuseppe
2009-01-01
Stress responses are of particular importance to microorganisms, because their habitats are subjected to continual changes in temperature, osmotic pressure, and nutrient availability. Stressors (and stress factors) may be of a chemical, physical, or biological nature. While stress to microorganisms is frequently caused by the surrounding environment, the growth of microbial cells on its own may also result in the induction of some kinds of stress, such as starvation and acidity. During production of fresh-cut produce, cumulative mild processing steps are employed to control the growth of microorganisms. Pathogens on plant surfaces are already stressed, and stress may be increased during the multiple mild processing steps, potentially leading to very hardy bacteria geared towards enhanced survival. Cross-protection can occur because the overlapping stress responses enable bacteria exposed to one stress to become resistant to another stress. A number of stresses have been shown to induce cross-protection, including heat, cold, acid and osmotic stress. Among other factors, adaptation to heat stress appears to provide bacterial cells with more pronounced cross-protection against several other stresses. Understanding how pathogens sense and respond to mild stresses is essential in order to design safe and effective minimal processing regimes. PMID:19742126
Antimicrobial packaging for fresh-cut fruits
USDA-ARS?s Scientific Manuscript database
Fresh-cut fruits are minimally processed produce which are consumed directly at their fresh stage without any further kill step. Microbiological quality and safety are major challenges to fresh-cut fruits. Antimicrobial packaging is one of the innovative food packaging systems that is able to kill o...
Torre, Michele; Digka, Nikoletta; Anastasopoulou, Aikaterini; Tsangaris, Catherine; Mytilineou, Chryssi
2016-12-15
Research studies on the effects of microlitter on marine biota have become increasingly frequent in the last few years. However, there is strong evidence that scientific results based on microlitter analyses can be biased by contamination from air-transported fibres. This study demonstrates a low-cost and easy-to-apply methodology to minimize the background contamination and thus to increase the validity of results. The contamination during the gastrointestinal content analysis of 400 fishes was tested for several sample processing steps at high risk of airborne contamination (e.g. dissection, stereomicroscopic analysis, and chemical digestion treatment for microlitter extraction). It was demonstrated that, using our methodology based on hermetic enclosure devices isolating the working areas during the various processing steps, airborne contamination was reduced by 95.3%. The simplicity and low cost of this methodology provide the benefit that it could be applied not only to laboratory but also to field or on-board work. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Use of EPI-Splines to Model Empirical Semivariograms for Optimal Spatial Estimation
2016-09-01
proliferation of unmanned systems in military and civilian sectors has occurred at lightning speed. In the case of Autonomous Underwater Vehicles or...SLAM is a method of position estimation that relies on map data [3]. In this process, the creation of the map occurs as the vehicle is navigating the...that ensures minimal errors. This technique is accomplished in two steps. The first step is creation of the semivariogram. The semivariogram is a
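The snippet above names creation of the semivariogram as the first step; a minimal sketch of an empirical semivariogram, gamma(h) = 1/(2 N(h)) * sum (z_i - z_j)^2 over distance bins, is shown below on synthetic data (coordinates, field, and bin edges are all invented).

```python
# Minimal empirical-semivariogram sketch on synthetic 2-D point data.
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2     # per-pair semivariance
    iu = np.triu_indices(len(values), k=1)                   # each pair once, no self-pairs
    d, sq = d[iu], sq[iu]
    centers, gamma = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (d >= lo) & (d < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(sq[mask].mean())
    return np.array(centers), np.array(gamma)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(300, 2))
values = np.sin(coords[:, 0] / 15.0) + 0.1 * rng.normal(size=300)   # spatially correlated field
h, g = empirical_semivariogram(coords, values, np.linspace(0, 50, 11))
print(np.round(h, 1), np.round(g, 3))
```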
Mesoscopic homogenization of semi-insulating GaAs by two-step post growth annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffmann, B.; Jurisch, M.; Koehler, A.
1996-12-31
Mesoscopic homogenization of the electrical properties of s.i. LEC-GaAs is commonly realized by thermal treatment of the crystals including the steps of dissolution of arsenic precipitates, homogenization of excess As and re-precipitation by creating a controlled supersaturation. Caused by the inhomogeneous distribution of dislocations and the corresponding cellular structure along and across LEC-grown crystals a proper choice of the time-temperature program is necessary to minimize fluctuations of mesoscopic homogeneity. A modified two-step ingot annealing process is demonstrated to ensure the homogeneous distribution of mesoscopic homogeneity.
ISITE: Automatic Circuit Synthesis for Double-Metal CMOS VLSI (Very Large Scale Integrated) Circuits
1989-12-01
rows and columns should be minimized. There are two methodologies for achieving this objective, namely, logic minimization to ... P-type and N-type polysilicon (Figure 2.5(b)) and interconnecting the gates with metal at a later processing step. The two layers of aluminum available ... [OCR figure residue removed; recoverable caption: Figure 2.5, "Controlling the Threshold Voltage"]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Zhanying; Key Laboratory for Anisotropy and Texture of Materials, Northeastern University, Shenyang 110819, China,; Zhao, Gang
2015-04-15
The effect of two-step homogenization treatments on the precipitation behavior of Al₃Zr dispersoids was investigated by transmission electron microscopy (TEM) in 7150 alloys. Two-step treatments with the first step in the temperature range of 300–400 °C followed by the second step at 470 °C were applied during homogenization. Compared with the conventional one-step homogenization, both a finer particle size and a higher number density of Al₃Zr dispersoids were obtained with two-step homogenization treatments. The most effective dispersoid distribution was attained using the first step held at 300 °C. In addition, the two-step homogenization minimized the precipitate-free zones and greatly increased the number density of dispersoids near dendrite grain boundaries. The effect of two-step homogenization on recrystallization resistance of 7150 alloys with different Zr contents was quantitatively analyzed using the electron backscattered diffraction (EBSD) technique. It was found that the improved dispersoid distribution through the two-step treatment can effectively inhibit the recrystallization process during the post-deformation annealing for 7150 alloys containing 0.04–0.09 wt.% Zr, resulting in a remarkable reduction of the volume fraction and grain size of recrystallization grains. - Highlights: • Effect of two-step homogenization on Al₃Zr dispersoids was investigated by TEM. • Finer and higher number of dispersoids obtained with two-step homogenization. • Minimized the precipitate-free zones and improved the dispersoid distribution. • Recrystallization resistance with varying Zr content was quantified by EBSD. • Effectively inhibited recrystallization through two-step treatments in 7150 alloy.
NASA Astrophysics Data System (ADS)
Cavanaugh, C.; Gille, J.; Francis, G.; Nardi, B.; Hannigan, J.; McInerney, J.; Krinsky, C.; Barnett, J.; Dean, V.; Craig, C.
2005-12-01
The High Resolution Dynamics Limb Sounder (HIRDLS) instrument onboard the NASA Aura spacecraft experienced a rupture of the thermal blanketing material (Kapton) during the rapid depressurization of launch. The Kapton draped over the HIRDLS scan mirror, severely limiting the aperture through which HIRDLS views space and Earth's atmospheric limb. In order for HIRDLS to achieve its intended measurement goals, rapid characterization of the anomaly, and rapid recovery from it were required. The recovery centered around a new processing module inserted into the standard HIRDLS processing scheme, with a goal of minimizing the effect of the anomaly on the already existing processing modules. We describe the software infrastructure on which the new processing module was built, and how that infrastructure allows for rapid application development and processing response. The scope of the infrastructure spans three distinct anomaly recovery steps and the means for their intercommunication. Each of the three recovery steps (removing the Kapton-induced oscillation in the radiometric signal, removing the Kapton signal contamination upon the radiometric signal, and correcting for the partially-obscured atmospheric view) is completely modularized and insulated from the other steps, allowing focused and rapid application development towards a specific step, and neutralizing unintended inter-step influences, thus greatly shortening the design-development-test lifecycle. The intercommunication is also completely modularized and has a simple interface to which the three recovery steps adhere, allowing easy modification and replacement of specific recovery scenarios, thereby heightening the processing response.
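The sketch below only illustrates the kind of step isolation the abstract describes: each recovery step implements one small interface and steps communicate only through a shared record, so a step can be swapped without touching the others. All names are invented; this is not the HIRDLS processing code.

```python
# Architecture sketch: three isolated recovery steps sharing one simple interface.
from typing import Callable, Dict, List

Record = Dict[str, object]          # radiances plus whatever intermediate products steps add
Step = Callable[[Record], Record]   # the single interface every recovery step adheres to

def remove_oscillation(rec: Record) -> Record:
    return rec                       # placeholder: filter the Kapton-induced oscillation

def remove_kapton_signal(rec: Record) -> Record:
    return rec                       # placeholder: subtract the Kapton signal contamination

def correct_obscuration(rec: Record) -> Record:
    return rec                       # placeholder: correct the partially obscured view

def run_pipeline(rec: Record, steps: List[Step]) -> Record:
    for step in steps:
        rec = step(rec)              # inter-step communication is just the shared record
    return rec

result = run_pipeline({"radiance": [1.0, 2.0, 3.0]},
                      [remove_oscillation, remove_kapton_signal, correct_obscuration])
```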
Ito, Toshiaki
2015-07-01
An apparent advantage of minimally invasive mitral surgery through right mini-thoracotomy is cosmetic appearance. Possible advantages of this procedure are a shorter ventilation time, shorter hospital stay, and less blood transfusion. With regard to hard endpoints, such as operative mortality, freedom from reoperation, or cardiac death, this method is reportedly equivalent, but not superior, to the standard median sternotomy technique. However, perfusion-related complications (e.g., stroke, vascular damage, and limb ischemia) tend to occur more frequently in minimally invasive technique than with the standard technique. In addition, valve repair through a small thoracotomy is technically demanding. Therefore, screening out patients who are not appropriate for performing minimally invasive surgery is the first step. Vascular disease and inadequate anatomy can be evaluated with contrast-enhanced computed tomography. Peripheral cannulation should be carefully performed, using transesophageal echocardiography guidance. Preoperative detailed planning of the valve repair process is desirable because every step is time-consuming in minimally invasive surgery. Three-dimensional echocardiography is a powerful tool for this purpose. For satisfactory exposure and detailed observation of the valve, a special left atrial retractor and high-definition endoscope are useful. Valve repair can be performed in minimally invasive surgery as long as cardiopulmonary bypass is stable and bloodless exposure of the valve is obtained.
NASA Technical Reports Server (NTRS)
Russell, P. L.; Beal, G. W.; Sederquist, R. A.; Shultz, D.
1981-01-01
Rich-lean combustor concepts designed to enhance rich combustion chemistry and increase combustor flexibility for NO(x) reduction with minimally processed fuels are examined. Processes such as rich product recirculation in the rich chamber, rich-lean annihilation, and graduated air addition or staged rich combustion to release bound nitrogen in steps of reduced equivalence ratio are discussed. Variations to the baseline rapid quench section are considered, and the effect of residence time in the rich zone is investigated. The feasibility of using uncooled non-metallic materials for the rich zone combustor construction is also addressed. The preliminary results indicate that rich primary zone staged combustion provides environmentally acceptable operation with residual and/or synthetic coal-derived liquid fuels.
Bachelli, Mara Lígia Biazotto; Amaral, Rívia Darla Álvares; Benedetti, Benedito Carlos
2013-01-01
Lettuce is a leafy vegetable widely used in industry for minimally processed products, in which the sanitization step is the crucial moment for ensuring a safe food for consumption. Chlorinated compounds, mainly sodium hypochlorite, are the most used in Brazil, but the formation of trihalomethanes from this sanitizer is a drawback. Thus, the search for alternative methods to sodium hypochlorite has been emerging as a matter of great interest. The suitability of chlorine dioxide (60 mg L−1/10 min), peracetic acid (100 mg L−1/15 min) and ozonated water (1.2 mg L−1/1 min) as alternative sanitizers to sodium hypochlorite (150 mg L−1 free chlorine/15 min) was evaluated. Minimally processed lettuce washed with tap water for 1 min was used as a control. Microbiological analyses were performed in triplicate, before and after sanitization, and at 3, 6, 9 and 12 days of storage at 2 ± 1 °C, with the product packaged in LDPE bags of 60 μm. Total coliforms, Escherichia coli, Salmonella spp., psychrotrophic and mesophilic bacteria, yeasts and molds were evaluated. All samples of minimally processed lettuce showed absence of E. coli and Salmonella spp. The chlorine dioxide, peracetic acid and ozonated water treatments promoted reductions of 2.5, 1.1 and 0.7 log cycles, respectively, in the microbial load of the minimally processed product and can be used as substitutes for sodium hypochlorite. These alternative compounds provided a shelf-life of six days for minimally processed lettuce, while the shelf-life with sodium hypochlorite was 12 days. PMID:24516433
Minimization of diauxic growth lag-phase for high-efficiency biogas production.
Kim, Min Jee; Kim, Sang Hun
2017-02-01
The objective of this study was to develop a method for minimizing the diauxic growth lag-phase in biogas production from agricultural by-products (ABPs). Specifically, the effects of proximate composition on the biogas production and degradation rates of the ABPs were investigated, and a new method based on proximate composition combinations was developed to minimize the diauxic growth lag-phase. Experiments were performed using biogas potential tests at a substrate loading of 2.5 g VS/L and a feed-to-microorganism ratio (F/M) of 0.5 under mesophilic conditions. The ABPs were classified based on proximate composition (carbohydrate, protein, fat, etc.). The biogas production patterns, lag phase, and times taken for 90% biogas production (T90) were used for the evaluation of the biogas production with the biochemical methane potential (BMP) test. The high- or medium-carbohydrate and low-fat ABPs (cheese whey, cabbage, and skim milk) showed a single-step digestion process and the low-carbohydrate and high-fat ABPs (bean curd and perilla seed) showed a two-step digestion process. The mixture of high-fat ABPs and high-carbohydrate ABPs reduced the lag-phase and increased the biogas yield over that from a single ABP by 35-46%. Copyright © 2016 Elsevier Ltd. All rights reserved.
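A toy illustration of the lag-phase idea (not taken from the paper): model a diauxic cumulative biogas curve as the sum of two modified-Gompertz phases and let substrate mixing shorten the second lag; all parameter values are invented, and T90 is computed the same way the study uses it as an evaluation metric.

```python
# Illustrative diauxic biogas curve as a sum of two modified-Gompertz phases; shrinking the
# second lag time mimics the reported lag-phase minimization. Parameter values are invented.
import numpy as np

def gompertz(t, p, rm, lam):
    """Cumulative biogas: potential p (mL/g VS), max rate rm (mL/g VS/d), lag lam (d)."""
    return p * np.exp(-np.exp(rm * np.e / p * (lam - t) + 1.0))

t = np.linspace(0, 40, 401)
single_substrate = gompertz(t, 250, 30, 2) + gompertz(t, 150, 15, 18)   # pronounced diauxic lag
mixed_substrate = gompertz(t, 250, 30, 2) + gompertz(t, 150, 15, 6)     # lag phase minimized

# Time to reach 90% of ultimate production (T90), as used in the study's evaluation
for name, curve in {"single": single_substrate, "mixed": mixed_substrate}.items():
    t90 = t[np.searchsorted(curve, 0.9 * curve[-1])]
    print(f"{name}: T90 = {t90:.1f} d")
```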
Gjoka, Xhorxhi; Gantier, Rene; Schofield, Mark
2017-01-20
The goal of this study was to adapt a batch mAb purification chromatography platform for continuous operation. The experiments and rationale used to convert from batch to continuous operation are described. Experimental data was used to design chromatography methods for continuous operation that would exceed the threshold for critical quality attributes and minimize the consumables required as compared to the batch mode of operation. Four unit operations comprising Protein A capture, viral inactivation, flow-through anion exchange (AEX), and mixed-mode cation exchange chromatography (MMCEX) were integrated across two Cadence BioSMB PD multi-column chromatography systems in order to process a 25 L volume of harvested cell culture fluid (HCCF) in less than 12 h. Transfer from batch to continuous resulted in an increase in productivity of the Protein A step from 13 to 50 g/L/h and of the MMCEX step from 10 to 60 g/L/h with no impact on the purification process performance in terms of contaminant removal (4.5 log reduction of host cell proteins, 50% reduction in soluble product aggregates) and overall chromatography process recovery yield (75%). The increase in productivity, combined with continuous operation, reduced the resin volume required for Protein A and MMCEX chromatography by more than 95% compared to batch. The volume of AEX membrane required for flow-through operation was reduced by 74%. Moreover, the continuous process required 44% less buffer than an equivalent batch process. This significant reduction in consumables enables cost-effective, disposable, single-use manufacturing. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
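A back-of-the-envelope check of what the reported productivity gain implies for resin demand; only the 13 to 50 g/L/h Protein A productivities come from the abstract, while the campaign mass and cycle time are assumptions.

```python
# Rough resin-demand comparison from the reported Protein A productivities (13 vs 50 g/L/h);
# the campaign mass and available processing time are invented for illustration.
mass_to_purify_g = 500.0         # assumed campaign mass
processing_time_h = 4.0          # assumed time available

for mode, productivity_g_per_L_h in {"batch": 13.0, "continuous": 50.0}.items():
    resin_volume_L = mass_to_purify_g / (productivity_g_per_L_h * processing_time_h)
    print(f"{mode:10s} Protein A resin needed: {resin_volume_L:5.1f} L")
# The ~4x productivity gain alone cuts resin volume ~4x for the same duty; exploiting
# repeated cycling as well is what pushes the reported reduction past 95%.
```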
Low NOx heavy fuel combustor concept program
NASA Technical Reports Server (NTRS)
Russell, P.; Beal, G.; Hinton, B.
1981-01-01
A gas turbine technology program to improve and optimize the staged rich-lean low NOx combustor concept is described. Subscale combustor tests were run to develop the design information for optimization of the fuel preparation, rich burn, quick air quench, and lean burn steps of the combustion process. The program provides information for the design of high pressure full scale gas turbine combustors capable of providing environmentally clean combustion of minimally processed and synthetic fuels. It is concluded that liquid fuel atomization and mixing, rich zone stoichiometry, rich zone liner cooling, rich zone residence time, and quench zone stoichiometry are important considerations in the design and scale-up of the rich-lean combustor.
NASA Technical Reports Server (NTRS)
Clendaniel, R. A.; Lasker, D. M.; Minor, L. B.; Shelhamer, M. J. (Principal Investigator)
2001-01-01
The horizontal angular vestibuloocular reflex (VOR) evoked by sinusoidal rotations from 0.5 to 15 Hz and acceleration steps up to 3,000 degrees /s(2) to 150 degrees /s was studied in six squirrel monkeys following adaptation with x2.2 magnifying and x0.45 minimizing spectacles. For sinusoidal rotations with peak velocities of 20 degrees /s, there were significant changes in gain at all frequencies; however, the greatest gain changes occurred at the lower frequencies. The frequency- and velocity-dependent gain enhancement seen in normal monkeys was accentuated following adaptation to magnifying spectacles and diminished with adaptation to minimizing spectacles. A differential increase in gain for the steps of acceleration was noted after adaptation to the magnifying spectacles. The gain during the acceleration portion, G(A), of a step of acceleration (3,000 degrees /s(2) to 150 degrees /s) increased from preadaptation values of 1.05 +/- 0.08 to 1.96 +/- 0.16, while the gain during the velocity plateau, G(V), only increased from 0.93 +/- 0.04 to 1.36 +/- 0.08. Polynomial fits to the trajectory of the response during the acceleration step revealed a greater increase in the cubic than the linear term following adaptation with the magnifying lenses. Following adaptation to the minimizing lenses, the value of G(A) decreased to 0.61 +/- 0.08, and the value of G(V) decreased to 0.59 +/- 0.09 for the 3,000 degrees /s(2) steps of acceleration. Polynomial fits to the trajectory of the response during the acceleration step revealed that there was a significantly greater reduction in the cubic term than in the linear term following adaptation with the minimizing lenses. These findings indicate that there is greater modification of the nonlinear as compared with the linear component of the VOR with spectacle-induced adaptation. In addition, the latency to the onset of the adapted response varied with the dynamics of the stimulus. The findings were modeled with a bilateral model of the VOR containing linear and nonlinear pathways that describe the normal behavior and adaptive processes. Adaptation for the linear pathway is described by a transfer function that shows the dependence of adaptation on the frequency of the head movement. The adaptive process for the nonlinear pathway is a gain enhancement element that provides for the accentuated gain with rising head velocity and the increased cubic component of the responses to steps of acceleration. While this model is substantially different from earlier models of VOR adaptation, it accounts for the data in the present experiments and also predicts the findings observed in the earlier studies.
Systematic procedure for designing processes with multiple environmental objectives.
Kim, Ki-Joo; Smith, Raymond L
2005-04-01
Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
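The parallel multiobjective genetic algorithm used in step three is not reproduced here; the sketch below shows only the Pareto-dominance test at its core, with all objectives treated as minimized (profit entering as a negated objective) and toy design points.

```python
# Pareto-optimality filter: keep the designs not dominated by any other design.
# All objectives are minimized; total profit is included as a negated objective. Toy data.
import numpy as np

def non_dominated(points):
    """Boolean mask of Pareto-optimal rows (all objectives minimized)."""
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        others = np.delete(pts, i, axis=0)
        dominated = np.any(np.all(others <= pts[i], axis=1) & np.any(others < pts[i], axis=1))
        keep[i] = not dominated
    return keep

# Two environmental-impact objectives and negated profit, all to be minimized
designs = np.array([[0.8, 1.2, -5.0],
                    [0.6, 1.5, -4.0],
                    [0.9, 1.1, -5.5],
                    [1.0, 1.6, -3.0]])
print(non_dominated(designs))   # the last design is dominated by the first
```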
The Automated Array Assembly Task of the Low-cost Silicon Solar Array Project, Phase 2
NASA Technical Reports Server (NTRS)
Coleman, M. G.; Grenon, L.; Pastirik, E. M.; Pryor, R. A.; Sparks, T. G.
1978-01-01
An advanced process sequence for manufacturing high efficiency solar cells and modules in a cost-effective manner is discussed. Emphasis is on process simplicity and minimizing consumed materials. The process sequence incorporates texture etching, plasma processes for damage removal and patterning, ion implantation, low pressure silicon nitride deposition, and plated metal. A reliable module design is presented. Specific process step developments are given. A detailed cost analysis was performed to indicate future areas of fruitful cost reduction effort. Recommendations for advanced investigations are included.
Disaster Preparedness Manual and Workbook for Pennsylvania Libraries and Archives.
ERIC Educational Resources Information Center
Swan, Elizabeth, Ed.; And Others
This document suggests components for a sound disaster plan for libraries and archives. The planning process includes four steps which are covered in this manual: educating the staff about disaster preparedness literature; planning to prevent disasters; preparing to respond to an emergency and minimize its effects; and planning how to restore…
Facilitating Lasting Changes at an Elementary School
ERIC Educational Resources Information Center
James, Laurie
2016-01-01
The purpose of this study was to determine how to minimize waste in a school setting by reducing, reusing, recycling, and composting waste products. Specifically, the desire was to identify what steps could be taken to decrease waste practices at a Title I elementary school. Through the Washington Green Schools certification process, a Waste and…
Psychoacoustic processing of test signals
NASA Astrophysics Data System (ADS)
Kadlec, Frantisek
2003-10-01
For the quantitative evaluation of electroacoustic system properties and for psychoacoustic testing it is possible to utilize harmonic signals with fixed frequency, sweeping signals, random signals or their combination. This contribution deals with the design of various test signals with emphasis on audible perception. During the digital generation of signals, some additional undesirable frequency components and noise are produced, which are dependent on signal amplitude and sampling frequency. A mathematical analysis describes the origin of this distortion. By proper selection of signal frequency and amplitude it is possible to minimize those undesirable components. An additional step is to minimize the audible perception of this signal distortion by the application of additional noise (dither). For signals intended for listening tests a dither with triangular or Gaussian probability density function was found to be most effective. Signals modified this way may be further improved by the application of noise shaping, which transposes those undesirable products into frequency regions where they are perceived less, according to psychoacoustic principles. The efficiency of individual processing steps was confirmed both by measurements and by listening tests. [Work supported by the Czech Science Foundation.
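A minimal sketch of the dithering step described above: quantization with triangular-PDF (TPDF) dither, plus one simple first-order noise-shaping variant. The bit depth, test signal, and feedback structure are illustrative choices, not the exact scheme used for the listening tests.

```python
# TPDF-dithered quantization and a first-order error-feedback (noise-shaping) variant.
# Illustrative parameters only; the noise-shaping filter here is the simplest possible choice.
import numpy as np

rng = np.random.default_rng(0)
fs, f0, lsb = 48_000, 997.0, 1.0 / 2 ** 15          # 16-bit quantizer step for a +/-1 signal
t = np.arange(fs) / fs
x = 0.25 * np.sin(2 * np.pi * f0 * t)

def quantize_tpdf(x, lsb):
    dither = (rng.random(x.size) - rng.random(x.size)) * lsb   # TPDF, +/-1 LSB peak
    return lsb * np.round((x + dither) / lsb)

def quantize_noise_shaped(x, lsb):
    y = np.empty_like(x)
    err = 0.0
    for n in range(x.size):                                    # first-order error feedback
        dither = (rng.random() - rng.random()) * lsb
        v = x[n] - err
        y[n] = lsb * np.round((v + dither) / lsb)
        err = y[n] - v                                         # quantization error fed back
    return y

print(np.std(quantize_tpdf(x, lsb) - x), np.std(quantize_noise_shaped(x, lsb) - x))
```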
Linear-Quadratic-Gaussian Regulator Developed for a Magnetic Bearing
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.
2002-01-01
Linear-Quadratic-Gaussian (LQG) control is a modern state-space technique for designing optimal dynamic regulators. It enables us to trade off regulation performance and control effort, and to take into account process and measurement noise. The Structural Mechanics and Dynamics Branch at the NASA Glenn Research Center has developed an LQG control for a fault-tolerant magnetic bearing suspension rig to optimize system performance and to reduce the sensor and processing noise. The LQG regulator consists of an optimal state-feedback gain and a Kalman state estimator. The first design step is to seek a state-feedback law that minimizes the cost function of regulation performance, which is measured by a quadratic performance criterion with user-specified weighting matrices, and to define the tradeoff between regulation performance and control effort. The next design step is to derive a state estimator using a Kalman filter because the optimal state feedback cannot be implemented without full state measurement. Since the Kalman filter is an optimal estimator when dealing with Gaussian white noise, it minimizes the asymptotic covariance of the estimation error.
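A minimal sketch of the two LQG design steps on a toy second-order plant (not the magnetic bearing model): an LQR state-feedback gain from one algebraic Riccati equation and a Kalman estimator gain from the dual one; the weighting and noise covariance matrices are the user-specified trade-offs mentioned above.

```python
# LQG design steps on a toy plant: LQR gain (step 1) and Kalman estimator gain (step 2).
# Plant matrices and all weights are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])        # regulation vs control-effort weights
W, V = np.diag([1e-3, 1e-3]), np.array([[1e-2]])      # process and measurement noise covariances

P = solve_continuous_are(A, B, Q, R)                   # step 1: optimal state feedback u = -K x
K = np.linalg.solve(R, B.T @ P)

S = solve_continuous_are(A.T, C.T, W, V)               # step 2: Kalman estimator gain L
L = S @ C.T @ np.linalg.inv(V)

print("LQR gain K =", K, "\nKalman gain L =", L.ravel())
```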
Mitigation Strategies To Protect Food Against Intentional Adulteration. Final rule.
2016-05-27
The Food and Drug Administration (FDA or we) is issuing this final rule to require domestic and foreign food facilities that are required to register under the Federal Food, Drug, and Cosmetic Act (the FD&C Act) to address hazards that may be introduced with the intention to cause wide scale public health harm. These food facilities are required to conduct a vulnerability assessment to identify significant vulnerabilities and actionable process steps and implement mitigation strategies to significantly minimize or prevent significant vulnerabilities identified at actionable process steps in a food operation. FDA is issuing these requirements as part of our implementation of the FDA Food Safety Modernization Act (FSMA).
Hughson, Michael D; Cruz, Thayana A; Carvalho, Rimenys J; Castilho, Leda R
2017-07-01
The pressures to efficiently produce complex biopharmaceuticals at reduced costs are driving the development of novel techniques, such as in downstream processing with straight-through processing (STP). This method involves directly and sequentially purifying a particular target with minimal holding steps. This work developed and compared six different 3-step STP strategies, combining membrane adsorbers, monoliths, and resins, to purify a large, complex, and labile glycoprotein from Chinese hamster ovary cell culture supernatant. The best performing pathway was cation exchange chromatography to hydrophobic interaction chromatography to affinity chromatography with an overall product recovery of up to 88% across the process and significant clearance of DNA and protein impurities. This work establishes a platform and considerations for the development of STP of biopharmaceutical products and highlights its suitability for integration with single-use technologies and continuous production methods. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:931-940, 2017. © 2017 American Institute of Chemical Engineers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Havasy, C.K.; Quach, T.K.; Bozada, C.A.
1995-12-31
This work is the development of a single-layer integrated-metal field effect transistor (SLIMFET) process for a high performance 0.2 μm AlGaAs/InGaAs pseudomorphic high electron mobility transistor (PHEMT). This process is compatible with MMIC fabrication and minimizes process variations, cycle time, and cost. This process uses non-alloyed ohmic contacts, a selective gate-recess etching process, and a single gate/source/drain metal deposition step to form both Schottky and ohmic contacts at the same time.
Bahillo, Jose; Jané, Luis; Bortolotto, Tissiana; Krejci, Ivo; Roig, Miguel
2014-10-01
Loss of tooth substance has become a common pathology in modern society. It is of multifactorial origin, may be induced by a chemical process or by excessive attrition, and frequently has a combined etiology. Particular care should be taken when diagnosing the cause of dental tissue loss, in order to minimize its impact. Several publications have proposed the use of minimally invasive procedures to treat such patients in preference to traditional full-crown rehabilitation. The use of composite resins, in combination with improvements in dental adhesion, allows a more conservative approach. In this paper, we describe the step-by-step procedure of full-mouth composite rehabilitation with v-shaped veneers and ultra-thin computer-aided design/computer-assisted manufacture (CAD/CAM)-generated composite overlays in a young patient with a combination of erosion and attrition disorder.
Image denoising by a direct variational minimization
NASA Astrophysics Data System (ADS)
Janev, Marko; Atanacković, Teodor; Pilipović, Stevan; Obradović, Radovan
2011-12-01
In this article we introduce a novel method for image de-noising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses a fractional gradient. The minimization is carried out on every predefined patch of the image, independently. By doing so, we avoid the use of an artificial time PDE model with its inherent problems of finding the optimal stopping time, as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using the information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with a couple of PDE-based methods, where we obtain significantly better denoising results, especially on oscillatory regions.
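A much-simplified sketch of the per-patch idea, using an ordinary quadratic smoothness term as a stand-in for the paper's fractional-gradient minimal-surface regularizer and a fixed Lagrange multiplier rather than one adapted from a discontinuity measure; patch size, weights, and the test image are invented.

```python
# Patch-wise direct minimization of a quadratic surrogate energy E(u) = ||u-f||^2 + lam*||grad u||^2
# (stand-in for the fractional-gradient regularizer); plain gradient descent, no artificial-time PDE.
import numpy as np

def laplacian(u):
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def denoise_patch(f, lam=2.0, tau=0.02, iters=300):
    """Gradient descent on the quadratic energy for one patch."""
    u = f.copy()
    for _ in range(iters):
        u -= tau * (2.0 * (u - f) - 2.0 * lam * laplacian(u))
    return u

def denoise_image(f, patch=32, **kw):
    out = np.empty_like(f)
    for i in range(0, f.shape[0], patch):
        for j in range(0, f.shape[1], patch):
            out[i:i + patch, j:j + patch] = denoise_patch(f[i:i + patch, j:j + patch], **kw)
    return out

rng = np.random.default_rng(0)
clean = np.kron(rng.random((4, 4)), np.ones((32, 32)))          # piecewise-constant test image
noisy = clean + 0.1 * rng.normal(size=clean.shape)
print(f"noise std before: {np.std(noisy - clean):.3f}, after: {np.std(denoise_image(noisy) - clean):.3f}")
```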
Process Waste Assessment Machine and Fabrication Shop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, N.M.
1993-03-01
This Process Waste Assessment was conducted to evaluate hazardous wastes generated in the Machine and Fabrication Shop at Sandia National Laboratories, Building 913, Room 119. Spent machine coolant is the major hazardous chemical waste generated in this facility. The volume of spent coolant generated is approximately 150 gallons/month. It is sent off-site to a recycler, but a reclaiming system for on-site use is being investigated. The Shop's line management considers hazardous waste minimization very important. A number of steps have already been taken to minimize wastes, including replacement of a hazardous solvent with a biodegradable, non-caustic solution and a filtration unit; waste segregation; restriction of beryllium-copper alloy machining; and reduction of lead usage.
Durability Enhancement of a Microelectromechanical System-Based Liquid Droplet Lens
NASA Astrophysics Data System (ADS)
Kyoo Lee, June; Park, Kyung-Woo; Kim, Hak-Rin; Kong, Seong Ho
2010-06-01
In this paper, we propose methods to enhance the durability of a microelectromechanical system (MEMS)-based liquid droplet lens driven by electrowetting. The enhanced durability of the lens is achieved not only by improving the quality of the dielectric layer for electrowetting through minimizing the concentration of coarse pinholes, but also by mitigating physical and electrostatic stresses through reforming the lens cavity. The silicon dioxide layer is deposited using plasma-enhanced chemical vapor deposition, splitting the process into several steps to minimize the pinhole concentration in the oxide layer. The stress-reduced cavity, shaped as an inverted tetra-angular truncated pyramid with rounded corners based on simulation results, is proposed and realized using silicon wet etching processes that combine anisotropic and isotropic etching.
NASA Astrophysics Data System (ADS)
Li, Ning; Habuka, Hitoshi; Ikeda, Shin-ichi; Hara, Shiro
A chemical vapor deposition reactor for producing thin silicon films was designed and developed to realize a new electronic device production system, Minimal Manufacturing, using a half-inch wafer. This system requires a rapid process in a small-footprint reactor. The reactor was designed and verified by addressing technical issues such as (i) vertical gas flow, (ii) thermal operation using a highly concentrated infrared flux, and (iii) reactor cleaning by chlorine trifluoride gas. The combination of (i) and (ii) achieved low heating power and fast cooling, designed from the heat balance of the small wafer placed at a position outside the reflector. The cleaning process was made rapid by (iii); its heating step could be skipped because chlorine trifluoride gas is reactive at any temperature above room temperature.
Process Waste Assessment - Paint Shop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, N.M.
1993-06-01
This Process Waste Assessment was conducted to evaluate hazardous wastes generated in the Paint Shop, Building 913, Room 130. Special attention is given to waste streams generated by the spray painting process because it requires a number of steps for preparing, priming, and painting an object. Also, the spray paint booth covers the largest area in R-130. The largest and most costly waste stream to dispose of is "Paint Shop waste" -- a combination of paint cans, rags, sticks, filters, and paper containers. These items are compacted in 55-gallon drums and disposed of as solid hazardous waste. Recommendations are made for minimizing waste in the Paint Shop. Paint Shop personnel are very aware of the need to minimize hazardous wastes and are continuously looking for opportunities to do so.
Zero Liquid Discharge (ZLD) System for Flue-Gas Derived Water From Oxy-Combustion Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivaram Harendra; Danylo Oryshchyn; Thomas Ochs
2011-10-16
Researchers at the National Energy Technology Laboratory (NETL) located in Albany, Oregon, have patented a process, Integrated Pollutant Removal (IPR), that uses off-the-shelf technology to produce a sequestration-ready CO2 stream from an oxy-combustion power plant. Capturing CO2 from fossil-fuel combustion generates a significant water product which can be tapped for use in the power plant and its peripherals. Water condensed in the IPR® process may contain fly ash particles, sodium (from pH control), and sulfur species, as well as heavy metals, cations and anions. NETL is developing a treatment approach for zero liquid discharge while maximizing available heat from IPR. Current treatment-process steps being studied are flocculation/coagulation, for removal of cations and fine particles, and reverse osmosis, for anion removal as well as for scavenging the remaining cations. After the reverse osmosis steps, thermal evaporation and crystallization steps will be carried out in order to build the whole zero liquid discharge (ZLD) system for flue-gas condensed wastewater. Gypsum is the major product from the crystallization process. Fast, in-line treatment of water for re-use in IPR appears to be a practical step for minimizing water treatment requirements for CO2 capture. The results obtained from the above experiments are being used to build water treatment models.
Modified Unzipping Technique to Prepare Graphene Nano-Sheets
NASA Astrophysics Data System (ADS)
Al-Tamimi, B. H.; Farid, S. B. H.; Chyad, F. A.
2018-05-01
Graphene nano-sheets have been prepared via an unzipping approach applied to multiwall carbon nanotubes (MWCNTs). The method includes two chemical steps: a multi-parameter oxidation step that unzips the carbon nanotubes, followed by a reduction step that yields the final graphene nano-sheets. In the oxidation step, the amount of oxidant was minimized and balanced with a longer curing time. This modification was made to reduce the oxygen functional groups at the ends of the graphene basal planes, which lower its electrical conductivity. A similar adjustment was made in the reduction step, i.e. the amount of chemicals consumed was reduced, which makes the overall process more economical and eco-friendly. The prepared nano-sheets were characterized by atomic force microscopy, scanning electron microscopy, and Raman spectroscopy. The average thickness of the prepared graphene was about 5.23 nm.
Snyman, Celia; Elliott, Edith
2011-12-15
The hanging drop three-dimensional culture technique allows cultivation of functional three-dimensional mammary constructs without exogenous extracellular matrix. The fragile acini are, however, difficult to preserve during processing steps for advanced microscopic investigation. We describe adaptations to the protocol for handling of hanging drop cultures to include investigation using confocal, scanning, and electron microscopy, with minimal loss of cell culture components. Copyright © 2011 Elsevier Inc. All rights reserved.
Homogeneity of Gd-based garnet transparent ceramic scintillators for gamma spectroscopy
NASA Astrophysics Data System (ADS)
Seeley, Z. M.; Cherepy, N. J.; Payne, S. A.
2013-09-01
Transparent polycrystalline ceramic scintillators based on the composition Gd1.49Y1.49Ce0.02Ga2.2Al2.8O12 are being developed for gamma spectroscopy detectors. Scintillator light yield and energy resolution depend on the details of various processing steps, including powder calcination, green body formation, and sintering atmosphere. We have found that gallium sublimation during vacuum sintering creates compositional gradients in the ceramic and can degrade the energy resolution. While sintering in oxygen produces ceramics with uniform composition and little afterglow, light yields are reduced, compared to vacuum sintering. By controlling the atmosphere during the various process steps, we were able to minimize the gallium sublimation, resulting in a more homogeneous composition and improved gamma spectroscopy performance.
Solar array stepping to minimize array excitation
NASA Technical Reports Server (NTRS)
Bhat, Mahabaleshwar K. P. (Inventor); Liu, Tung Y. (Inventor); Plescia, Carl T. (Inventor)
1989-01-01
Mechanical oscillations of a mechanism containing a stepper motor, such as a solar-array powered spacecraft, are reduced and minimized by executing step movements in pairs, the interval between the two steps of a pair being equal to one-half of the period of torsional oscillation of the mechanism. Each pair of steps is repeated at the intervals needed to maintain the desired continuous movement of the elements to be moved, such as the solar array of a spacecraft. In order to account for uncertainty as well as slow change in the period of torsional oscillation, a command unit may be provided for varying the interval between the steps in a pair.
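As a concrete illustration of the paired-step timing rule, the following short sketch generates command times in which the two steps of each pair are separated by half the torsional oscillation period. The numerical values are illustrative only; the patent does not specify them.

```python
# Minimal sketch of the paired-step timing described above (illustrative values).
import numpy as np

def paired_step_times(oscillation_period_s, pair_interval_s, n_pairs):
    """Return command times: steps come in pairs spaced half an oscillation
    period apart, so the second step cancels the ringing excited by the first."""
    half_period = 0.5 * oscillation_period_s
    times = []
    for k in range(n_pairs):
        t0 = k * pair_interval_s            # first step of the pair
        times.extend([t0, t0 + half_period])
    return np.array(times)

# Example: a 2 s torsional period, one pair every 10 s, five pairs.
print(paired_step_times(oscillation_period_s=2.0, pair_interval_s=10.0, n_pairs=5))
```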
Ultramap: the all in One Photogrammetric Solution
NASA Astrophysics Data System (ADS)
Wiechert, A.; Gruber, M.; Karner, K.
2012-07-01
This paper describes in detail the dense matcher developed over several years by Vexcel Imaging in Graz for Microsoft's Bing Maps project. This dense matcher was developed exclusively for, and used by, Microsoft for the production of the 3D city models of Virtual Earth. It will now be made available to the public with the UltraMap software release in mid-2012, which represents a major step forward in digital photogrammetry. The dense matcher automatically generates digital surface models (DSM) and digital terrain models (DTM) from a set of overlapping UltraCam images; the models have an outstanding point density of several hundred points per square meter and sub-pixel accuracy. The dense matcher consists of two steps. The first step rectifies overlapping image areas to speed up the dense image matching process; this rectification ensures very efficient processing and detects occluded areas by applying a back-matching step. In the dense image matching process, a cost function consisting of a matching score and a smoothness term is minimized. In the second step, the resulting range image patches are fused into a DSM by optimizing a global cost function. The whole process is optimized for multi-core CPUs and optionally uses GPUs if available. UltraMap 3.0 also features an additional step presented in this paper: a completely automated true-ortho and ortho workflow, in which the UltraCam images are combined with the DSM or DTM in an automated rectification step, yielding high-quality true-ortho or ortho images from a highly automated workflow. The paper presents the new workflow and first results.
Yuan, Dandan; Tian, Lei; Li, Zhida; Jiang, Hong; Yan, Chao; Dong, Jing; Wu, Hongjun; Wang, Baohui
2018-02-15
Herein, we report solar thermal electrochemical process (STEP) aniline oxidation in wastewater, which addresses the two key obstacles of electrochemical treatment: its huge energy consumption and the formation of a passivation film. The process, driven fully by solar energy without input of any other energy, sustainably serves as an efficient thermoelectrochemical oxidation of aniline through coordinated control of thermochemistry and electrochemistry. The thermocoupled electrochemical oxidation of aniline achieved a fast rate and high efficiency for the full mineralization of aniline to CO2, with a stable electrode and without formation of a polyaniline (PAN) passivation film. A clear mechanism of aniline oxidation indicated a switching of the reactive pathway by the STEP process. Due to the coupling of solar thermochemistry and electrochemistry, the electrochemical current remained stable, significantly improving the oxidation efficiency and mineralization rate by appreciably decreasing the electrolytic potential at high temperature. The oxidation rate of aniline and the chemical oxygen demand (COD) removal rate were increased by factors of 2.03 and 2.47, respectively, compared to conventional electrolysis. We demonstrate that solar-driven STEP processes are capable of completely mineralizing aniline with high utilization of solar energy. STEP aniline oxidation can thus be utilized as a green, sustainable water treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J
Purpose: Metal objects create severe artifacts in kilo-voltage (kV) CT image reconstructions due to the high attenuation coefficients of high-atomic-number objects. Most techniques devised to reduce this artifact use a two-step approach, which does not reliably yield high-quality reconstructed images. Thus, for accuracy and simplicity, this work presents a one-step reconstruction method based on a modified penalized weighted least-squares (PWLS) technique. Methods: Existing techniques for metal artifact reduction mostly adopt a two-step approach, conducting an additional reconstruction with projection data modified from the initial reconstruction. This procedure does not perform consistently well due to the uncertainties in manipulating the metal-contaminated projection data by thresholding and linear interpolation. This study proposes a one-step reconstruction process using a new PWLS operation with total-variation (TV) minimization, without manipulating the projection data. PWLS for CT reconstruction has been investigated using a pre-defined weight based on the variance of the projection datum at each detector bin. It works well when reconstructing CT images from metal-free projection data, but it does not appropriately penalize metal-contaminated projection data. The proposed work defines the weight at each projection element under the assumption of a Poisson random variable. This small modification using element-wise penalization has a large impact in reducing metal artifacts. For evaluation, the proposed technique was assessed with two noisy, metal-contaminated digital phantoms, against the existing PWLS with TV minimization and the two-step approach. Results: The proposed PWLS with TV minimization greatly improved metal artifact reduction relative to the other techniques, based on visual inspection. Numerically, the new approach lowered the normalized root-mean-square error by about 30% and 60% for the two cases, respectively, compared to the two-step method. Conclusion: A new PWLS operation shows promise for improving metal artifact reduction in CT imaging, as well as simplifying the reconstruction procedure.
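To make the weighted data term and TV penalty concrete, the following is a minimal numerical sketch of one gradient step on a PWLS-plus-TV objective. It is not the authors' implementation: the system matrix A, weight vector w, penalty strength beta, and step size tau are placeholders, and the paper's element-wise Poisson-based weight definition and CT geometry are not reproduced.

```python
# Illustrative sketch of a penalized weighted least-squares (PWLS) update with a
# smoothed total-variation (TV) penalty. All inputs are placeholders.
import numpy as np

def tv_grad(img, eps=1e-6):
    """Approximate gradient of a smoothed isotropic TV penalty for a 2D image."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    div_x = np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
    div_y = np.diff(gy / mag, axis=0, prepend=(gy / mag)[:, :1])
    return -(div_x + div_y)                       # -div(grad u / |grad u|)

def pwls_tv_step(x, A, y, w, beta, tau):
    """One gradient step on 0.5*(Ax - y)^T W (Ax - y) + beta*TV(x),
    with W = diag(w) acting element-wise on the projection data."""
    residual = A @ x.ravel() - y
    data_grad = (A.T @ (w * residual)).reshape(x.shape)
    return x - tau * (data_grad + beta * tv_grad(x))
```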
Scalable and balanced dynamic hybrid data assimilation
NASA Astrophysics Data System (ADS)
Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa
2017-04-01
Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them implemented as parallel model runs themselves. The only bottleneck in the process is the gathering and scattering of initial and final model state snapshots before and after the parallel runs which requires a very efficient and low-latency communication network. However, the volume of data communicated is small and the intervening minimization steps are only 3D-Var, which means their computational load is negligible compared with the fully parallel model runs. We present example results of scalable VEnKF with the 4D lake and shallow sea model COHERENS, assimilating simultaneously continuous in situ measurements in a single point and infrequent satellite images that cover a whole lake, with the fully scalable VEnKF.
Andersen, Lau M.
2018-01-01
An important aim of an analysis pipeline for magnetoencephalographic data is that it allows the researcher to spend maximal effort on making the statistical comparisons that will answer the questions of the researcher, while in turn spending minimal effort on the intricacies and machinery of the pipeline. I here present a set of functions and scripts that allow for setting up a clear, reproducible structure for separating raw and processed data into folders and files such that minimal effort can be spent on: (1) double-checking that the right input goes into the right functions; (2) making sure that output and intermediate steps can be accessed meaningfully; (3) applying operations efficiently across groups of subjects; (4) re-processing data if changes to any intermediate step are desirable. Applying the scripts requires only general knowledge about the Python language. The data analyses are neural responses to tactile stimulations of the right index finger in a group of 20 healthy participants acquired from an Elekta Neuromag System. Two analyses are presented: going from individual sensor space representations to, respectively, an across-group sensor space representation and an across-group source space representation. The processing steps covered for the first analysis are filtering the raw data, finding events of interest in the data, epoching data, finding and removing independent components related to eye blinks and heart beats, calculating participants' individual evoked responses by averaging over epoched data and calculating a grand average sensor space representation over participants. The second analysis starts from the participants' individual evoked responses and covers: estimating noise covariance, creating a forward model, creating an inverse operator, estimating distributed source activity on the cortical surface using a minimum norm procedure, morphing those estimates onto a common cortical template and calculating the patterns of activity that are statistically different from baseline. To estimate source activity, processing of the anatomy of subjects based on magnetic resonance imaging is necessary. The necessary steps are covered here: importing magnetic resonance images, segmenting the brain, estimating boundaries between different tissue layers, making fine-resolution scalp surfaces for facilitating co-registration, creating source spaces and creating volume conductors for each subject. PMID:29403349
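The sensor-space steps listed above map naturally onto MNE-Python, on which the described scripts are built; the following hedged sketch illustrates that mapping rather than the published scripts themselves. File names, subject IDs, the event code, and filter settings are hypothetical, and EOG/ECG channels are assumed to be present for the ICA-based artifact detection.

```python
# Hedged sketch of the sensor-space pipeline using MNE-Python. File names,
# subject IDs, the event code, and parameter values are placeholders.
import mne

evokeds = []
for subject in ["sub01", "sub02"]:                       # hypothetical subject IDs
    raw = mne.io.read_raw_fif(f"{subject}_raw.fif", preload=True)
    raw.filter(l_freq=None, h_freq=40.0)                 # low-pass the raw data

    events = mne.find_events(raw)                        # events of interest
    epochs = mne.Epochs(raw, events, event_id={"tactile": 1},
                        tmin=-0.2, tmax=0.5, baseline=(None, 0), preload=True)

    # Find and remove eye-blink and heartbeat components with ICA
    # (assumes EOG and ECG channels are present in the recording).
    ica = mne.preprocessing.ICA(n_components=0.95, random_state=0)
    ica.fit(epochs)
    eog_idx, _ = ica.find_bads_eog(epochs)
    ecg_idx, _ = ica.find_bads_ecg(epochs)
    ica.exclude = eog_idx + ecg_idx
    ica.apply(epochs)

    evokeds.append(epochs.average())                     # per-subject evoked response

grand_avg = mne.grand_average(evokeds)                   # across-group sensor space
```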
Sanaie, Nooshafarin; Cecchini, Douglas; Pieracci, John
2012-10-01
Micro-scale chromatography formats are becoming more routinely used in purification process development because of their ability to rapidly screen a large number of process conditions with minimal material. Given the usual constraints on development timelines and resources, these systems can provide a means to maximize process knowledge and process robustness compared to traditional packed column formats. In this work, a high-throughput, 96-well filter plate format was used in the development of the cation exchange and hydrophobic interaction chromatography steps of a purification process designed to alter the glycoform distribution of a small protein. The significant input parameters affecting process performance were rapidly identified for both steps, and preliminary operating ranges were established. These ranges were verified in a packed chromatography column in order to assess the ability of the 96-well plate to predict packed column performance. In both steps, the 96-well plate format consistently led to underestimated glycoform-enrichment levels and to overestimated product recovery rates compared to the column-based approach. These studies demonstrate that the plate format can be used as a screening tool to narrow the operating ranges prior to further optimization on packed chromatography columns. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
von Grote, Erika C; Palaniswarmy, Kiruthi; Meckfessel, Matthew H
2016-12-01
Occupational irritant contact dermatitis (ICD) affecting the hands is a common and difficult-to-manage condition. Occupations that necessitate contact with harsh chemicals, use of alcohol-based disinfectants, and frequent hand washing elevate the risk of ICD. Management strategies that do not adequately prevent accumulated damage and repair skin, can develop into chronic dermatoses which negatively impact work productivity and quality of life. A 2-step skin-care regimen (Excipial Daily Protection Hand Cream (EP) and Excipial Rapid Repair Hand Cream (ER), Galderma Laboratories, L.P.) has been developed as a daily-use management strategy to protect and repair vulnerable hands. The protective barrier cream is formulated with aluminum chlorohydrate and designed for pre-exposure application to enhance the skin's natural protective barrier and minimize excessive moisture while wearing protective gloves. The repair cream, a lipid-rich formulation, is intended for post-exposure application to rehydrate and facilitate the skin's natural healing process. The results of 3 clinical studies highlighted in this review demonstrate how the use of a 2-step skin-care regimen offers a greater protective effect against ICD than the use of barrier cream alone, and also how the formulation of the barrier cream used in these studies helps minimize the occlusion effect caused by gloves and does not interfere with the antibacterial efficacy of an alcohol-based hand sanitizer. This 2-step skin-care regimen is effectively designed to manage and minimize the risk of ICD development in a variety of patients and provides clinicians an additional tool for helping patients manage ICD. J Drugs Dermatol. 2016;15(12):1504-1510.
Safety in the Chemical Laboratory: Flood Control.
ERIC Educational Resources Information Center
Pollard, Bruce D.
1983-01-01
Describes events leading to a flood in the Wehr Chemistry Laboratory at Marquette University, discussing steps taken to minimize damage upon discovery. Analyzes the problem of flooding in the chemical laboratory and outlines seven steps of flood control: prevention; minimization; early detection; stopping the flood; evaluation; clean-up; and…
Practicing safe cell culture: applied process designs for minimizing virus contamination risk.
Kiss, Robert D
2011-01-01
CONFERENCE PROCEEDING Proceedings of the PDA/FDA Adventitious Viruses in Biologics: Detection and Mitigation Strategies Workshop in Bethesda, MD, USA; December 1-3, 2010 Guest Editors: Arifa Khan (Bethesda, MD), Patricia Hughes (Bethesda, MD) and Michael Wiebe (San Francisco, CA) Genentech responded to a virus contamination in its biologics manufacturing facility by developing and implementing a series of barriers specifically designed to prevent recurrence of this significant and impactful event. The barriers included steps to inactivate or remove potential virus particles from the many raw materials used in cell culture processing. Additionally, analytical testing barriers provided protection of the downstream processing areas should a culture contamination occur, and robust virus clearance capability provided further assurance of virus safety should a low level contamination go undetected. This conference proceeding will review Genentech's approach, and lessons learned, in minimizing virus contamination risk in cell culture processes through multiple layers of targeted barriers designed to deliver biologics products with high success rates.
Examination of the steps leading up to the physical developer process for developing fingerprints.
Wilson, Jeffrey Daniel; Cantu, Antonio A; Antonopoulos, George; Surrency, Marc J
2007-03-01
This is a systematic study that examines several acid prewashes and water rinses on paper bearing latent prints before its treatment with a silver physical developer. Specimens or items processed with this method are usually pretreated with an acid wash to neutralize calcium carbonate from the paper before the treatment with a physical developer. Two different acids at varying concentrations were tested on fingerprints. Many different types of paper were examined in order to determine which acid prewash was the most beneficial. Various wash times as well as the addition of a water rinse step before the development were also examined. A pH study was included that monitored the acidity of the solution during the wash step. Scanning electron microscopy was used to verify surface calcium levels for the paper samples throughout the experiment. Malic acid at a concentration of 2.5% proved to be an ideal acid for most papers, providing good fingerprint development with minimal background development. Water rinses were deemed unnecessary before physical development.
Saving Material with Systematic Process Designs
NASA Astrophysics Data System (ADS)
Kerausch, M.
2011-08-01
Global competition is forcing the stamping industry to further increase quality, shorten time-to-market, and reduce total cost. Continuous balancing between these classical time-cost-quality targets throughout the product development cycle is required to ensure future economic success. In today's industrial practice, die layout standards are typically assumed to implicitly ensure the balancing of company-specific time-cost-quality targets. Although die layout standards are a very successful approach, they have two methodical disadvantages. First, the capabilities for tool design have to be continuously adapted to technological innovations, e.g. to take advantage of the full forming capability of new materials. Secondly, the great variety of die design aspects has to be reduced to a generic rule or guideline, e.g. binder shape, draw-in conditions or the use of drawbeads. Therefore, it is important not to overlook cost or quality opportunities when applying die design standards. This paper describes a systematic workflow with a focus on minimizing material consumption. The starting point of the investigation is a full process plan for a typical structural part, in which all requirements defined by a predefined set of die design standards with industrial relevance are fulfilled. In a first step, binder and addendum geometry is systematically checked for material saving potential. In a second step, blank shape and draw-in are adjusted to meet thinning, wrinkling and springback targets for a minimum blank solution. Finally, the identified die layout is validated with respect to production robustness versus splits, wrinkles and springback. For all three steps the applied methodology is based on finite element simulation combined with stochastic variation of input variables. With the proposed workflow a well-balanced (time-cost-quality) production process assuring minimal material consumption can be achieved.
Koidis, Anastasios; Rawson, Ashish; Tuohy, Maria; Brunton, Nigel
2012-06-01
Carrots and parsnips are often consumed as minimally processed, ready-to-eat convenience foods and contain, in minor quantities, bioactive aliphatic C17-polyacetylenes (falcarinol, falcarindiol, falcarindiol-3-acetate). Their retention during minimal processing was evaluated in an industrial trial. Carrots and parsnips were prepared in four different forms (disc cutting, baton cutting, cubing and shredding) and samples were taken at every point of the processing line. The unit operations were peeling, cutting and washing with chlorinated water, and retention during 7 days of storage was also evaluated. The results showed that the initial unit operations (mainly peeling) influence polyacetylene retention, which was attributed to the high polyacetylene content of the peels. In most cases, when washing was performed after cutting, lower retention was observed, possibly due to leakage from the tissue damage that occurred in the cutting step. The relatively high retention during storage indicates high plant matrix stability. Comparing the behaviour of polyacetylenes in the two vegetables during storage, the results showed that they were slightly better retained in parsnips than in carrots. Unit operations, and especially abrasive peeling, might need further optimisation to make them gentler and minimise bioactive losses. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rowland, David J.; Biteen, Julie S.
2017-04-01
Single-molecule super-resolution imaging and tracking can measure molecular motions inside living cells on the scale of the molecules themselves. Diffusion in biological systems commonly exhibits multiple modes of motion, which can be effectively quantified by fitting the cumulative probability distribution of the squared step sizes in a two-step fitting process. Here we combine this two-step fit into a single least-squares minimization; this new method vastly reduces the total number of fitting parameters and increases the precision with which diffusion may be measured. We demonstrate this Global Fit approach on a simulated two-component system as well as on a mixture of diffusing 80 nm and 200 nm gold spheres to show improvements in fitting robustness and localization precision compared to the traditional Local Fit algorithm.
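For reference, a minimal sketch of the underlying CDF model is given below (not the authors' code): for 2D diffusion the squared step size at lag Δt is exponentially distributed with mean 4DΔt, so a two-component mixture can be fit to the empirical CDF by least squares. The sketch fits a single lag time, whereas the paper's Global Fit ties all lag times into one least-squares problem; all numerical values are illustrative.

```python
# Fit the cumulative distribution of squared step sizes to a two-component
# diffusion model (single lag time, illustrative values only).
import numpy as np
from scipy.optimize import least_squares

def two_component_cdf(r2, frac, d1, d2, dt):
    """P(step^2 <= r2) for a mixture of two 2D diffusers with coefficients d1, d2."""
    return 1.0 - (frac * np.exp(-r2 / (4.0 * d1 * dt))
                  + (1.0 - frac) * np.exp(-r2 / (4.0 * d2 * dt)))

def residuals(params, r2_sorted, ecdf, dt):
    frac, d1, d2 = params
    return two_component_cdf(r2_sorted, frac, d1, d2, dt) - ecdf

# Simulated squared step sizes from two populations (D in um^2/s, dt in s).
rng = np.random.default_rng(1)
dt = 0.05
r2 = np.concatenate([rng.exponential(4 * 0.1 * dt, 5000),
                     rng.exponential(4 * 1.0 * dt, 5000)])
r2_sorted = np.sort(r2)
ecdf = np.arange(1, r2_sorted.size + 1) / r2_sorted.size

fit = least_squares(residuals, x0=[0.5, 0.05, 2.0], args=(r2_sorted, ecdf, dt),
                    bounds=([0.0, 1e-6, 1e-6], [1.0, np.inf, np.inf]))
frac, d1, d2 = fit.x
```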
A secure image encryption method based on dynamic harmony search (DHS) combined with chaotic map
NASA Astrophysics Data System (ADS)
Mirzaei Talarposhti, Khadijeh; Khaki Jamei, Mehrzad
2016-06-01
In recent years, there has been increasing interest in the security of digital images. This study focuses on gray scale image encryption using dynamic harmony search (DHS). In this approach, a chaotic map is first used to create cipher images, and then maximum entropy and minimum correlation coefficient are obtained by applying a harmony search algorithm to them. This process is divided into two steps. In the first step, diffusion of the plain image is performed using DHS with entropy maximization as the fitness function. In the second step, a horizontal and vertical permutation is applied to the best cipher image obtained in the previous step, with DHS used to minimize the correlation coefficient as the fitness function. The simulation results show that, using the proposed method, a maximum entropy of approximately 7.9998 and a minimum correlation coefficient of approximately 0.0001 are obtained.
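The two fitness functions referenced above are standard image-encryption metrics; a minimal sketch of both is given below. The harmony search driver itself is omitted, and the random "cipher" image is only a stand-in.

```python
# Sketch of the two fitness functions: gray-level entropy (maximized in the
# diffusion step) and adjacent-pixel correlation (minimized in the permutation step).
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit image's gray-level histogram (ideal: 8 bits)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def adjacent_correlation(img):
    """Correlation coefficient between horizontally adjacent pixel pairs."""
    x = img[:, :-1].ravel().astype(float)
    y = img[:, 1:].ravel().astype(float)
    return np.corrcoef(x, y)[0, 1]

cipher = np.random.default_rng(2).integers(0, 256, size=(256, 256), dtype=np.uint8)
print(entropy(cipher), adjacent_correlation(cipher))  # ~8.0 and ~0.0 for a good cipher
```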
NASA Astrophysics Data System (ADS)
Tippawan, Phanicha; Arpornwichanop, Amornchai
2016-02-01
The hydrogen production process is known to be important to a fuel cell system. In this study, a carbon-free hydrogen production process is proposed by using a two-step ethanol-steam-reforming procedure, which consists of ethanol dehydrogenation and steam reforming, as a fuel processor in the solid oxide fuel cell (SOFC) system. The addition of CaO in the reformer for CO2 capture is also considered to enhance hydrogen production. The performance of the SOFC system is analyzed under thermally self-sufficient conditions in terms of technical and economic aspects. The simulation results show that the two-step reforming process can be run in the operating window without carbon formation. The addition of CaO in the steam reformer, which runs at a steam-to-ethanol ratio of 5, a temperature of 900 K and atmospheric pressure, minimizes the presence of CO2; 93% of the CO2 is removed from the steam-reforming environment. This increases the SOFC power density by 6.62%. Although the economic analysis shows that the proposed fuel processor has a higher capital cost, it offers a reduced active area of the SOFC stack and the most favorable process economics in terms of net cost saving.
Treatments To Produce Stabilized Aluminum Mirrors for Cryogenic Uses
NASA Technical Reports Server (NTRS)
Zewari, Wahid; Barthelmy, Michael; Ohl, Raymond
2005-01-01
Five metallurgical treatments have been tested as means of stabilizing mirrors that are made of aluminum alloy 6061 and are intended for use in cryogenic applications. Aluminum alloy 6061 is favored as a mirror material by many scientists and engineers. Like other alloys, it shrinks upon cool-down from room temperature to cryogenic temperature, and this shrinkage degrades the optical quality of the mirror surfaces. Hence, the metallurgical treatments were tested to determine which could be most effective in minimizing the adverse optical effects of cool-down to cryogenic temperatures. Each of the five metallurgical treatments comprises a multistep process, the steps of which are interspersed with the steps of the mirror-fabrication process. The five metallurgical-treatment/fabrication-process combinations were compared with each other and with a benchmark fabrication process, in which a mirror is made from an alloy blank by (1) symmetrical rough machining, (2) finish machining to within 0.006 in. (≈0.15 mm) of final dimensions, and finally (3) diamond turning to a mirror finish.
Introduction to Exide Corporation's high temperature metals recovery system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozelle, P.L.; Baranski, J.P.; Bitler, J.A.
1995-12-31
Environmental strategies concerning the processing and ultimate fate of wastes and byproducts are of ever increasing importance to the public and business sectors in the world today. Recycling materials and reusing energy from wastes and byproducts results in a reduction of environmental impacts and the cost of disposal. These are the key steps in reaching the ultimate goal of waste minimization. In response to these needs, Exide Corporation, in its vision to develop waste minimization programs, has developed the Exide High Temperature Metals Recovery (EHTMR) process. This process can treat a variety of wastes and byproducts where metals contents are an issue, recover the metal values for reuse, and produce a metals-depleted slag that can be marketable under the most stringent proposed EPA regulations for leachability of contaminants. The central feature of the EHTMR process is the exposure of treated materials to a transferred arc plasma generated in an electric furnace. The process achieves a reduction in costs and liability by recovering portions of a waste that can be recycled or reclaimed and produces a slag that has beneficial use to society.
Martini, Marinna A.; Sherwood, Chris; Horwitz, Rachel; Ramsey, Andree; Lightsom, Fran; Lacy, Jessie; Xu, Jingping
2006-01-01
3. preserving minimally processed and partially processed versions of data sets. STG usually deploys ADV and PCADP probes configured as downward looking, mounted on bottom tripods, with the objective of measuring high-resolution near-bed currents. The velocity profiles are recorded with minimal internal data processing. Also recorded are parameters such as temperature, conductivity, optical backscatter, light transmission, and high-frequency pressure. Sampling consists of high-frequency (1–10 Hz) bursts of long duration (5–30 minutes) at regular and recurring intervals for a duration of 1 to 6 months. The result is very large data files, often 500 MB per Hydra per deployment, in Sontek's compressed binary format. This section introduces the Hydratools toolbox and provides information about the history of the system's development. The USGS philosophy regarding data quality is discussed to provide an understanding of the motivation for creating the system. General information about the following topics is also discussed: hardware and software required for the system, basic processing steps, limitations of program usage, and features that are unique to the programs.
Student’s thinking process in solving word problems in geometry
NASA Astrophysics Data System (ADS)
Khasanah, V. N.; Usodo, B.; Subanti, S.
2018-05-01
This research aims to describe the thinking processes of seventh grade Junior High School students in solving word problems in geometry. This was descriptive qualitative research. The subjects were selected based on sex and differences in mathematical ability. Data were collected through students' work on a test, interviews, and observation. The results showed that there was no difference in thinking process between males and females with high mathematical ability, but there were differences between males and females with moderate and low mathematical ability. It was also found that males with moderate mathematical ability took a long time in the step of making a problem-solving plan, while females with moderate mathematical ability took a long time in the step of understanding the problem. Knowing students' thinking processes in solving word problems is important so that the teacher knows the difficulties faced by students and can minimize the occurrence of the same errors in problem solving. Teachers can then prepare learning strategies more appropriate to students' thinking processes.
Photoresist removal using gaseous sulfur trioxide cleaning technology
NASA Astrophysics Data System (ADS)
Del Puppo, Helene; Bocian, Paul B.; Waleh, Ahmad
1999-06-01
A novel cleaning method for removing photoresists and organic polymers from semiconductor wafers is described. This non-plasma method uses anhydrous sulfur trioxide gas in a two-step process, during which the substrate is first exposed to SO3 vapor at relatively low temperatures and then is rinsed with de-ionized water. The process is radically different from conventional plasma-ashing methods in that the photoresist is not etched or removed during the exposure to SO3. Rather, the removal of the modified photoresist takes place during the subsequent DI-water rinse step. The SO3 process completely removes photoresist and polymer residues in many post-etch applications. Additional advantages of the process are the absence of halogen gases and the elimination of the need for other solvents and wet chemicals. The process also enjoys a very low cost of ownership and has minimal environmental impact. SEM and SIMS surface analysis results are presented to show the effectiveness of the gaseous SO3 process after polysilicon, metal, and oxide etch applications. The effects of both chlorine- and fluorine-based plasma chemistries on resist removal are described.
Early Success Is Vital in Minimal Worksite Wellness Interventions at Small Worksites
ERIC Educational Resources Information Center
Ablah, Elizabeth; Dong, Frank; Konda, Kurt; Konda, Kelly; Armbruster, Sonja; Tuttle, Becky
2015-01-01
Intervention: In an effort to increase physical activity, 15 workplaces participated in a minimal-contact 10,000-steps-a-day program sponsored by the Sedgwick County Health Department in 2007 and 2008. Pedometers were provided to measure participants' weekly steps for the 10-week intervention. Method: Participants were defined as those who…
Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C.; Wong, Willy; Daskalakis, Zafiris J.; Farzan, Faranak
2016-01-01
Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEP)s. With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprised of multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline for TMS-EEG signal processing, this toolbox intends to promote the widespread utility and standardization of an emerging technology in brain research. PMID:27774054
Optimal design of the satellite constellation arrangement reconfiguration process
NASA Astrophysics Data System (ADS)
Fakoor, Mahdi; Bakhtiari, Majid; Soleymani, Mahshid
2016-08-01
In this article, a novel approach is introduced for satellite constellation reconfiguration based on Lambert's theorem. Several critical problems arise in the reconfiguration phase, such as minimizing the overall fuel cost, avoiding collisions between the satellites on the final orbital pattern, and determining the maneuvers necessary for the satellites to be deployed in the desired positions on the target constellation. To implement the reconfiguration phase of the satellite constellation arrangement at minimal cost, the hybrid Invasive Weed Optimization/Particle Swarm Optimization (IWO/PSO) algorithm is used to design sub-optimal transfer orbits for the satellites in the constellation. The dynamic model of the problem is also formulated in such a way that optimal assignment of the satellites to the initial and target orbits and optimal orbital transfer are combined in one step. Finally, we claim that the presented idea, i.e. coupled non-simultaneous flight of satellites from the initial orbital pattern, leads to minimal cost. The obtained results show that by employing the presented method, the cost of the reconfiguration process is reduced appreciably.
Dumay-Odelot, Hélène; Durrieu-Gaillard, Stéphanie; El Ayoubi, Leyla; Parrot, Camila; Teichmann, Martin
2014-01-01
Human RNA polymerase III transcribes small untranslated RNAs that contribute to the regulation of essential cellular processes, including transcription, RNA processing and translation. Analysis of this transcription system by in vitro transcription techniques has largely contributed to the discovery of its transcription factors and to the understanding of the regulation of human RNA polymerase III transcription. Here we review some of the key steps that led to the identification of transcription factors and to the definition of minimal promoter sequences for human RNA polymerase III transcription. PMID:25764111
Zhang, Zhechun; Goldtzvik, Yonathan; Thirumalai, D
2017-11-14
Kinesin walks processively on microtubules (MTs) in an asymmetric hand-over-hand manner consuming one ATP molecule per 16-nm step. The individual contributions due to docking of the approximately 13-residue neck linker to the leading head (deemed to be the power stroke) and diffusion of the trailing head (TH) that contributes in propelling the motor by 16 nm have not been quantified. We use molecular simulations by creating a coarse-grained model of the MT-kinesin complex, which reproduces the measured stall force as well as the force required to dislodge the motor head from the MT, to show that nearly three-quarters of the step occurs by bidirectional stochastic motion of the TH. However, docking of the neck linker to the leading head constrains the extent of diffusion and minimizes the probability that kinesin takes side steps, implying that both events are necessary in the motility of kinesin and for the maintenance of processivity. Surprisingly, we find that during a single step, the TH stochastically hops multiple times between the geometrically accessible neighboring sites on the MT before forming a stable interaction with the target binding site with correct orientation between the motor head and the αβ-tubulin dimer.
PROCESS USING BISMUTH PHOSPHATE AS A CARRIER PRECIPITATE FOR FISSION PRODUCTS AND PLUTONIUM VALUES
Finzel, T.G.
1959-03-10
A process is described for separating plutonium from fission products carried therewith when plutonium in the reduced oxidation state is removed from a nitric acid solution of irradiated uranium by means of bismuth phosphate as a carrier precipitate. The bismuth phosphate carrier precipitate is dissolved by treatment with nitric acid and the plutonium therein is oxidized to the hexavalent oxidation state by means of potassium dichromate. Separation of the plutonium from the fission products is accomplished by again precipitating bismuth phosphate and removing the precipitate which now carries the fission products and a small percentage of the plutonium present. The amount of plutonium carried in this last step may be minimized by addition of sodium fluoride, so as to make the solution 0.03N in NaF, prior to the oxidation and precipitation step.
Chained Kullback-Leibler Divergences
Pavlichin, Dmitri S.; Weissman, Tsachy
2017-01-01
We define and characterize the "chained" Kullback-Leibler divergence min_w D(p‖w) + D(w‖q), minimized over all intermediate distributions w, and the analogous k-fold chained K-L divergence min D(p‖w_{k−1}) + … + D(w_2‖w_1) + D(w_1‖q), minimized over the entire path (w_1,…,w_{k−1}). This quantity arises in a large deviations analysis of a Markov chain on the set of types – the Wright-Fisher model of neutral genetic drift: a population with allele distribution q produces offspring with allele distribution w, which then produce offspring with allele distribution p, and so on. The chained divergences enjoy some of the same properties as the K-L divergence (like joint convexity in the arguments) and appear in k-step versions of some of the same settings as the K-L divergence (like information projections and a conditional limit theorem). We further characterize the optimal k-step "path" of distributions appearing in the definition and apply our findings in a large deviations analysis of the Wright-Fisher process. We make a connection to information geometry via the previously studied continuum limit, where the number of steps tends to infinity, and the limiting path is a geodesic in the Fisher information metric. Finally, we offer a thermodynamic interpretation of the chained divergence (as the rate of operation of an appropriately defined Maxwell's demon) and we state some natural extensions and applications (a k-step mutual information and k-step maximum likelihood inference). We release code for computing the objects we study. PMID:29130024
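Since the chained divergence generally has no closed form, a small numerical sketch is given below (this is not the authors' released code): it minimizes D(p‖w) + D(w‖q) over the probability simplex for small discrete distributions, using a softmax parameterization to keep w strictly positive.

```python
# Numerically compute the chained K-L divergence min_w D(p||w) + D(w||q)
# for small discrete distributions (illustrative sketch).
import numpy as np
from scipy.optimize import minimize
from scipy.special import rel_entr   # rel_entr(a, b) = a*log(a/b) elementwise

def kl(a, b):
    """Discrete Kullback-Leibler divergence D(a||b)."""
    return rel_entr(a, b).sum()

def chained_kl(p, q):
    """Minimize D(p||w) + D(w||q) over probability vectors w."""
    n = p.size
    def objective(z):
        w = np.exp(z - z.max())          # softmax keeps w strictly positive
        w = w / w.sum()
        return kl(p, w) + kl(w, q)
    res = minimize(objective, x0=np.zeros(n), method="Nelder-Mead")
    w = np.exp(res.x - res.x.max())
    return res.fun, w / w.sum()

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
value, w_opt = chained_kl(p, q)
print(value, w_opt)                       # intermediate distribution lies between p and q
```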
High-quality compressive ghost imaging
NASA Astrophysics Data System (ADS)
Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun
2018-04-01
We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a single minimization problem. Simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
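For orientation, a minimal sketch of the projected Landweber iteration at the core of such a regularization step is given below. The guided-filter denoising step and the ghost-imaging measurement model are omitted; the sensing matrix, signal, and step size are illustrative placeholders.

```python
# Illustrative projected Landweber iteration with a nonnegativity projection.
import numpy as np

def projected_landweber(A, y, n_iter=200, tau=None):
    """Iterate x <- P(x + tau * A^T (y - A x)), where P clips to x >= 0."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)            # Landweber (gradient) step
        x = np.clip(x, 0.0, None)                  # projection onto the constraint set
    return x

# Tiny compressive example: 64 random measurements of a sparse 256-pixel signal.
rng = np.random.default_rng(3)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[[10, 50, 200]] = 1.0
x_rec = projected_landweber(A, A @ x_true)
```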
The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI
Gargouri, Fatma; Kallel, Fathi; Delphine, Sebastien; Ben Hamida, Ahmed; Lehéricy, Stéphane; Valabregue, Romain
2018-01-01
Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step), had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency with additional filtering for local efficiency. PMID:29497372
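For reference, the two graph measures compared above can be computed as follows; this is an illustrative sketch (not the study's pipeline), using a random connectivity matrix and an arbitrary binarization threshold in place of preprocessed rs-fMRI data.

```python
# Global and local efficiency of a binary graph obtained by thresholding a
# functional connectivity matrix (random data and threshold are illustrative).
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
ts = rng.standard_normal((200, 90))          # 200 time points, 90 regions
conn = np.corrcoef(ts.T)                     # functional connectivity matrix
np.fill_diagonal(conn, 0.0)

adjacency = (np.abs(conn) > 0.1).astype(int) # keep edges above a cost threshold
G = nx.from_numpy_array(adjacency)

print("global efficiency:", nx.global_efficiency(G))
print("local efficiency: ", nx.local_efficiency(G))
```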
Ngodock, Hans; Carrier, Matthew; Fabre, Josette; Zingarelli, Robert; Souopgui, Innocent
2017-07-01
This study presents the theoretical framework for variational data assimilation of acoustic pressure observations into an acoustic propagation model, namely, the range dependent acoustic model (RAM). RAM uses the split-step Padé algorithm to solve the parabolic equation. The assimilation consists of minimizing a weighted least squares cost function that includes discrepancies between the model solution and the observations. The minimization process, which uses the principle of variations, requires the derivation of the tangent linear and adjoint models of the RAM. The mathematical derivations are presented here, and, for the sake of brevity, a companion study presents the numerical implementation and results from the assimilation of simulated acoustic pressure observations.
Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.
Schoene, Daniel; Delbaere, Kim; Lord, Stephen R
2017-08-01
Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response-selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls in this time with the relative risk per standard deviation for the reaction time (iCSRT-RT) = 1.23 (95%CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go test stepping paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Brower, Kevin P; Ryakala, Venkat K; Bird, Ryan; Godawat, Rahul; Riske, Frank J; Konstantinov, Konstantin; Warikoo, Veena; Gamble, Jean
2014-01-01
Downstream sample purification for quality attribute analysis is a significant bottleneck in process development for non-antibody biologics. Multi-step chromatography process train purifications are typically required prior to many critical analytical tests. This prerequisite leads to limited throughput, long lead times to obtain purified product, and significant resource requirements. In this work, immunoaffinity purification technology has been leveraged to achieve single-step affinity purification of two different enzyme biotherapeutics (Fabrazyme® [agalsidase beta] and Enzyme 2) with polyclonal and monoclonal antibodies, respectively, as ligands. Target molecules were rapidly isolated from cell culture harvest in sufficient purity to enable analysis of critical quality attributes (CQAs). Most importantly, this is the first study that demonstrates the application of predictive analytics techniques to predict critical quality attributes of a commercial biologic. The data obtained using the affinity columns were used to generate appropriate models to predict quality attributes that would be obtained after traditional multi-step purification trains. These models empower process development decision-making with drug substance-equivalent product quality information without generation of actual drug substance. Optimization was performed to ensure maximum target recovery and minimal target protein degradation. The methodologies developed for Fabrazyme were successfully reapplied for Enzyme 2, indicating platform opportunities. The impact of the technology is significant, including reductions in time and personnel requirements, rapid product purification, and substantially increased throughput. Applications are discussed, including upstream and downstream process development support to achieve the principles of Quality by Design (QbD) as well as integration with bioprocesses as a process analytical technology (PAT). © 2014 American Institute of Chemical Engineers.
Gajendragadkar, Chinmay N; Gogate, Parag R
2016-09-01
The current review focuses on the analysis of different aspects related to intensified recovery of possible valuable products from cheese whey using ultrasound. Ultrasound can be used for process intensification in processing steps such as pre-treatment, ultrafiltration, spray drying and crystallization. The combination of low-frequency, high-intensity ultrasound with the pre-heat treatment minimizes the thickening or gelling of protein-containing whey solutions. These characteristics of whey after the ultrasound-assisted pretreatment help improve the efficacy of ultrafiltration used for separation and also help prevent blockage of the spray dryer atomizing orifice. Further, the heat stability of whey proteins is increased. In the subsequent processing step, use of ultrasound-assisted atomization helps to reduce the treatment times as well as yield a better quality whey protein concentrate (WPC) powder. After the removal of proteins from the whey, lactose is a major constituent remaining in the solution, which can be efficiently recovered by sonocrystallization using ethanol as an anti-solvent. The scale-up parameters to be considered when designing the process for large scale applications are also discussed along with analysis of various reactor designs. Overall, it appears that use of ultrasound can give significant process intensification benefits that can be harnessed even at commercial scale applications. Copyright © 2016 Elsevier B.V. All rights reserved.
Immobilization techniques to avoid enzyme loss from oxidase-based biosensors: a one-year study.
House, Jody L; Anderson, Ellen M; Ward, W Kenneth
2007-01-01
Continuous amperometric sensors that measure glucose or lactate require a stable sensitivity, and glutaraldehyde crosslinking has been used widely to avoid enzyme loss. Nonetheless, little data is published on the effectiveness of enzyme immobilization with glutaraldehyde. A combination of electrochemical testing and spectrophotometric assays was used to study the relationship between enzyme shedding and the fabrication procedure. In addition, we studied the relationship between the glutaraldehyde concentration and sensor performance over a period of one year. The enzyme immobilization process by glutaraldehyde crosslinking to glucose oxidase appears to require at least 24-hours at room temperature to reach completion. In addition, excess free glucose oxidase can be removed by soaking sensors in purified water for 20 minutes. Even with the addition of these steps, however, it appears that there is some free glucose oxidase entrapped within the enzyme layer which contributes to a decline in sensitivity over time. Although it reduces the ultimate sensitivity (probably via a change in the enzyme's natural conformation), glutaraldehyde concentration in the enzyme layer can be increased in order to minimize this instability. After exposure of oxidase enzymes to glutaraldehyde, effective crosslinking requires a rinse step and a 24-hour incubation step. In order to minimize the loss of sensor sensitivity over time, the glutaraldehyde concentration can be increased.
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when the a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
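For readers who want a concrete picture of the two ingredients the abstract combines, the sketch below pairs an off-the-shelf partial (top-k) sort with a simple knee criterion. It is not the paper's cascading, probability-guided algorithm; the drop-based knee rule and the example scores are assumptions for illustration only.

```python
import heapq

def top_k_descending(values, k):
    # Partial sort: keep only the k largest values, in descending order.
    # (The paper builds its own top-k sort on a quicksort variation;
    # heapq.nlargest is used here as an equivalent off-the-shelf partial sort.)
    return heapq.nlargest(k, values)

def largest_knee(values, k):
    # Hypothetical knee criterion: on the top-k sorted curve, report the
    # position with the largest drop to its successor.
    top = top_k_descending(values, k)
    drops = [top[i] - top[i + 1] for i in range(len(top) - 1)]
    i = max(range(len(drops)), key=drops.__getitem__)
    return i, top[i]

# Example: scores from an anomaly detector, knee searched within the top 8.
scores = [0.2, 7.9, 0.3, 8.1, 0.1, 3.2, 0.4, 0.2, 2.9, 0.3, 0.25]
print(largest_knee(scores, 8))
```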
Protecting and landscaping homes in the wildland/urban interface
Yvonne C. Barkeley; Chris Schnepf; Jack D. Cohen
2004-01-01
This publication is designed to help you minimize the risks of losing your home from wildfire. The first step is to understand wildfire and how homes are destroyed. Next, consider the fire resistance of your house and the surrounding landscape, and take the necessary steps to minimize your home ignition potential. After taking care of your home and immediate...
ERIC Educational Resources Information Center
Murph, Debra; McCormick, Sandra
1985-01-01
A 12-step procedure was used in teaching five minimally literate, male juvenile offenders to read and interpret prototypes of road signs displaying words, and a 5-step procedure for interpreting a sign without words. All students' correct responses in reading and interpreting signs increased and were maintained during subsequent post-checks.…
Object detection in cinematographic video sequences for automatic indexing
NASA Astrophysics Data System (ADS)
Stauder, Jurgen; Chupeau, Bertrand; Oisel, Lionel
2003-06-01
This paper presents an object detection framework applied to cinematographic post-processing of video sequences. Post-processing is done after production and before editing. At the beginning of each shot of a video, a slate (also called clapperboard) is shown. The slate notably contains an electronic audio timecode that is necessary for audio-visual synchronization. The framework detects slates in video sequences for automatic indexing and post-processing and is based on five steps. The first two steps aim to drastically reduce the video data to be analyzed. They ensure a high recall rate but have low precision. The first step detects images at the beginning of a shot that possibly show a slate, while the second step searches these images for candidate regions with a color distribution similar to slates. The objective is to not miss any slate while eliminating long parts of video without slate appearance. The third and fourth steps use statistical classification and pattern matching to detect and precisely locate slates in candidate regions. These steps ensure a high recall rate and high precision. The objective is to detect slates with very few false alarms to minimize interactive corrections. In the last step, electronic timecodes are read from slates to automate audio-visual synchronization. The presented slate detector has a recall rate of 89% and a precision of 97.5%. By temporal integration, much more than 89% of shots in dailies are detected. By timecode coherence analysis, the precision can be raised too. Issues for future work are to accelerate the system to be faster than real time and to extend the framework to several slate types.
Perceptual Color Characterization of Cameras
Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo
2014-01-01
Color camera characterization, mapping outputs from the camera sensors to an independent color space, such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures. PMID:25490586
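As background for the baseline the paper improves on, the following sketch fits the conventional 3 × 3 characterization matrix by ordinary least squares. The RGB/XYZ training patches are invented, and the paper's perceptual objectives (ΔE, S-CIELAB, CID) and spherical sampling are not implemented here.

```python
import numpy as np

# Hypothetical training patches (rows = samples): camera RGB and reference XYZ.
rgb = np.array([[0.2, 0.3, 0.1],
                [0.8, 0.5, 0.4],
                [0.1, 0.6, 0.7],
                [0.4, 0.4, 0.4]])
xyz = np.array([[0.25, 0.28, 0.12],
                [0.70, 0.62, 0.35],
                [0.30, 0.55, 0.75],
                [0.40, 0.42, 0.41]])

# Solve min_M || rgb @ M - xyz ||^2; the paper replaces this purely numerical
# error with perceptual error measures searched via spherical sampling.
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
print(rgb @ M)  # characterized XYZ estimates for the training patches
```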
Kim, Hong-Seok; Choi, Dasom; Kang, Il-Byeong; Kim, Dong-Hyeon; Yim, Jin-Hyeok; Kim, Young-Ji; Chon, Jung-Whan; Oh, Deog-Hwan; Seo, Kun-Ho
2017-02-01
Culture-based detection of nontyphoidal Salmonella spp. in foods requires at least four working days; therefore, new detection methods that shorten the test time are needed. In this study, we developed a novel single-step Salmonella enrichment broth, SSE-1, and compared its detection capability with that of commercial single-step ONE broth-Salmonella (OBS) medium and a conventional two-step enrichment method using buffered peptone water and Rappaport-Vassiliadis soy broth (BPW-RVS). Minimally processed lettuce samples were artificially inoculated with low levels of healthy and cold-injured Salmonella Enteritidis (10^0 or 10^1 colony-forming units/25 g), incubated in OBS, BPW-RVS, and SSE-1 broths, and streaked on xylose lysine deoxycholate (XLD) agar. Salmonella recoverability was significantly higher in BPW-RVS (79.2%) and SSE-1 (83.3%) compared to OBS (39.3%) (p < 0.05). Our data suggest that the SSE-1 single-step enrichment broth could completely replace two-step enrichment with reduced enrichment time from 48 to 24 h, performing better than commercial single-step enrichment medium in the conventional nonchromogenic Salmonella detection, thus saving time, labor, and cost.
Solid State Lighting Program (Falcon)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meeks, Steven
2012-06-30
Over the past two years, KLA-Tencor and partners successfully developed and deployed software and hardware tools that increase product yield for High Brightness LED (HBLED) manufacturing and reduce product development and factory ramp times. This report summarizes our development effort and details of how the results of the Solid State Light Program (Falcon) have started to help HBLED manufacturers optimize process control by enabling them to flag and correct identified killer defect conditions at any point of origin in the process manufacturing flow. This constitutes a quantum leap in yield management over current practice. Current practice consists of die dispositioning, which is just rejection of bad die at end of process based upon probe tests, loosely assisted by optical in-line monitoring for gross process deficiencies. For the first time, and as a result of our Solid State Lighting Program, our LED manufacturing partners have obtained the software and hardware tools that optimize individual process steps to control killer defects at the point in the processes where they originate. Products developed during our two year program enable optimized inspection strategies for many product lines to minimize cost and maximize yield. The Solid State Lighting Program was structured in three phases: i) the development of advanced imaging modes that achieve clear separation between LED defect types, improves signal to noise and scan rates, and minimizes nuisance defects for both front end and back end inspection tools, ii) the creation of defect source analysis (DSA) software that connect the defect maps from back-end and front-end HBLED manufacturing tools to permit the automatic overlay and traceability of defects between tools and process steps, suppress nuisance defects, and identify the origin of killer defects with process step and conditions, and iii) working with partners (Philips Lumileds) on product wafers, obtain a detailed statistical correlation of automated defect and DSA map overlay to failed die identified using end product probe test results. Results from our two year effort have led to “automated end-to-end defect detection” with full defect traceability and the ability to unambiguously correlate device killer defects to optically detected features and their point of origin within the process. Success of the program can be measured by yield improvements at our partner’s facilities and new product orders.
Elasto-plastic flow in cracked bodies using a new finite element model. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Karabin, M. E., Jr.
1977-01-01
Cracked geometries were studied by finite element techniques with the aid of a new special element embedded at the crack tip. This model sought to accurately represent the singular stresses and strains associated with the elasto-plastic flow process. The present model was not restricted to a material type and did not predetermine a singularity. Rather, the singularity was treated as an unknown. For each step of the incremental process the nodal degrees of freedom and the unknown singularity were found through minimization of an energy-like functional. The singularity and nodal degrees of freedom were determined by means of an iterative process.
Oxidation resistant coatings for ceramic matrix composite components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaubert, V.M.; Stinton, D.P.; Hirschfeld, D.A.
Corrosion resistant Ca0.6Mg0.4Zr4(PO4)6 (CMZP) and Ca0.5Sr0.5Zr4(PO4)6 (CS-50) coatings for fiber-reinforced SiC-matrix composite heat exchanger tubes have been developed. Aqueous slurries of both oxides were prepared with high solids loading. One coating process consisted of dipping the samples in a slip. A tape casting process has also been created that produced relatively thin and dense coatings covering a large area. A processing technique was developed, utilizing a pre-sintering step, which produced coatings with minimal cracking.
Wideband Agile Digital Microwave Radiometer
NASA Technical Reports Server (NTRS)
Gaier, Todd C.; Brown, Shannon T.; Ruf, Christopher; Gross, Steven
2012-01-01
The objectives of this work were to take the initial steps needed to develop a field programmable gate array (FPGA)-based wideband digital radiometer backend (>500 MHz bandwidth) that will enable passive microwave observations with minimal performance degradation in a radiofrequency-interference (RFI)-rich environment. As manmade RF emissions increase over time and fill more of the microwave spectrum, microwave radiometer science applications will be increasingly impacted in a negative way, and the current generation of spaceborne microwave radiometers that use broadband analog back ends will become severely compromised or unusable over an increasing fraction of time on orbit. There is a need to develop a digital radiometer back end that, for each observation period, uses digital signal processing (DSP) algorithms to identify the maximum amount of RFI-free spectrum across the radiometer band to preserve bandwidth to minimize radiometer noise (which is inversely related to the bandwidth). Ultimately, the objective is to incorporate all processing necessary in the back end to take contaminated input spectra and produce a single output value free of manmade signals to minimize data rates for spaceborne radiometer missions. But, to meet these objectives, several intermediate processing algorithms had to be developed, and their performance characterized relative to typical brightness temperature accuracy requirements for current and future microwave radiometer missions, including those for measuring salinity, soil moisture, and snow pack.
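The abstract does not spell out the RFI-excision algorithm, so the snippet below shows one plausible scheme of that general kind: flag sub-channels sitting far above a robust (median/MAD) baseline and average only the clean ones, reporting how much bandwidth survives. The threshold, channel count, and injected interference are assumptions.

```python
import numpy as np

def rfi_filtered_average(channel_power, k=5.0):
    # Flag channels more than k robust standard deviations above the median,
    # then average the remaining (presumed RFI-free) channels.
    median = np.median(channel_power)
    mad = np.median(np.abs(channel_power - median)) + 1e-12
    clean = channel_power[np.abs(channel_power - median) < k * 1.4826 * mad]
    return clean.mean(), clean.size / channel_power.size  # value, kept bandwidth fraction

rng = np.random.default_rng(1)
spectrum = rng.normal(100.0, 1.0, 512)   # 512 sub-channels of natural emission
spectrum[50:55] += 40.0                  # hypothetical narrowband RFI
print(rfi_filtered_average(spectrum))
```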
NASA Technical Reports Server (NTRS)
Barone, Michael R. (Inventor); Murdoch, Karen (Inventor); Scull, Timothy D. (Inventor); Fort, James H. (Inventor)
2009-01-01
A rotary phase separator system generally includes a step-shaped rotary drum separator (RDS) and a motor assembly. The aspect ratio of the stepped drum minimizes power for both the accumulating and pumping functions. The accumulator section of the RDS has a relatively small diameter to minimize power losses within an axial length to define significant volume for accumulation. The pumping section of the RDS has a larger diameter to increase pumping head but has a shorter axial length to minimize power losses. The motor assembly drives the RDS at a low speed for separating and accumulating and a higher speed for pumping.
Non-Contact Conductivity Measurement for Automated Sample Processing Systems
NASA Technical Reports Server (NTRS)
Beegle, Luther W.; Kirby, James P.
2012-01-01
A new method has been developed for monitoring and control of automated sample processing and preparation especially focusing on desalting of samples before analytical analysis (described in more detail in Automated Desalting Apparatus, (NPO-45428), NASA Tech Briefs, Vol. 34, No. 8 (August 2010), page 44). The use of non-contact conductivity probes, one at the inlet and one at the outlet of the solid phase sample preparation media, allows monitoring of the process, and acts as a trigger for the start of the next step in the sequence (see figure). At each step of the multi-step process, the system is flushed with low-conductivity water, which sets the system back to an overall low-conductivity state. This measurement then triggers the next stage of sample processing protocols, and greatly minimizes use of consumables. In the case of amino acid sample preparation for desalting, the conductivity measurement will define three key conditions for the sample preparation process. First, when the system is neutralized (low conductivity, by washing with excess de-ionized water); second, when the system is acidified, by washing with a strong acid (high conductivity); and third, when the system is at a basic condition of high pH (high conductivity). Taken together, this non-contact conductivity measurement for monitoring sample preparation will not only facilitate automation of the sample preparation and processing, but will also act as a way to optimize the operational time and use of consumables.
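A minimal sketch of the trigger logic implied above, with assumed conductivity limits and hypothetical readings; the real apparatus's thresholds, units, and probe interface are not given in the abstract.

```python
# Illustrative threshold logic only: the abstract does not give the probe's
# units or limits, so the values below are assumptions.
LOW_LIMIT = 5.0                              # assumed baseline, microsiemens/cm

def classify_state(conductivity_us_cm, ph=None):
    # Map a reading onto the three conditions described above.
    if conductivity_us_cm < LOW_LIMIT:
        return "neutralized"                 # flushed with de-ionized water
    if ph is not None and ph > 7.0:
        return "basic"                       # high conductivity at high pH
    return "acidified"                       # high conductivity after strong acid

def next_step_triggers(readings):
    # Indices where the system has returned to the low-conductivity baseline,
    # i.e. where the next stage of the protocol would be triggered.
    return [i for i, c in enumerate(readings) if c < LOW_LIMIT]

print(classify_state(1.2), classify_state(350.0, ph=2.0), classify_state(300.0, ph=10.5))
print(next_step_triggers([320.0, 150.0, 40.0, 3.0, 2.5]))
```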
Attitude-Independent Magnetometer Calibration for Spin-Stabilized Spacecraft
NASA Technical Reports Server (NTRS)
Natanson, Gregory
2005-01-01
The paper describes a three-step estimator to calibrate a Three-Axis Magnetometer (TAM) using TAM and slit Sun or star sensor measurements. In the first step, the Calibration Utility forms a loss function from the residuals of the magnitude of the geomagnetic field. This loss function is minimized with respect to biases, scale factors, and nonorthogonality corrections. The second step minimizes residuals of the projection of the geomagnetic field onto the spin axis under the assumption that spacecraft nutation has been suppressed by a nutation damper. Minimization is done with respect to various directions of the body spin axis in the TAM frame. The direction of the spin axis in the inertial coordinate system required for the residual computation is assumed to be unchanged with time. It is either determined independently using other sensors or included in the estimation parameters. In both cases all estimation parameters can be found using simple analytical formulas derived in the paper. The last step is to minimize a third loss function formed by residuals of the dot product between the geomagnetic field and Sun or star vector with respect to the misalignment angle about the body spin axis. The method is illustrated by calibrating TAM for the Fast Auroral Snapshot Explorer (FAST) using in-flight TAM and Sun sensor data. The estimated parameters include magnetic biases, scale factors, and misalignment angles of the spin axis in the TAM frame. Estimation of the misalignment angle about the spin axis was inconclusive since (at least for the selected time interval) the Sun vector was about 15 degrees from the direction of the spin axis; as a result residuals of the dot product between the geomagnetic field and Sun vectors were to a large extent minimized as a by-product of the second step.
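The first calibration step can be pictured as a nonlinear least-squares fit of biases and scale factors to the reference field magnitude. The sketch below does exactly that on synthetic data; nonorthogonality corrections and the later steps (spin-axis projection, misalignment about the spin axis) are omitted, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, B_meas, B_ref_mag):
    # Residuals of the corrected TAM magnitude against the reference magnitude.
    bias, scale = p[:3], p[3:]
    B_corr = (B_meas - bias) * scale
    return np.linalg.norm(B_corr, axis=1) - B_ref_mag

# Synthetic "truth": field samples distorted by known scale factors and biases.
rng = np.random.default_rng(0)
B_true = rng.normal(size=(200, 3)) * 30000.0                  # nT
B_meas = B_true / np.array([1.02, 0.98, 1.01]) + np.array([120.0, -80.0, 40.0])

p0 = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])                 # initial guess
fit = least_squares(residuals, p0, args=(B_meas, np.linalg.norm(B_true, axis=1)))
print(fit.x)  # recovered [bias_x, bias_y, bias_z, scale_x, scale_y, scale_z]
```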
A general derivation and quantification of the third law of thermodynamics.
Masanes, Lluís; Oppenheim, Jonathan
2017-03-14
The most accepted version of the third law of thermodynamics, the unattainability principle, states that any process cannot reach absolute zero temperature in a finite number of steps and within a finite time. Here, we provide a derivation of the principle that applies to arbitrary cooling processes, even those exploiting the laws of quantum mechanics or involving an infinite-dimensional reservoir. We quantify the resources needed to cool a system to any temperature, and translate these resources into the minimal time or number of steps, by considering the notion of a thermal machine that obeys similar restrictions to universal computers. We generally find that the obtainable temperature can scale as an inverse power of the cooling time. Our results also clarify the connection between two versions of the third law (the unattainability principle and the heat theorem), and place ultimate bounds on the speed at which information can be erased.
Two-step optimization of pressure and recovery of reverse osmosis desalination process.
Liang, Shuang; Liu, Cui; Song, Lianfa
2009-05-01
Driving pressure and recovery are two primary design variables of a reverse osmosis process that largely determine the total cost of seawater and brackish water desalination. A two-step optimization procedure was developed in this paper to determine the values of driving pressure and recovery that minimize the total cost of RO desalination. It was demonstrated that the optimal net driving pressure is solely determined by the electricity price and the membrane price index, which is a lumped parameter to collectively reflect membrane price, resistance, and service time. On the other hand, the optimal recovery is determined by the electricity price, initial osmotic pressure, and costs for pretreatment of raw water and handling of retentate. Concise equations were derived for the optimal net driving pressure and recovery. The dependences of the optimal net driving pressure and recovery on the electricity price, membrane price, and costs for raw water pretreatment and retentate handling were discussed.
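To make the optimization concrete, the sketch below minimizes an assumed, simplified cost-per-permeate model over net driving pressure and recovery; the cost terms and coefficients are placeholders for illustration and are not the paper's derived equations.

```python
from scipy.optimize import minimize

# Assumed cost model per unit permeate: energy scales with applied pressure over
# recovery, membrane cost with the inverse of flux (taken proportional to net
# driving pressure), and pretreatment/retentate handling with feed volume (1/r).
def total_cost(x, pi0=25.0, c_energy=0.08, c_membrane=40.0, c_feed=0.15):
    ndp, r = x                               # net driving pressure (bar), recovery
    feed_pressure = ndp + pi0 / (1.0 - r)    # rough brine-side osmotic penalty
    energy = c_energy * feed_pressure / r
    membrane = c_membrane / ndp
    pretreat = c_feed / r
    return energy + membrane + pretreat

res = minimize(total_cost, x0=[15.0, 0.4], bounds=[(1.0, 80.0), (0.05, 0.9)])
print(res.x)   # cost-minimizing [net driving pressure, recovery]
```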
NASA Astrophysics Data System (ADS)
Blanco, S.; Orta-Rodriguez, R.; Delvasto, P.
2017-01-01
A hydrometallurgical recycling procedure for the recovery of a mixed rare earths sulfate and an electrodeposited Ni-Co alloy has been described. The latter step was found to be complex, due to the presence of several ions in the battery electrode materials. Electrochemical evaluation of the influence of the ions on the Ni-Co alloy deposition was carried out by cyclic voltammetry test. It was found that ions such as K+, Fe2+ and Mn2+ improved the current efficiency for the Ni-Co deposition process on a copper surface. On the other hand, Na+ and Zn2+ ions exhibited a deleterious behaviour, minimizing the values of the reduction current. The results were used to suggest the inclusion of additional steps in the process flow diagram of the recycling operation, in order to eliminate deleterious ions from the electroplating solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rice, Neal G.; Vu, M.; Kong, C.
Capsule drive in National Ignition Facility (NIF) indirect drive implosions is generated by x-ray illumination from cylindrical hohlraums. The cylindrical hohlraum geometry is axially symmetric but not spherically symmetric causing capsule-fuel drive asymmetries. We hypothesize that fabricating capsules asymmetric in wall thickness (shimmed) may compensate for drive asymmetries and improve implosion symmetry. Simulations suggest that for high compression implosions Legendre mode P4 hohlraum flux asymmetries are the most detrimental to implosion performance. General Atomics has developed a diamond turning method to form a GDP capsule outer surface to a Legendre mode P4 profile. The P4 shape requires full capsule surface coverage. Thus, in order to avoid tool-lathe interference flipping the capsule part way through the machining process is required. This flipping process risks misalignment of the capsule causing a vertical step feature on the capsule surface. Recent trials have proven this step feature height can be minimized to ~0.25 µm.
Willumstad, Thomas P.; Haze, Olesya; Mak, Xiao Yin; Lam, Tin Yiu; Wang, Yu-Pu; Danheiser*, Rick L.
2013-01-01
Highly substituted polycyclic aromatic and heteroaromatic compounds are produced via a two-stage tandem benzannulation/cyclization strategy. The initial benzannulation step proceeds via a pericyclic cascade mechanism triggered by thermal or photochemical Wolff rearrangement of a diazo ketone. The photochemical process can be performed using a continuous flow reactor which facilitates carrying out reactions on a large scale and minimizes the time required for photolysis. Carbomethoxy ynamides as well as more ketenophilic bissilyl ynamines and N-sulfonyl and N-phosphoryl ynamides serve as the reaction partner in the benzannulation step. In the second stage of the strategy, RCM generates benzofused nitrogen heterocycles, and various heterocyclization processes furnish highly substituted and polycyclic indoles of types that were not available by using the previous cyclobutenone-based version of the tandem strategy. PMID:24116731
TG study of the Li0.4Fe2.4Zn0.2O4 ferrite synthesis
NASA Astrophysics Data System (ADS)
Lysenko, E. N.; Nikolaev, E. V.; Surzhikov, A. P.
2016-02-01
In this paper, the kinetic analysis of Li-Zn ferrite synthesis was studied using the thermogravimetry (TG) method through the simultaneous application of non-linear regression to several measurements run at different heating rates (multivariate non-linear regression). Using TG-curves obtained for the four heating rates and the Netzsch Thermokinetics software package, kinetic models with minimal adjustable parameters were selected to quantitatively describe the reaction of Li-Zn ferrite synthesis. It was shown that the experimental TG-curves clearly suggest a two-step process for the ferrite synthesis and therefore a model-fitting kinetic analysis based on multivariate non-linear regression was conducted. The complex reaction was described by a two-step reaction scheme consisting of sequential reaction steps. It was established that the best results were obtained using the Jander three-dimensional diffusion model for the first step and the Ginstling-Bronstein model for the second step. The kinetic parameters for the lithium-zinc ferrite synthesis reaction were found and discussed.
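For reference, the integral forms g(α) of the two diffusion models named above are reproduced from the general solid-state kinetics literature (not from the paper itself) and evaluated over a grid of conversion values.

```python
import numpy as np

def jander(alpha):
    # Jander three-dimensional diffusion: g(a) = (1 - (1 - a)^(1/3))^2
    return (1.0 - (1.0 - alpha) ** (1.0 / 3.0)) ** 2

def ginstling_bronstein(alpha):
    # Ginstling-Bronstein (also spelled Ginstling-Brounshtein):
    # g(a) = 1 - 2a/3 - (1 - a)^(2/3)
    return 1.0 - 2.0 * alpha / 3.0 - (1.0 - alpha) ** (2.0 / 3.0)

alpha = np.linspace(0.05, 0.95, 7)   # degree of conversion
print(jander(alpha))
print(ginstling_bronstein(alpha))
```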
Hyun, Seung-Hyun; Ryew, Che-Cheong
2017-12-01
The aim of this study was to compare and analyze the components of ground reaction force (GRF) relative to foothold height during downward stepping from a 16-t truck. Adult males (n = 10) jumped down, in order, from the 1st, 2nd, and 3rd foothold steps and the driver's seat, with and without use of the hand rail. Force components along the 3 axes (medial-lateral [ML] GRF, anterior-posterior [AP] GRF, peak vertical force [PVF]), center of pressure (COP) variables (COPx, COPy, COP area), loading rate, and stability indices (ML, AP, vertical, and dynamic postural stability index [DPSI]) obtained from the GRF system were sampled at 1,000 Hz, and the variables were analyzed with repeated-measures one-way analysis of variance. AP GRF, PVF, and loading rate were higher when the hand rail was not used than when it was used at all of the 1st, 2nd, and 3rd foothold steps. DPSI indicated lower stability at the 2nd and 3rd steps than at the 1st foothold step when the hand rail was used, and the lowest stability when stepping down from the driver's seat. COPx, COPy, and COP area were higher at the 2nd and 3rd foothold steps than at the 1st, and indicated the lowest stability from the driver's seat. It is more desirable for cargo truck drivers to use an available hand rail and descend via the 3rd, 2nd, and 1st foothold steps than to step down directly, which may reduce falling injuries and minimize the impulsive force transferred to the musculoskeletal system.
Evidence-based dentistry skill acquisition by second-year dental students.
Marshall, T A; McKernan, S C; Straub-Morarend, C L; Guzman-Armstrong, S; Marchini, L; Handoo, N Q; Cunningham, M A
2018-05-22
Identification and assessment of Evidence-based dentistry (EBD) outcomes have been elusive. Our objective was to describe EBD skill acquisition during the second (D2) year of pre-doctoral dental education and student competency at the end of the year. The first and fourth (final) curricular-required EBD Exercises (ie, application of the first 4 steps of the 5-Step evidence-based practice process applied to a real or hypothetical situation) completed by D2 students (n = 151) during 2014-2015 and 2015-2016 were evaluated to measure skill acquisition through use of a novel rubric with measures of performance from novice to expert. Exercises were evaluated on the performance for each step, identification of manuscript details and reflective commentary on manuscript components. Changes in performance were evaluated using the chi-square test for trend and the Wilcoxon signed-rank test. Seventy-eight per cent of students scored competent or higher on the Ask step at the beginning of the D2 year; scores improved with 58% scoring proficient or expert on the fourth Exercise (P < .001). Most students were advanced beginners or higher in the Acquire, Appraise and Apply steps at the beginning of the D2 year, with minimal growth observed during the year. Identification of manuscript details improved between the first and fourth Exercises (P = .015); however, depth of commentary skills did not change. Unlike previous investigations evaluating EBD knowledge or behaviour in a testing situation, we evaluated skill acquisition using applied Exercises. Consistent with their clinical and scientific maturity, D2 students minimally performed as advanced beginners at the end of their D2 year. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
[Influence on microstructure of dental zirconia ceramics prepared by two-step sintering].
Jian, Chao; Li, Ning; Wu, Zhikai; Teng, Jing; Yan, Jiazhen
2013-10-01
To investigate the microstructure of dental zirconia ceramics prepared by two-step sintering. Nanostructured zirconia powder was dry compacted, cold isostatic pressed, and pre-sintered. The pre-sintered discs were cut and processed into samples. Conventional sintering, single-step sintering, and two-step sintering were carried out, and the density and grain size of the samples were measured. Afterward, the ranges of T1 and T2 for two-step sintering were determined. The effects of the different routes (two-step sintering and conventional sintering) on microstructure were discussed, and the influence of T1 and T2 on density and grain size was analyzed as well. The range of T1 was between 1450 degrees C and 1550 degrees C, and the range of T2 was between 1250 degrees C and 1350 degrees C. Compared with conventional sintering, a finer microstructure with higher density and smaller grain size could be obtained by two-step sintering. Grain growth was dependent on T1, whereas density was not strongly related to T1. Conversely, density was dependent on T2, whereas grain size was minimally influenced by it. Two-step sintering could ensure a sintered body with high density and small grain size, which is good for optimizing the microstructure of dental zirconia ceramics.
Advances in Stallion Semen Cryopreservation.
Alvarenga, Marco Antonio; Papa, Frederico Ozanam; Ramires Neto, Carlos
2016-12-01
The use of stallion frozen semen minimizes the spread of disease, eliminates geographic barriers, and preserves the genetic material of the animal for an unlimited time. Significant progress on the frozen thawed stallion semen process and consequently fertility has been achieved over the last decade. These improvements not only increased fertility rates but also allowed cryopreservation of semen from "poor freezers." This article reviews traditional steps and new strategies for stallion semen handling and processing that are performed to overcome the deleterious effects of semen preservation and consequently improve frozen semen quality and fertility. Copyright © 2016 Elsevier Inc. All rights reserved.
Physical modeling of stepped spillways
USDA-ARS?s Scientific Manuscript database
Stepped spillways applied to embankment dams are becoming popular for addressing the rehabilitation of aging watershed dams, especially those situated in the urban landscape. Stepped spillways are typically placed over the existing embankment, which provides for minimal disturbance to the original ...
Schuler, Friedrich; Schwemmer, Frank; Trotter, Martin; Wadle, Simon; Zengerle, Roland; von Stetten, Felix; Paust, Nils
2015-07-07
Aqueous microdroplets provide miniaturized reaction compartments for numerous chemical, biochemical or pharmaceutical applications. We introduce centrifugal step emulsification for the fast and easy production of monodisperse droplets. Homogeneous droplets with pre-selectable diameters in a range from 120 μm to 170 μm were generated with coefficients of variation of 2-4% and zero run-in time or dead volume. The droplet diameter depends on the nozzle geometry (depth, width, and step size) and interfacial tensions only. Droplet size is demonstrated to be independent of the dispersed phase flow rate between 0.01 and 1 μl/s, proving the robustness of the centrifugal approach. Centrifugal step emulsification can easily be combined with existing centrifugal microfluidic unit operations, is compatible with scalable manufacturing technologies such as thermoforming or injection moulding and enables fast emulsification (>500 droplets per second per nozzle) with minimal handling effort (2-3 pipetting steps). The centrifugal microfluidic droplet generation was used to perform the first digital droplet recombinase polymerase amplification (ddRPA). It was used for absolute quantification of Listeria monocytogenes DNA concentration standards with a total analysis time below 30 min. Compared to digital droplet polymerase chain reaction (ddPCR), with processing times of about 2 hours, the overall processing time of digital analysis was reduced by more than a factor of 4.
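The abstract does not give the quantification math, but digital droplet assays such as ddRPA conventionally rely on Poisson statistics; the snippet below shows that standard estimator with assumed droplet counts and an approximate nanolitre droplet volume consistent with the quoted diameters.

```python
import math

# Standard Poisson estimator for digital droplet assays (general background,
# not taken from the paper): if k of n droplets are positive, the mean number
# of copies per droplet is lambda = -ln(1 - k/n).
def copies_per_microlitre(n_droplets, n_positive, droplet_volume_nl):
    lam = -math.log(1.0 - n_positive / n_droplets)
    return lam / (droplet_volume_nl * 1e-3)   # convert nl to ul

# Assumed example: 500 droplets of ~1.4 nl each (a 140 um diameter droplet is
# roughly 1.4 nl), 80 of them positive.
print(copies_per_microlitre(500, 80, 1.4))
```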
Pretreatment of corn stover using low-moisture anhydrous ammonia (LMAA) process.
Yoo, Chang Geun; Nghiem, Nhuan P; Hicks, Kevin B; Kim, Tae Hyun
2011-11-01
A simple pretreatment method using anhydrous ammonia was developed to minimize water and ammonia inputs for cellulosic ethanol production, termed the low moisture anhydrous ammonia (LMAA) pretreatment. In this method, corn stover with 30-70% moisture was contacted with anhydrous ammonia in a reactor under nearly ambient conditions. After the ammoniation step, biomass was subjected to a simple pretreatment step at moderate temperatures (40-120°C) for 48-144 h. Pretreated biomass was saccharified and fermented without an additional washing step. With 3% glucan loading of LMAA-treated corn stover under best treatment conditions (0.1g-ammonia+1.0 g-water per g biomass, 80°C, and 84 h), simultaneous saccharification and cofermentation test resulted in 24.9 g/l (89% of theoretical ethanol yield based on glucan+xylan in corn stover). Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by the instrumental errors in the measured concentrations and model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and proceeds in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well known inversion techniques, called renormalization and least-square. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low wind stable conditions. With both the inversion techniques, a significant improvement is observed in the retrieval of source estimation after minimizing the representativity errors.
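Step two of the procedure can be illustrated with a plain linear fit between measured and predicted concentrations, whose slope and intercept then correct the predictions before the adjoint functions are rebuilt. The data points below are hypothetical.

```python
import numpy as np

# Hypothetical measured concentrations and first-pass predictions from the
# retrieved source; the fitted line corrects the predictions before the
# adjoint functions are modified and the inversion is repeated.
measured = np.array([1.8, 3.1, 0.9, 4.2, 2.5])
predicted = np.array([1.5, 2.8, 1.1, 3.6, 2.2])

slope, intercept = np.polyfit(predicted, measured, deg=1)   # measured ~ a*pred + b
adjusted = slope * predicted + intercept                     # corrected predictions
print(slope, intercept, adjusted)
```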
Updating the Finite Element Model of the Aerostructures Test Wing Using Ground Vibration Test Data
NASA Technical Reports Server (NTRS)
Lung, Shun-Fat; Pak, Chan-Gi
2009-01-01
Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool in order to optimize the objective function and constraints; the mass properties, the natural frequencies, and the mode shapes can be matched to the target data to retain the mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the aerostructures test wing (ATW), which was designed and tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.
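A toy version of the tuning problem helps fix ideas: adjust stiffness scale factors of a 2-DOF model so its analytical frequencies approach measured targets. The mass-property and mode-shape orthogonality constraints of the actual MDAO formulation are omitted, and all numbers are invented.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

M = np.diag([1.0, 1.5])                      # toy mass matrix
f_meas = np.array([0.9, 3.4])                # Hz, hypothetical GVT frequencies

def frequencies(k_scales):
    # Natural frequencies of the 2-DOF chain for the given stiffness scale factors.
    k1, k2 = 100.0 * k_scales[0], 250.0 * k_scales[1]
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    w2 = eigh(K, M, eigvals_only=True)       # generalized eigenvalues, (rad/s)^2
    return np.sqrt(w2) / (2.0 * np.pi)

def objective(x):
    # Sum of squared differences between analytical and measured frequencies.
    return np.sum((frequencies(x) - f_meas) ** 2)

print(minimize(objective, x0=[1.0, 1.0], bounds=[(0.2, 5.0)] * 2).x)
```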
Christine Todoroki; Eini Lowell
2006-01-01
The silvicultural practice of pruning juvenile stems is a value-adding operation due to the formation of knot-free wood after the pruned branch stubs have healed. However it is not until after the log has been processed that the added value is realized. The motivation for this paper stems from wanting to extract as much of that added value as possible while minimizing...
Systems Maintenance Automated Repair Tasks (SMART)
NASA Technical Reports Server (NTRS)
Schuh, Joseph; Mitchell, Brent; Locklear, Louis; Belson, Martin A.; Al-Shihabi, Mary Jo Y.; King, Nadean; Norena, Elkin; Hardin, Derek
2010-01-01
SMART is a uniform automated discrepancy analysis and repair-authoring platform that improves technical accuracy and timely delivery of repair procedures for a given discrepancy (see figure a). SMART will minimize data errors, create uniform repair processes, and enhance the existing knowledge base of engineering repair processes. This innovation is the first tool developed that links the hardware specification requirements with the actual repair methods, sequences, and required equipment. SMART is flexibly designed to be useable by multiple engineering groups requiring decision analysis, and by any work authorization and disposition platform (see figure b). The organizational logic creates the link between specification requirements of the hardware, and specific procedures required to repair discrepancies. The first segment in the SMART process uses a decision analysis tree to define all the permutations between component/ subcomponent/discrepancy/repair on the hardware. The second segment uses a repair matrix to define what the steps and sequences are for any repair defined in the decision tree. This segment also allows for the selection of specific steps from multivariable steps. SMART will also be able to interface with outside databases and to store information from them to be inserted into the repair-procedure document. Some of the steps will be identified as optional, and would only be used based on the location and the current configuration of the hardware. The output from this analysis would be sent to a work authoring system in the form of a predefined sequence of steps containing required actions, tools, parts, materials, certifications, and specific requirements controlling quality, functional requirements, and limitations.
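Purely as an illustration of the two SMART segments described above (not the actual platform's data model), a decision tree keyed by component/subcomponent/discrepancy can route to a repair code, and a repair matrix can map that code to ordered steps with tools and optional flags.

```python
# Hypothetical data structures for the two segments: a decision tree that yields
# a repair code, and a repair matrix mapping each code to its ordered steps.
decision_tree = {
    ("panel", "fastener", "corrosion"): "REP-014",
    ("panel", "fastener", "crack"): "REP-021",
}

repair_matrix = {
    "REP-014": [
        {"step": "clean affected area", "tools": ["solvent wipe"], "optional": False},
        {"step": "apply conversion coating", "tools": ["brush"], "optional": True},
        {"step": "reinstall fastener", "tools": ["torque wrench"], "optional": False},
    ],
}

def repair_procedure(component, subcomponent, discrepancy):
    # Segment 1: walk the decision tree; Segment 2: expand the repair matrix.
    code = decision_tree[(component, subcomponent, discrepancy)]
    return code, repair_matrix[code]

print(repair_procedure("panel", "fastener", "corrosion"))
```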
NASA Astrophysics Data System (ADS)
Hu, Qiang
2017-09-01
We develop an approach of the Grad-Shafranov (GS) reconstruction for toroidal structures in space plasmas, based on in situ spacecraft measurements. The underlying theory is the GS equation that describes two-dimensional magnetohydrostatic equilibrium, as widely applied in fusion plasmas. The geometry is such that the arbitrary cross-section of the torus has rotational symmetry about the rotation axis, Z, with a major radius, r0. The magnetic field configuration is thus determined by a scalar flux function, Ψ, and a functional F that is a single-variable function of Ψ. The algorithm is implemented through a two-step approach: i) a trial-and-error process by minimizing the residue of the functional F(Ψ) to determine an optimal Z-axis orientation, and ii) for the chosen Z, a χ2 minimization process resulting in a range of r0. Benchmark studies of known analytic solutions to the toroidal GS equation with noise additions are presented to illustrate the two-step procedure and to demonstrate the performance of the numerical GS solver, separately. For the cases presented, the errors in Z and r0 are 9° and 22%, respectively, and the relative percent error in the numerical GS solutions is smaller than 10%. We also make public the computer codes for these implementations and benchmark studies.
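A schematic of the two-step search, with placeholder functions standing in for the residue of F(Ψ) and the χ² comparison (both of which require the full GS machinery and spacecraft data): step one scans candidate Z orientations on a unit sphere, step two scans r0 for the chosen axis.

```python
import numpy as np

# Placeholder objectives (assumptions): in the real procedure these are built
# from the measured field along the spacecraft path and the numerical GS solver.
def residue_of_F(z_axis, z_true=np.array([0.29, 0.10, 0.95])):
    # Stands in for the scatter of F(Psi) for a trial Z orientation.
    z_true = z_true / np.linalg.norm(z_true)
    return 1.0 - float(np.dot(z_axis, z_true)) ** 2

def chi2(r0, z_axis):
    # Stands in for the misfit between the numerical GS solution and the data.
    return (r0 - 5.0) ** 2

# Step 1: trial-and-error over candidate Z orientations on a unit sphere.
thetas = np.linspace(0.0, np.pi, 19)
phis = np.linspace(0.0, 2.0 * np.pi, 37)
candidates = [np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
              for t in thetas for p in phis]
z_best = min(candidates, key=residue_of_F)

# Step 2: chi-square minimization over the major radius r0 for the chosen Z.
r0_grid = np.linspace(1.0, 20.0, 200)
r0_best = min(r0_grid, key=lambda r: chi2(r, z_best))
print(z_best, r0_best)
```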
A model for critical thinking measurement of dental student performance.
Johnsen, David C; Finkelstein, Michael W; Marshall, Teresa A; Chalkley, Yvonne M
2009-02-01
The educational application of critical thinking has increased in the last twenty years with programs like problem-based learning. Performance measurement related to the dental student's capacity for critical thinking remains elusive, however. This article offers a model now in use to measure critical thinking applied to patient assessment and treatment planning across the four years of the dental school curriculum and across clinical disciplines. Two elements of the model are described: 1) a critical thinking measurement "cell," and 2) a list of minimally essential steps in critical thinking for patient assessment and treatment planning. Issues pertaining to this model are discussed: adaptations on the path from novice to expert, the role of subjective measurement, variations supportive of the model, and the correlation of individual and institutional assessment. The critical thinking measurement cell consists of interacting performance tasks and measures. The student identifies the step in the process (for example, chief complaint) with objective measurement; the student then applies the step to a patient or case with subjective measurement; the faculty member then combines the objective and subjective measurements into an evaluation on progress toward competence. The activities in the cell are then repeated until all the steps in the process have been addressed. A next task is to determine consistency across the four years and across clinical disciplines.
Aerospace Fuels From Nonpetroleum Raw Materials
NASA Technical Reports Server (NTRS)
Palaszewski, Bryan A.; Hepp, Aloysius F.; Kulis, Michael J.; Jaworske, Donald A.
2013-01-01
Recycling human metabolic and plastic wastes minimizes cost and increases efficiency by reducing the need to transport consumables and return trash, respectively, from orbit to support a space station crew. If the much larger costs of transporting consumables to the Moon and beyond are taken into account, developing waste recycling technologies becomes imperative and possibly mission enabling. Reduction of terrestrial waste streams while producing energy and/or valuable raw materials is an opportunity being realized by a new generation of visionary entrepreneurs; several relevant technologies are briefly compared, contrasted and assessed for space applications. A two-step approach to nonpetroleum raw materials utilization is presented; the first step involves production of supply or producer gas. This is akin to synthesis gas containing carbon oxides, hydrogen, and simple hydrocarbons. The second step involves production of fuel via the Sabatier process, a methanation reaction, or another gas-to-liquid technology, typically Fischer-Tropsch processing. Optimization to enhance the fraction of product stream relevant to transportation fuels via catalytic (process) development at NASA Glenn Research Center is described. Energy utilization is a concern for production of fuels whether for operation on the lunar or Martian surface, or beyond. The term green relates to not only mitigating excess carbon release but also to the efficiency of energy usage. For space, energy usage can be an essential concern. Another issue of great concern is minimizing impurities in the product stream(s), especially those that are potential health risks and/or could degrade operations through catalyst poisoning or equipment damage; technologies being developed to remove heteroatom impurities are discussed. Alternative technologies to utilize waste fluids, such as a propulsion option called the resistojet, are discussed. The resistojet is an electric propulsion technology with a powered thruster to vaporize and heat a propellant to high temperature, hot gases are subsequently passed through a converging-diverging nozzle expanding gases to supersonic velocities. A resistojet can accommodate many different fluids, including various reaction chamber (by-)products.
NASA Technical Reports Server (NTRS)
Himmel, R. P.
1975-01-01
Resin systems for coating hybrids prior to hermetic sealing are described. The resin systems are a flexible silicone junction resin system and a flexible cycloaliphatic epoxy resin system. The coatings are intended for application to the hybrid after all the chips have been assembled and wire bonded, but prior to hermetic sealing of the package. The purpose of the coating is to control particulate contamination by immobilizing particles and by passivating the hybrid. Recommended process controls for the purpose of minimizing contamination in hybrid microcircuit packages are given. Emphasis is placed on those critical hybrid processing steps in which contamination is most likely to occur.
Coulton, Simon; Bland, Martin; Crosby, Helen; Dale, Veronica; Drummond, Colin; Godfrey, Christine; Kaner, Eileen; Sweetman, Jennifer; McGovern, Ruth; Newbury-Birch, Dorothy; Parrott, Steve; Tober, Gillian; Watson, Judith; Wu, Qi
2017-11-01
To compare the clinical effectiveness and cost-effectiveness of a stepped-care intervention versus a minimal intervention for the treatment of older hazardous alcohol users in primary care. Multi-centre, pragmatic RCT set in primary care in the UK. Patients aged ≥ 55 years scoring ≥ 8 on the Alcohol Use Disorders Identification Test were allocated either to 5 minutes of brief advice or to 'Stepped Care': an initial 20-minute session of behavioural change counselling, with Step 2 being three sessions of Motivational Enhancement Therapy and Step 3 referral to local alcohol services (progression between Steps being determined by outcomes 1 month after each Step). Outcome measures included average drinks per day, AUDIT-C, alcohol-related problems using the Drinking Problems Index, health-related quality of life using the Short Form 12, costs measured from an NHS/Personal Social Care perspective, and estimated health gains in quality-adjusted life-years assessed using the EQ-5D. Both groups reduced alcohol consumption at 12 months but the difference between groups was small and not significant. No significant differences were observed between the groups on secondary outcomes. In economic terms stepped care was less costly and more effective than the minimal intervention. Stepped care does not confer an advantage over a minimal intervention in terms of reduction in alcohol use for older hazardous alcohol users in primary care. However, stepped care has a greater probability of being more cost-effective. Current controlled trials ISRCTN52557360. A stepped care approach was compared with brief intervention for older at-risk drinkers attending primary care. While consumption reduced in both groups over 12 months, there was no significant difference between the groups. An economic analysis indicated that stepped care had a greater probability of being cost-effective than brief intervention. © The Author 2017. Medical Council on Alcohol and Oxford University Press. All rights reserved.
Media Fill Test for validation of autologous leukocytes separation and labelling by (99m)Tc-HmPAO.
Urbano, Nicoletta; Modoni, Sergio; Schillaci, Orazio
2013-01-01
Manufacturing of sterile products must be carried out in order to minimize risks of microbiological contamination. White blood cells (WBC) labelled with (99m)Tc-exametazime ((99m)Tc-hexamethylpropyleneamine oxime; (99m)Tc-HMPAO) have been successfully applied in the field of infection/inflammation scintigraphy for many years. In our radiopharmacy lab, separation and labelling of autologous leukocytes with (99m)Tc-HMPAO were performed in a non-classified laminar flow cabinet placed in a controlled area, whereas the (99m)Tc-HMPAO radiolabelling procedure was carried out in a hot cell with manipulator gloves. This study was conducted to validate this process using a Media Fill simulation test. The study was performed using sterile Tryptic Soy Broth (TSB) in place of the active product, reproducing as closely as possible the routine aseptic production process with all the critical steps, as described in our internal standard operating procedures (SOPs). The final vials containing the media from each process step were then incubated for 14 days and examined for evidence of microbial growth. No evidence of turbidity was observed in any of the steps assayed by the Media Fill. In the separation and labelling of autologous leukocytes with (99m)Tc-HMPAO, the Media Fill test represents a reliable tool for validating the aseptic process. Copyright © 2013 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakel, Allen J.; Conner, Cliff; Quigley, Kevin
One of the missions of the Reduced Enrichment for Research and Test Reactors (RERTR) program (and now the National Nuclear Security Administration's Material Management and Minimization program) is to facilitate the use of low enriched uranium (LEU) targets for 99Mo production. The conversion from highly enriched uranium (HEU) to LEU targets will require five to six times more uranium to produce an equivalent amount of 99Mo. The work discussed here addresses the technical challenges encountered in the treatment of uranyl nitrate hexahydrate (UNH)/nitric acid solutions remaining after the dissolution of LEU targets. Specifically, the focus of this work is the calcination of the uranium waste from 99Mo production using LEU foil targets and the Modified Cintichem Process. Work with our calciner system showed that high furnace temperature, a large vent tube, and a mechanical shield are beneficial for calciner operation. One- and two-step direct calcination processes were evaluated. The high-temperature one-step process led to contamination of the calciner system. The two-step direct calcination process operated stably and resulted in a relatively large amount of material in the calciner cup. Chemically assisted calcination using peroxide was rejected for further work due to the difficulty in handling the products. Chemically assisted calcination using formic acid was rejected due to unstable operation. Chemically assisted calcination using oxalic acid was recommended, although a better understanding of its chemistry is needed. Overall, this work showed that the two-step direct calcination and the in-cup oxalic acid processes are the best approaches for the treatment of the UNH/nitric acid waste solutions remaining from dissolution of LEU targets for 99Mo production.
A simplified bioprocess for human alpha-fetoprotein production from inclusion bodies.
Leong, Susanna S J; Middelberg, Anton P J
2007-05-01
A simple and effective Escherichia coli (E. coli) bioprocess is demonstrated for the preparation of recombinant human alpha-fetoprotein (rhAFP), a pharmaceutically promising protein that has important immunomodulatory functions. The new rhAFP process employs only unit operations that are easy to scale and validate, and reduces the complexity embedded in existing inclusion body processing methods. A key requirement in the establishment of this process was the attainment of high purity rhAFP prior to protein refolding because (i) rhAFP binds easily to hydrophobic contaminants once refolded, and (ii) rhAFP aggregates during renaturation, in a contaminant-dependent way. In this work, direct protein extraction from cell suspension was coupled with a DNA precipitation-centrifugation step prior to purification using two simple chromatographic steps. Refolding was conducted using a single-step, redox-optimized dilution refolding protocol, with refolding success determined by reversed phase HPLC analysis, ELISA, and circular dichroism spectroscopy. Quantitation of DNA and protein contaminant loads after each unit operation showed that contaminant levels were reduced to levels comparable to traditional flowsheets. Protein microchemical modification due to carbamylation in this urea-based process was identified and minimized, yielding a final refolded and purified product that was significantly purified from carbamylated variants. Importantly, this work conclusively demonstrates, for the first time, that a chemical extraction process can substitute the more complex traditional inclusion body processing flowsheet, without compromising product purity and yield. This highly intensified and simplified process is expected to be of general utility for the preparation of other therapeutic candidates expressed as inclusion bodies. (c) 2006 Wiley Periodicals, Inc.
Crosstalk Cancellation for a Simultaneous Phase Shifting Interferometer
NASA Technical Reports Server (NTRS)
Olczak, Eugene (Inventor)
2014-01-01
A method of minimizing fringe print-through in a phase-shifting interferometer includes the steps of: (a) determining multiple transfer functions of pixels in the phase-shifting interferometer; (b) computing a crosstalk term for each transfer function; and (c) displaying, to a user, a phase-difference map using the crosstalk terms computed in step (b). Determining a transfer function in step (a) includes measuring intensities of a reference beam and a test beam at the pixels, and measuring an optical path difference between the reference beam and the test beam at the pixels. Computing crosstalk terms in step (b) includes computing an N-dimensional vector, where N corresponds to the number of transfer functions, and the N-dimensional vector is obtained by minimizing a variance of a modulation function in phase-shifted images.
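A minimal sketch of the variance-minimization idea in step (b) is given below: per-frame correction coefficients (a stand-in for the N-dimensional crosstalk vector) are chosen so that the spatial variance of the four-step fringe modulation is minimized. The synthetic data, the four-frame algorithm and the Nelder-Mead optimizer are illustrative assumptions, not the patented method.

```python
# Illustration only: estimate per-frame gain corrections by minimizing the spatial
# variance of the fringe modulation computed from four phase-shifted frames.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic interferogram: tilt fringes with nominal shifts 0..270 deg and unknown
# per-frame gain errors that produce fringe print-through in the modulation map.
ny, nx = 64, 64
phase = np.tile(np.linspace(0, 4 * np.pi, nx), (ny, 1))
true_gains = np.array([1.00, 0.92, 1.05, 0.97])
shifts = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
frames = np.stack([g * (1.0 + 0.8 * np.cos(phase + s))
                   + 0.01 * rng.standard_normal((ny, nx))
                   for g, s in zip(true_gains, shifts)])

def modulation(frames, gains):
    """Four-step fringe modulation map after applying per-frame gain corrections."""
    i1, i2, i3, i4 = [f / g for f, g in zip(frames, gains)]
    num = np.sqrt((i4 - i2) ** 2 + (i1 - i3) ** 2)
    return 2.0 * num / (i1 + i2 + i3 + i4)

def cost(gains):
    # Print-through shows up as spatial variation of the modulation, so its
    # variance is the quantity to minimize over the correction vector.
    return np.var(modulation(frames, gains))

res = minimize(cost, x0=np.ones(4), method="Nelder-Mead")
print("estimated gains:", np.round(res.x / res.x[0], 3))  # overall scale is arbitrary
print("true gains     :", np.round(true_gains / true_gains[0], 3))
```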
Besselink, Marc G H; van Santvoort, Hjalmar C; Nieuwenhuijs, Vincent B; Boermeester, Marja A; Bollen, Thomas L; Buskens, Erik; Dejong, Cornelis H C; van Eijck, Casper H J; van Goor, Harry; Hofker, Sijbrand S; Lameris, Johan S; van Leeuwen, Maarten S; Ploeg, Rutger J; van Ramshorst, Bert; Schaapherder, Alexander F M; Cuesta, Miguel A; Consten, Esther C J; Gouma, Dirk J; van der Harst, Erwin; Hesselink, Eric J; Houdijk, Lex P J; Karsten, Tom M; van Laarhoven, Cees J H M; Pierie, Jean-Pierre E N; Rosman, Camiel; Bilgen, Ernst Jan Spillenaar; Timmer, Robin; van der Tweel, Ingeborg; de Wit, Ralph J; Witteman, Ben J M; Gooszen, Hein G
2006-04-11
The initial treatment of acute necrotizing pancreatitis is conservative. Intervention is indicated in patients with (suspected) infected necrotizing pancreatitis. In the Netherlands, the standard intervention is necrosectomy by laparotomy followed by continuous postoperative lavage (CPL). In recent years several minimally invasive strategies have been introduced. So far, these strategies have never been compared in a randomised controlled trial. The PANTER study (PAncreatitis, Necrosectomy versus sTEp up appRoach) was conceived to yield the evidence needed for a considered policy decision. 88 patients with (suspected) infected necrotizing pancreatitis will be randomly allocated to either group A) minimally invasive 'step-up approach' starting with drainage followed, if necessary, by videoscopic assisted retroperitoneal debridement (VARD) or group B) maximal necrosectomy by laparotomy. Both procedures are followed by CPL. Patients will be recruited from 20 hospitals, including all Dutch university medical centres, over a 3-year period. The primary endpoint is the proportion of patients suffering from postoperative major morbidity and mortality. Secondary endpoints are complications, new onset sepsis, length of hospital and intensive care stay, quality of life and total (direct and indirect) costs. To demonstrate that the 'step-up approach' can reduce the major morbidity and mortality rate from 45 to 16%, with 80% power at 5% alpha, a total sample size of 88 patients was calculated. The PANTER-study is a randomised controlled trial that will provide evidence on the merits of a minimally invasive 'step-up approach' in patients with (suspected) infected necrotizing pancreatitis.
Liu, Gang; Bao, Jie
2017-12-01
Energy consumption and wastewater generation in cellulosic ethanol production are among the determinant factors in overall cost and technology penetration into the fuel ethanol industry. This study analyzed the energy consumption and wastewater generation by the new biorefining process technology, dry acid pretreatment and biodetoxification (DryPB), as well as by the current mainstream technologies. DryPB minimizes the steam consumption to 8.63 GJ and wastewater generation to 7.71 tons in the core steps of the biorefining process for production of one metric ton of ethanol, close to 7.83 GJ and 8.33 tons in corn ethanol production, respectively. The relatively higher electricity consumption is compensated by the large electricity surplus from lignin residue combustion. The minimum ethanol selling price (MESP) by DryPB is below $2/gal and falls into the range of corn ethanol production cost. The work indicates that the technical and economic gap between cellulosic ethanol and corn ethanol has been nearly closed. Copyright © 2017 Elsevier Ltd. All rights reserved.
A holistic framework for design of cost-effective minimum water utilization network.
Wan Alwi, S R; Manan, Z A; Samingin, M H; Misran, N
2008-07-01
Water pinch analysis (WPA) is a well-established tool for the design of a maximum water recovery (MWR) network. MWR, which is primarily concerned with water recovery and regeneration, only partly addresses water minimization problem. Strictly speaking, WPA can only lead to maximum water recovery targets as opposed to the minimum water targets as widely claimed by researchers over the years. The minimum water targets can be achieved when all water minimization options including elimination, reduction, reuse/recycling, outsourcing and regeneration have been holistically applied. Even though WPA has been well established for the synthesis of MWR networks, research towards holistic water minimization has lagged behind. This paper describes a new holistic framework for designing a cost-effective minimum water network (CEMWN) for industry and urban systems. The framework consists of five key steps, i.e. (1) Specify the limiting water data, (2) Determine MWR targets, (3) Screen process changes using water management hierarchy (WMH), (4) Apply Systematic Hierarchical Approach for Resilient Process Screening (SHARPS) strategy, and (5) Design water network. Three key contributions have emerged from this work. First is a hierarchical approach for systematic screening of process changes guided by the WMH. Second is a set of four new heuristics for implementing process changes that considers the interactions among process changes options as well as among equipment and the implications of applying each process change on utility targets. Third is the SHARPS cost-screening technique to customize process changes and ultimately generate a minimum water utilization network that is cost-effective and affordable. The CEMWN holistic framework has been successfully implemented on semiconductor and mosque case studies and yielded results within the designer payback period criterion.
Statistical Mechanics and Dynamics of the Outer Solar System.I. The Jupiter/Saturn Zone
NASA Technical Reports Server (NTRS)
Grazier, K. R.; Newman, W. I.; Kaula, W. M.; Hyman, J. M.
1996-01-01
We report on numerical simulations designed to understand how the solar system evolved through a winnowing of planetesimals accreted from the early solar nebula. This sorting process is driven by the energy and angular momentum and continues to the present day. We reconsider the existence and importance of stable niches in the Jupiter/Saturn Zone using greatly improved numerical techniques based on high-order optimized multi-step integration schemes coupled to roundoff-error-minimizing methods.
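The abstract does not name the roundoff-error-minimizing methods; Kahan (compensated) summation is one standard such technique, sketched below as an assumed illustration of the idea of accumulating many small per-step contributions without losing low-order bits.

```python
# Illustrative sketch (the specific method used in the paper is not stated here):
# Kahan compensated summation keeps a running correction for the bits lost when a
# small increment is added to a much larger accumulator, as happens when many tiny
# integration steps are accumulated over a long simulation.
def kahan_sum(values):
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - comp            # subtract previously lost bits
        t = total + y           # low-order bits of y may be lost here
        comp = (t - total) - y  # recover what was just lost
        total = t
    return total

small_steps = [0.1] * 10_000_000
print(sum(small_steps))         # naive accumulation drifts away from 1,000,000
print(kahan_sum(small_steps))   # compensated accumulation stays accurate to ~1 ulp
```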
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This article describes how Broward County, Florida and Browning-Ferris Industries (Houston, Texas) implemented a highly accelerated recycling project that had a county-wide recycling system fully operational in 180 days. The program is a strong step toward speeding compliance with Florida's mandated 30 percent recycling goal. The 1.2 million citizens in Broward County began recycling materials in dual curbside bins on October 1, 1993. Previously, the participating communities all acted autonomously. Minimal volumes of newspaper, aluminum, clear glass, and some plastic were collected by curbsort vehicles and processed at small local recycling centers.
NASA Astrophysics Data System (ADS)
Yin, Shizhuo; Zhang, Xueqian; Cheung, Joseph; Wu, Juntao; Zhan, Chun; Xue, Jinchao
2004-07-01
In this paper, a unique non-contact, minimally invasive technique for the assessment of the mechanical properties of a single cardiac myocyte is presented. The assessment process includes the following major steps: (1) attach a micro magnetic bead to the cell to be measured, (2) measure the contractile performance of the cell under different magnetic field loadings, (3) calculate the mechanical loading force, and (4) derive the contractile force from the measured contraction data under different magnetic field loadings.
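A minimal sketch of the loading-force calculation in step (3) is given below, assuming a magnetically saturated bead so that the force is simply the bead moment times the field gradient; the moment and gradient are assumed order-of-magnitude values, not measured data.

```python
# Minimal sketch of step (3): magnetic loading force on the attached bead, assuming
# the bead is saturated so F = m * dB/dz along the gradient direction. The values
# below are assumed, order-of-magnitude examples for a few-micron superparamagnetic
# bead, not data from the study.
bead_moment = 1.4e-13    # A*m^2, assumed saturation moment of the bead
field_gradient = 100.0   # T/m, assumed field gradient at the cell
force = bead_moment * field_gradient
print(f"loading force ~ {force*1e12:.1f} pN")  # ~14 pN for these assumed values
```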
A Minimal Optical Trapping and Imaging Microscopy System
Hernández Candia, Carmen Noemí; Tafoya Martínez, Sara; Gutiérrez-Medina, Braulio
2013-01-01
We report the construction and testing of a simple and versatile optical trapping apparatus, suitable for visualizing individual microtubules (∼25 nm in diameter) and performing single-molecule studies, using a minimal set of components. This design is based on a conventional, inverted microscope, operating under plain bright field illumination. A single laser beam enables standard optical trapping and the measurement of molecular displacements and forces, whereas digital image processing affords real-time sample visualization with reduced noise and enhanced contrast. We have tested our trapping and imaging instrument by measuring the persistence length of individual double-stranded DNA molecules, and by following the stepping of single kinesin motor proteins along clearly imaged microtubules. The approach presented here provides a straightforward alternative for studies of biomaterials and individual biomolecules. PMID:23451216
Functionalization of SiO2 Surfaces for Si Monolayer Doping with Minimal Carbon Contamination.
van Druenen, Maart; Collins, Gillian; Glynn, Colm; O'Dwyer, Colm; Holmes, Justin D
2018-01-17
Monolayer doping (MLD) involves the functionalization of semiconductor surfaces followed by an annealing step to diffuse the dopant into the substrate. We report an alternative doping method, oxide-MLD, where ultrathin SiO2 overlayers are functionalized with phosphonic acids for doping Si. Similar peak carrier concentrations were achieved when compared with hydrosilylated surfaces (∼2 × 10^20 atoms/cm^3). Oxide-MLD offers several advantages over conventional MLD, such as ease of sample processing, superior ambient stability, and minimal carbon contamination. The incorporation of an oxide layer minimizes carbon contamination by facilitating attachment of carbon-free precursors or by impeding carbon diffusion. The oxide-MLD strategy allows selection of many inexpensive precursors and therefore allows application to both p- and n-doping. The phosphonic acid-functionalized SiO2 surfaces were investigated using X-ray photoelectron spectroscopy and attenuated total reflectance Fourier transform infrared spectroscopy, whereas doping was assessed using electrochemical capacitance-voltage and Hall measurements.
Comparison of Minimally and More Invasive Methods of Determining Mixed Venous Oxygen Saturation.
Smit, Marli; Levin, Andrew I; Coetzee, Johan F
2016-04-01
To investigate the accuracy of a minimally invasive, 2-step, lookup method for determining mixed venous oxygen saturation compared with conventional techniques. Single-center, prospective, nonrandomized, pilot study. Tertiary care hospital, university setting. Thirteen elective cardiac and vascular surgery patients. All participants received intra-arterial and pulmonary artery catheters. Minimally invasive oxygen consumption and cardiac output were measured using a metabolic module and lithium-calibrated arterial waveform analysis (LiDCO; LiDCO, London), respectively. For the minimally invasive method, Step 1 involved these minimally invasive measurements, and arterial oxygen content was entered into the Fick equation to calculate mixed venous oxygen content. Step 2 used an oxyhemoglobin curve spreadsheet to look up mixed venous oxygen saturation from the calculated mixed venous oxygen content. The conventional "invasive" technique used pulmonary artery intermittent thermodilution cardiac output, direct sampling of mixed venous and arterial blood, and the "reverse-Fick" method of calculating oxygen consumption. LiDCO overestimated thermodilution cardiac output by 26%. Pulmonary artery catheter-derived oxygen consumption underestimated metabolic module measurements by 27%. Mixed venous oxygen saturation differed between techniques; the calculated values underestimated the direct measurements by between 12% and 26.3%, this difference being statistically significant. The magnitude of the differences between the minimally invasive and invasive techniques was too great for the former to act as a surrogate of the latter and could adversely affect clinical decision making. Copyright © 2016 Elsevier Inc. All rights reserved.
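A sketch of the 2-step calculation is given below, with standard oxygen-content constants and a Severinghaus-form dissociation curve assumed in place of the study's spreadsheet lookup; all numerical inputs are example values.

```python
# Sketch of the two-step calculation (assumed standard constants; the study's own
# lookup table is not reproduced). Step 1 solves the Fick equation for mixed venous
# O2 content; Step 2 inverts a Severinghaus-form oxyhemoglobin dissociation curve
# to read off the corresponding saturation.
from scipy.optimize import brentq

HUFNER = 1.34   # mL O2 per g Hb (assumed binding capacity)
SOL = 0.003     # mL O2 per dL blood per mmHg of dissolved O2

def o2_content(sat, hb, po2):
    """Oxygen content in mL O2 per dL blood."""
    return HUFNER * hb * sat + SOL * po2

def severinghaus_sat(po2):
    """Approximate O2 saturation (0-1) as a function of PO2 in mmHg."""
    return 1.0 / (23400.0 / (po2 ** 3 + 150.0 * po2) + 1.0)

def mixed_venous_saturation(vo2, cardiac_output, hb, sao2, pao2):
    # Step 1: Fick -> CvO2 (contents in mL/dL, CO in L/min, VO2 in mL/min).
    cao2 = o2_content(sao2, hb, pao2)
    cvo2 = cao2 - vo2 / (cardiac_output * 10.0)
    # Step 2: find the PvO2 (and hence SvO2) whose content matches CvO2.
    pvo2 = brentq(lambda p: o2_content(severinghaus_sat(p), hb, p) - cvo2, 1.0, 150.0)
    return severinghaus_sat(pvo2)

# Example with assumed values: VO2 250 mL/min, CO 5 L/min, Hb 12 g/dL, SaO2 0.98.
svo2 = mixed_venous_saturation(vo2=250.0, cardiac_output=5.0, hb=12.0,
                               sao2=0.98, pao2=95.0)
print(f"calculated SvO2 ~ {svo2:.2f}")
```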
Biomechanical influences on balance recovery by stepping.
Hsiao, E T; Robinovitch, S N
1999-10-01
Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.
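The sketch below is a reduced, single-pendulum illustration (not the authors' full pendulum-spring model): the body falls from the perturbation until foot contact, showing how longer step contact times leave a larger lean angle and angular velocity to be arrested by the stepping leg. The anthropometric values are assumed round numbers.

```python
# Minimal sketch, not the authors' model: treat the body as an inverted pendulum
# falling from an initial push until the stepping foot contacts the ground. The
# state at contact grows with contact time, which is one reason slower steps demand
# larger leg contact forces. Values below are assumed, round-number examples.
import numpy as np
from scipy.integrate import solve_ivp

G = 9.81      # m/s^2
L_COM = 1.0   # m, assumed height of the center of mass (pendulum length)

def fall_until_contact(theta0, omega0, contact_time):
    """Integrate theta'' = (g/L) sin(theta) from the perturbation to foot contact."""
    def rhs(_t, y):
        theta, omega = y
        return [omega, (G / L_COM) * np.sin(theta)]
    sol = solve_ivp(rhs, (0.0, contact_time), [theta0, omega0], max_step=1e-3)
    return sol.y[0, -1], sol.y[1, -1]  # lean angle (rad) and angular velocity at contact

# Same perturbation (5 deg lean, 0.4 rad/s push), increasingly slow steps:
for t_contact in (0.3, 0.5, 0.7):
    theta_c, omega_c = fall_until_contact(np.radians(5.0), 0.4, t_contact)
    print(f"contact at {t_contact:.1f} s: lean {np.degrees(theta_c):5.1f} deg, "
          f"angular velocity {omega_c:.2f} rad/s")
```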
Esthetic Rehabilitation of Anterior Teeth with Laminates Composite Veneers
Riva, Giancarlo
2014-01-01
No- or minimal-preparation veneers associated with enamel preservation offer predictable results in esthetic dentistry; indirect additive anterior composite restorations represent a quick, minimally invasive, inexpensive, and repairable option for a smile enhancement treatment plan. Current laboratory techniques associated with a strict clinical protocol satisfy patients' restorative and esthetic needs. The case report presented describes minimal invasive treatment of four upper incisors with laminate nanohybrid resin composite veneers. A step-by-step protocol is proposed for diagnostic evaluation, mock-up fabrication and trial, teeth preparation and impression, and adhesive cementation. The resolution of initial esthetic issues, patient satisfaction, and nice integration of indirect restorations confirmed the success of this anterior dentition rehabilitation. PMID:25013730
Silicon-Based Ceramic-Matrix Composites for Advanced Turbine Engines: Some Degradation Issues
NASA Technical Reports Server (NTRS)
Thomas-Ogbuji, Linus U. J.
2000-01-01
SiC/BN/SiC composites are designed to take advantage of the high specific strengths and moduli of non-oxide ceramics, and their excellent resistance to creep, chemical attack, and oxidation, while circumventing the brittleness inherent in ceramics. Hence, these composites have the potential to take turbine engines of the future to higher operating temperatures than is achievable with metal alloys. However, these composites remain developmental and more work needs to be done to optimize processing techniques. This paper highlights the lingering issue of pest degradation in these materials and shows that it results from vestiges of processing steps and can thus be minimized or eliminated.
The flight planning - flight management connection
NASA Technical Reports Server (NTRS)
Sorensen, J. A.
1984-01-01
Airborne flight management systems are currently being implemented to minimize direct operating costs when flying over a fixed route between a given city pair. Inherent in the design of these systems is that the horizontal flight path and wind and temperature models be defined and input into the airborne computer before flight. The wind/temperature model and horizontal path are products of the flight planning process. Flight planning consists of generating 3-D reference trajectories through a forecast wind field subject to certain ATC and transport operator constraints. The interrelationships between flight management and flight planning are reviewed, and the steps taken during the flight planning process are summarized.
Study on formation of step bunching on 6H-SiC (0001) surface by kinetic Monte Carlo method
NASA Astrophysics Data System (ADS)
Li, Yuan; Chen, Xuejiang; Su, Juan
2016-05-01
The formation and evolution of step bunching during step-flow growth of 6H-SiC (0001) surfaces were studied by a three-dimensional kinetic Monte Carlo (KMC) method and compared with an analytic model based on the theory of Burton-Cabrera-Frank (BCF). In the KMC model the crystal lattice was represented by a structured mesh which fixed the position of atoms and interatomic bonding. The events considered in the model were adatom adsorption and diffusion on the terrace, and adatom attachment, detachment and interlayer transport at the step edges. In addition, effects of Ehrlich-Schwoebel (ES) barriers at downward step edges and incorporation barriers at upward step edges were also considered. In order to obtain more elaborate information about the behavior of atoms on the crystal surface, silicon and carbon atoms were treated as the minimal diffusing species. KMC simulation results showed that multiple-height steps were formed on the vicinal surface oriented toward the [1-100] or [11-20] directions. The formation mechanism of the step bunching was then analyzed. Finally, to further analyze the formation processes of step bunching, a one-dimensional BCF analytic model with ES and incorporation barriers was used and solved numerically. In the BCF model, periodic boundary conditions (PBC) were applied, and the parameters corresponded to those used in the KMC model. The evolution of the step bunching was consistent with the results obtained by the KMC simulation.
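As a toy illustration in the spirit of the 1D BCF step-train analysis (not the paper's KMC or BCF model), the sketch below advances each step at a rate given by a weighted sum of the terrace widths ahead of and behind it, with periodic boundary conditions; the weight asymmetry q, a crude stand-in for ES/incorporation barrier effects, decides whether a uniform train persists or bunches. All parameters are illustrative.

```python
# Toy step-train sketch: dx_i/dt = flux * (q * w_ahead + (1 - q) * w_behind), with
# periodic boundary conditions. For this toy model, q > 0.5 keeps the train uniform
# while q < 0.5 amplifies perturbations (step bunching). Illustration only.
import numpy as np

def evolve_step_train(n_steps=50, mean_width=1.0, q=0.3, flux=1.0,
                      dt=1e-3, n_iter=5000, seed=1):
    """Return terrace widths after explicit-Euler integration of the step train."""
    rng = np.random.default_rng(seed)
    # Uniform train with small random displacements of the step positions.
    x = np.arange(n_steps) * mean_width + 0.005 * rng.standard_normal(n_steps)
    length = n_steps * mean_width
    for _ in range(n_iter):
        w_ahead = (np.roll(x, -1) - x) % length   # terrace in front of each step
        w_behind = (x - np.roll(x, 1)) % length   # terrace behind each step
        x = x + dt * flux * (q * w_ahead + (1.0 - q) * w_behind)
    return (np.roll(x, -1) - x) % length

for q in (0.7, 0.3):   # stable vs bunching regimes of the toy model
    widths = evolve_step_train(q=q)
    print(f"q = {q}: terrace-width std = {widths.std():.3f} (mean {widths.mean():.3f})")
```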
Giersch, Anne; Mishara, Aaron L.
2017-01-01
Decades ago, several authors proposed that disorders in automatic processing lead to intrusive symptoms or abnormal contents in the consciousness of people with schizophrenia. However, since then, studies have mainly highlighted difficulties in patients’ conscious experiencing and processing but rarely explored how unconscious and conscious mechanisms may interact in producing this experience. We report three lines of research, focusing on the processing of spatial frequencies, unpleasant information, and time-event structure that suggest that impairments occur at both the unconscious and conscious level. We argue that focusing on unconscious, physiological and automatic processing of information in patients, while contrasting that processing with conscious processing, is a first required step before understanding how distortions or other impairments emerge at the conscious level. We then indicate that the phenomenological tradition of psychiatry supports a similar claim and provides a theoretical framework helping to understand the relationship between the impairments and clinical symptoms. We base our argument on the presence of disorders in the minimal self in patients with schizophrenia. The minimal self is tacit and non-verbal and refers to the sense of bodily presence. We argue this sense is shaped by unconscious processes, whose alteration may thus affect the feeling of being a unique individual. This justifies a focus on unconscious mechanisms and a distinction from those associated with consciousness. PMID:29033868
Effect of Processing on Silk-Based Biomaterials: Reproducibility and Biocompatibility
Wray, Lindsay S.; Hu, Xiao; Gallego, Jabier; Georgakoudi, Irene; Omenetto, Fiorenzo G.; Schmidt, Daniel; Kaplan, David L.
2012-01-01
Silk fibroin has been successfully used as a biomaterial for tissue regeneration. In order to prepare silk fibroin biomaterials for human implantation a series of processing steps are required to purify the protein. Degumming to remove inflammatory sericin is a crucial step related to biocompatibility and variability in the material. Detailed characterization of silk fibroin degumming is reported. The degumming conditions significantly affected cell viability on the silk fibroin material and the ability to form three-dimensional porous scaffolds from the silk fibroin, but did not affect macrophage activation or β-sheet content in the materials formed. Methods are also provided to determine the content of residual sericin in silk fibroin solutions and to assess changes in silk fibroin molecular weight. Amino acid composition analysis was used to detect sericin residuals in silk solutions with a detection limit between 1.0% and 10% wt/wt, while fluorescence spectroscopy was used to reproducibly distinguish between silk samples with different molecular weights. Both methods are simple and require minimal sample volume, providing useful quality control tools for silk fibroin preparation processes. PMID:21695778
Adaptive Implicit Non-Equilibrium Radiation Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philip, Bobby; Wang, Zhen; Berrill, Mark A
2013-01-01
We describe methods for accurate and efficient long term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
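An elementary sketch of error-based step-size control is shown below; the paper's local-control-theory controller is more sophisticated, so this is only an assumed illustration of the basic rescaling rule that keeps accuracy while minimizing the number of time steps.

```python
# Elementary sketch of error-based step-size control (not the paper's controller):
# after each step, compare a local error estimate against a tolerance and rescale
# the step, so accuracy is held while using as few global time steps as possible.
def next_step_size(dt, error_estimate, tol, order=2,
                   safety=0.9, grow_max=2.0, shrink_min=0.2):
    """Standard rule: dt_new = safety * dt * (tol/err)^(1/(order+1)), clamped."""
    if error_estimate == 0.0:
        factor = grow_max
    else:
        factor = safety * (tol / error_estimate) ** (1.0 / (order + 1))
    factor = min(grow_max, max(shrink_min, factor))
    return dt * factor

# A step whose local error is 10x the tolerance gets cut back, while a step 100x
# more accurate than needed is allowed to grow (up to the clamp).
print(next_step_size(dt=1.0e-3, error_estimate=1.0e-5, tol=1.0e-6))  # shrinks
print(next_step_size(dt=1.0e-3, error_estimate=1.0e-8, tol=1.0e-6))  # grows
```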
Post-fall decision tree development and implementation.
Gordon, Bonita M; Wnek, Theresa Frissora; Glorius, Nancy; Hasdorff, Carmen; Shiverski, Joyce; Ginn, Janet
2010-01-01
Care and evaluation after a patient's fall require a number of steps to ensure that appropriate care is given and injury is minimized. Astute and appropriate assessment skills with strategic interventions and communication can minimize the harm from a fall. Post-Fall Decision Guidelines were developed to guide care and treatment and to identify potential complications after a patient has fallen. This systematic approach mobilizes the steps of communication, using the Situation-Background-Assessment-Recommendation (SBAR) format, and guides assessment interventions.
Cima, Robert R; Brown, Michael J; Hebl, James R; Moore, Robin; Rogers, James C; Kollengode, Anantha; Amstutz, Gwendolyn J; Weisbrod, Cheryl A; Narr, Bradly J; Deschamps, Claude
2011-07-01
Operating rooms (ORs) are resource-intense and costly hospital units. Maximizing OR efficiency is essential to maintaining an economically viable institution. OR efficiency projects often focus on a limited number of ORs or cases. Efforts across an entire OR suite have not been reported. Lean and Six Sigma methodologies were developed in the manufacturing industry to increase efficiency by eliminating non-value-added steps. We applied Lean and Six Sigma methodologies across an entire surgical suite to improve efficiency. A multidisciplinary surgical process improvement team constructed a value stream map of the entire surgical process from the decision for surgery to discharge. Each process step was analyzed in 3 domains, i.e., personnel, information processed, and time. Multidisciplinary teams addressed 5 work streams to increase value at each step: minimizing volume variation; streamlining the preoperative process; reducing nonoperative time; eliminating redundant information; and promoting employee engagement. Process improvements were implemented sequentially in surgical specialties. Key performance metrics were collected before and after implementation. Across 3 surgical specialties, process redesign resulted in substantial improvements in on-time starts and reduction in number of cases past 5 pm. Substantial gains were achieved in nonoperative time, staff overtime, and ORs saved. These changes resulted in substantial increases in margin/OR/day. Use of Lean and Six Sigma methodologies increased OR efficiency and financial performance across an entire operating suite. Process mapping, leadership support, staff engagement, and sharing performance metrics are keys to enhancing OR efficiency. The performance gains were substantial, sustainable, positive financially, and transferrable to other specialties. Copyright © 2011 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Belabbassi, L.; Garzio, L. M.; Smith, M. J.; Knuth, F.; Vardaro, M.; Kerfoot, J.
2016-02-01
The Ocean Observatories Initiative (OOI), funded by the National Science Foundation, provides users with access to long-term datasets from a variety of deployed oceanographic sensors. The Pioneer Array in the Atlantic Ocean off the Coast of New England hosts 10 moorings and 6 gliders. Each mooring is outfitted with 6 to 19 different instruments telemetering more than 1000 data streams. These data are available to science users to collaborate on common scientific goals such as water quality monitoring and scale variability measures of continental shelf processes and coastal open ocean exchanges. To serve this purpose, the acquired datasets undergo an iterative multi-step quality assurance and quality control procedure automated to work with all types of data. Data processing involves several stages, including a fundamental pre-processing step when the data are prepared for processing. This takes a considerable amount of processing time and is often not given enough thought in development initiatives. The volume and complexity of OOI data necessitates the development of a systematic diagnostic tool to enable the management of a comprehensive data information system for the OOI arrays. We present two examples to demonstrate the current OOI pre-processing diagnostic tool. First, Data Filtering is used to identify incomplete, incorrect, or irrelevant parts of the data and then replaces, modifies or deletes the coarse data. This provides data consistency with similar datasets in the system. Second, Data Normalization occurs when the database is organized in fields and tables to minimize redundancy and dependency. At the end of this step, the data are stored in one place to reduce the risk of data inconsistency and promote easy and efficient mapping to the database.
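An illustrative sketch of the two pre-processing stages described above is given below, using hypothetical column and instrument names rather than the actual OOI schema.

```python
# Illustration of the two pre-processing stages described above, with hypothetical
# column names (not the OOI schema). Filtering drops or repairs incomplete,
# incorrect, or irrelevant records; normalization splits the flat stream into keyed
# tables so each fact is stored once, minimizing redundancy and inconsistency.
import pandas as pd

raw = pd.DataFrame({
    "mooring":    ["CP01", "CP01", "CP02", "CP02", "CP02"],
    "instrument": ["CTD-01", "CTD-01", "CTD-02", "CTD-02", None],
    "timestamp":  pd.to_datetime(["2016-01-01 00:00", "2016-01-01 00:15",
                                  "2016-01-01 00:00", "2016-01-01 00:00",
                                  "2016-01-01 00:30"]),
    "sea_water_temperature": [8.1, 8.2, 7.9, -999.0, 8.0],  # -999 = fill value
})

# --- Data Filtering: drop records without an instrument ID, replace known fill
# values with NaN, and remove duplicate (instrument, timestamp) rows.
filtered = (raw.dropna(subset=["instrument"])
               .replace(-999.0, float("nan"))
               .drop_duplicates(subset=["instrument", "timestamp"], keep="first"))

# --- Data Normalization: store instrument metadata once; keep only keyed
# measurements in the observation table.
instruments = filtered[["instrument", "mooring"]].drop_duplicates()
observations = filtered[["instrument", "timestamp", "sea_water_temperature"]]
print(instruments, observations, sep="\n\n")
```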
Ni-MH spent batteries: a raw material to produce Ni-Co alloys.
Lupi, Carla; Pilone, Daniela
2002-01-01
Spent Ni-MH batteries are heterogeneous and complex materials, so any kind of metallurgical recovery process needs at least a mechanical pre-treatment to separate ferrous materials and recyclable plastics (such as ABS), both to gain additional profit from this saleable scrap and to minimize the waste arising from the breaking and separation process. Pyrometallurgical processing is not suitable for treating Ni-MH batteries, mainly because of rare earth losses in the slag. On the other hand, the hydrometallurgical method, which offers better opportunities in terms of recovery yield and higher purity of Ni, Co, and RE, requires several process steps, as shown in the technical literature. The main problems during leach liquor purification are the removal of elements such as Mn, Zn and Cd, dissolved during the leaching step, and the separation of Ni from Co. In the present work, the latter problem is overcome by co-deposition of a Ni-35/40 wt% Co alloy of good quality. The experiments carried out in a laboratory-scale pilot plant show that a current efficiency higher than 91% can be reached in long-duration electrowinning tests performed at 50 °C and a catholyte pH of 4.3.
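A short sketch of the current-efficiency bookkeeping behind the quoted >91% figure follows, via Faraday's law with two electrons per Ni2+ or Co2+ ion; the deposit mass, current and duration are assumed example values, not the paper's data.

```python
# Sketch of the current-efficiency calculation for Ni-Co alloy electrowinning:
# compare the charge actually needed to deposit the weighed alloy (Faraday's law,
# 2 electrons per metal ion) with the charge passed. Example numbers are assumed.
F = 96485.0                      # C/mol
M = {"Ni": 58.69, "Co": 58.93}   # g/mol
Z = 2                            # electrons per Ni2+ or Co2+ reduced

def current_efficiency(deposit_mass_g, w_co, current_a, time_s):
    """Fraction of the passed charge that went into Ni-Co deposition."""
    moles_metal = deposit_mass_g * ((1.0 - w_co) / M["Ni"] + w_co / M["Co"])
    charge_needed = moles_metal * Z * F
    return charge_needed / (current_a * time_s)

# Example: 10.0 g of Ni-38 wt% Co deposited while 2.0 A was passed for 5.0 hours.
eff = current_efficiency(deposit_mass_g=10.0, w_co=0.38,
                         current_a=2.0, time_s=5.0 * 3600)
print(f"current efficiency ~ {eff:.1%}")
```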
Process for selection of oxygen-tolerant algal mutants that produce H2
Ghirardi, M.L.; Seibert, M.
1999-02-16
A process for selection of oxygen-tolerant, H2-producing algal mutant cells comprises: (a) growing algal cells photoautotrophically under fluorescent light to mid log phase; (b) inducing algal cells grown photoautotrophically under fluorescent light to mid log phase in step (a) anaerobically by (1) resuspending the cells in a buffer solution and making said suspension anaerobic with an inert gas and (2) incubating the suspension in the absence of light at ambient temperature; (c) treating the cells from step (b) with metronidazole, sodium azide, and added oxygen to controlled concentrations in the presence of white light; (d) washing off metronidazole and sodium azide to obtain final cell suspension; (e) plating said final cell suspension on a minimal medium and incubating in light at a temperature sufficient to enable colonies to appear; (f) counting the number of colonies to determine the percent of mutant survivors; and (g) testing survivors to identify oxygen-tolerant H2-producing mutants. 5 figs.
Unveiling the Biometric Potential of Finger-Based ECG Signals
Lourenço, André; Silva, Hugo; Fred, Ana
2011-01-01
The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system, that uses signals collected at the fingers, through a minimally intrusive 1-lead ECG setup using Ag/AgCl electrodes without gel as the interface with the skin. The collected signal is significantly more noisy than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications. PMID:21837235
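An illustrative sketch of the described time-domain pipeline (filtering, R-peak detection, heartbeat segmentation, amplitude and time normalization, and minimum-distance matching) is given below using scipy; the filter band, thresholds and function names are assumptions, not the authors' implementation.

```python
# Illustration of the time-domain pipeline described above; parameter choices
# (band edges, peak spacing, segment length) are assumed, not the authors' values.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks, resample

def heartbeat_templates(ecg, fs, beat_len=200):
    """Return one normalized, fixed-length waveform per detected heartbeat."""
    # Bandpass filter (0.5-40 Hz) to suppress baseline wander and high-frequency noise.
    b, a = butter(4, [0.5 / (fs / 2), 40.0 / (fs / 2)], btype="band")
    clean = filtfilt(b, a, ecg)
    # R-peak detection: prominent peaks at least 0.4 s apart.
    peaks, _ = find_peaks(clean, distance=int(0.4 * fs), prominence=np.std(clean))
    beats, half = [], int(0.3 * fs)
    for p in peaks:
        if p - half < 0 or p + half >= len(clean):
            continue
        seg = clean[p - half:p + half]
        seg = (seg - seg.mean()) / (np.abs(seg).max() + 1e-12)  # amplitude normalization
        beats.append(resample(seg, beat_len))                   # time normalization
    return np.array(beats)

def identify(test_beats, enrollment):
    """Minimum mean distance between test beats and each subject's mean template."""
    distances = {subject: np.mean(np.linalg.norm(test_beats - beats.mean(axis=0), axis=1))
                 for subject, beats in enrollment.items()}
    return min(distances, key=distances.get)
```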
Thermal Model Development for Ares I-X
NASA Technical Reports Server (NTRS)
Amundsen, Ruth M.; DelCorso, Joe
2008-01-01
Thermal analysis for the Ares I-X vehicle has involved extensive thermal model integration, since thermal models of vehicle elements came from several different NASA and industry organizations. Many valuable lessons were learned in terms of model integration and validation. Modeling practices such as submodel, analysis group and symbol naming were standardized to facilitate the later model integration. Upfront coordination of coordinate systems, timelines, units, symbols and case scenarios was very helpful in minimizing integration rework. A process for model integration was developed that included pre-integration runs and basic checks of both models, and a step-by-step process to efficiently integrate one model into another. Extensive use of model logic was used to create scenarios and timelines for avionics and air flow activation. Efficient methods of model restart between case scenarios were developed. Standardization of software version and even compiler version between organizations was found to be essential. An automated method for applying aeroheating to the full integrated vehicle model, including submodels developed by other organizations, was developed.
Capsule Shimming Developments for National Ignition Facility (NIF) Hohlraum Asymmetry Experiments
Rice, Neal G.; Vu, M.; Kong, C.; ...
2017-12-20
Capsule drive in National Ignition Facility (NIF) indirect drive implosions is generated by x-ray illumination from cylindrical hohlraums. The cylindrical hohlraum geometry is axially symmetric but not spherically symmetric, causing capsule-fuel drive asymmetries. We hypothesize that fabricating capsules asymmetric in wall thickness (shimmed) may compensate for drive asymmetries and improve implosion symmetry. Simulations suggest that for high-compression implosions, Legendre mode P4 hohlraum flux asymmetries are the most detrimental to implosion performance. General Atomics has developed a diamond turning method to form a GDP capsule outer surface to a Legendre mode P4 profile. The P4 shape requires full capsule surface coverage. Thus, in order to avoid tool-lathe interference, flipping the capsule partway through the machining process is required. This flipping process risks misalignment of the capsule, causing a vertical step feature on the capsule surface. Recent trials have proven that this step feature height can be minimized to ~0.25 µm.
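A short sketch of what a Legendre mode P4 shim means is given below: the wall thickness is modulated by the fourth Legendre polynomial in cos(theta) about the capsule axis. The nominal thickness and shim amplitude are illustrative, not NIF specifications.

```python
# Illustration of a Legendre mode P4 wall-thickness profile; the nominal wall and
# shim amplitude below are assumed example values, not NIF design numbers.
import numpy as np
from scipy.special import eval_legendre

theta = np.linspace(0.0, np.pi, 181)   # polar angle from the capsule axis
nominal_wall = 170.0                    # microns, illustrative
p4_amplitude = 1.0                      # microns, illustrative shim amplitude
wall = nominal_wall + p4_amplitude * eval_legendre(4, np.cos(theta))

# P4 is symmetric about the equator, so the two machining setups (before and after
# flipping the capsule) must meet there without a vertical step.
print(f"wall at pole    : {wall[0]:.2f} um")
print(f"wall at equator : {wall[90]:.2f} um")
```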
Process for selection of oxygen-tolerant algal mutants that produce H2
Ghirardi, Maria L.; Seibert, Michael
1999-01-01
A process for selection of oxygen-tolerant, H2-producing algal mutant cells comprising: (a) growing algal cells photoautotrophically under fluorescent light to mid log phase; (b) inducing algal cells grown photoautotrophically under fluorescent light to mid log phase in step (a) anaerobically by (1) resuspending the cells in a buffer solution and making said suspension anaerobic with an inert gas; (2) incubating the suspension in the absence of light at ambient temperature; (c) treating the cells from step (b) with metronidazole, sodium azide, and added oxygen to controlled concentrations in the presence of white light; (d) washing off metronidazole and sodium azide to obtain final cell suspension; (e) plating said final cell suspension on a minimal medium and incubating in light at a temperature sufficient to enable colonies to appear; (f) counting the number of colonies to determine the percent of mutant survivors; and (g) testing survivors to identify oxygen-tolerant H2-producing mutants.
John, Susan D; Moore, Quentin T; Herrmann, Tracy; Don, Steven; Powers, Kevin; Smith, Susan N; Morrison, Greg; Charkot, Ellen; Mills, Thalia T; Rutz, Lois; Goske, Marilyn J
2013-10-01
Transition from film-screen to digital radiography requires changes in radiographic technique and workflow processes to ensure that the minimum radiation exposure is used while maintaining diagnostic image quality. Checklists have been demonstrated to be useful tools for decreasing errors and improving safety in several areas, including commercial aviation and surgical procedures. The Image Gently campaign, through a competitive grant from the FDA, developed a checklist for technologists to use during the performance of digital radiography in pediatric patients. The checklist outlines the critical steps in digital radiography workflow, with an emphasis on steps that affect radiation exposure and image quality. The checklist and its accompanying implementation manual and practice quality improvement project are open source and downloadable at www.imagegently.org. The authors describe the process of developing and testing the checklist and offer suggestions for using the checklist to minimize radiation exposure to children during radiography. Copyright © 2013 American College of Radiology. All rights reserved.
NASA Technical Reports Server (NTRS)
Lawson, Larry
2003-01-01
It was critical for our team to find a radically different way of doing business. Deciding to build the airframe out of composites was the first step, refining processes from the boat building industry was second, and the final step was choosing a supplier. Lockheed Martin built the first prototypes at our Skunk Works facility in Palmdale, California. These units were hand-built and used early prototypical tooling. They looked great but were not affordable. We had to focus on minimizing touch labor and cycle time and reducing material costs. We needed a company to produce the composite quilts we would use to avoid hand lay-ups. The company we found surprised a lot of people. We partnered with a small company outside of Boston whose primary business was making baseball bats and golf club shafts.
Towards Computing the Battle for Hearts and Minds: Lessons from the Vendée
NASA Astrophysics Data System (ADS)
Hurwitz, Roger
We analyze the conditions and processes that spawned a historic case of insurgency in the context of regime change. The analysis is an early step in the development of formal models that capture the complex dynamics of insurgencies, resistance and other conflicts that are often characterized as "battles for hearts and minds" (henceforth BHAM). The characterization, however, flattens the complexities of the conflict. It suggests bloodless engagements where victories come from public relations and demonstration projects that foster positive attitudes among a subject population. Officials conducting these battles sometimes use the label to mask their ignorance of the complexities and sometimes with the intention of minimizing their difficulties in dealing with them. Modeling can therefore be a constructive step in overcoming their impoverished thinking.
The current role of on-line extraction approaches in clinical and forensic toxicology.
Mueller, Daniel M
2014-08-01
In today's clinical and forensic toxicological laboratories, automation is of interest because of its ability to optimize processes, to reduce manual workload and handling errors and to minimize exposure to potentially infectious samples. Extraction is usually the most time-consuming step; therefore, automation of this step is reasonable. Currently, from the field of clinical and forensic toxicology, methods using the following on-line extraction techniques have been published: on-line solid-phase extraction, turbulent flow chromatography, solid-phase microextraction, microextraction by packed sorbent, single-drop microextraction and on-line desorption of dried blood spots. Most of these published methods are either single-analyte or multicomponent procedures; methods intended for systematic toxicological analysis are relatively scarce. However, the use of on-line extraction will certainly increase in the near future.
Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda
2018-03-01
A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion (D), partition (K_p,f) and convective mass transfer coefficients (h), govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were simultaneously estimated. They provide in-depth insight into the physics of a migration process. The first step was used to find the combination of D, K_p,f and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed by using the proposed analytical solution containing D, K_p,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches such as sequential and bootstrap were also performed to acquire better knowledge of the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.
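An illustrative reconstruction of the two-step estimation idea is sketched below, using a simplified finite-difference film-migration model in place of the authors' analytical solution; the synthetic data, parameter values and search grid are assumptions made for the example.

```python
# Illustration of the two-step (grid search, then OLS refinement) estimation of
# D, K and h, using a simplified numerical migration model rather than the paper's
# analytical solution. All numbers are assumed example values.
import numpy as np
from scipy.optimize import least_squares

def migrated_fraction(times, D, K, h, film_thk=0.01, food_thk=0.1, c0=1.0, nx=30):
    """Fraction of the initial migrant in the food at each time (cm, s units).

    Explicit finite differences: film sealed at x = 0, in contact with a well-mixed
    food layer at x = film_thk through a convective boundary with coefficient h;
    K = C_polymer / C_food at equilibrium.
    """
    dx = film_thk / (nx - 1)
    dt = min(0.4 * dx * dx / D, 0.25 * K * dx / max(h, 1e-30))  # explicit stability
    c, c_food, t, out = np.full(nx, c0), 0.0, 0.0, []
    for target in times:
        while t < target:
            flux = h * (c[-1] / K - c_food)              # polymer -> food flux
            lap = np.empty_like(c)
            lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
            lap[0] = 2.0 * (c[1] - c[0]) / dx**2         # sealed (zero-flux) face
            lap[-1] = 2.0 * (c[-2] - c[-1]) / dx**2 - 2.0 * flux / (D * dx)
            c = c + dt * D * lap
            c_food += dt * flux / food_thk
            t += dt
        out.append(c_food * food_thk / (c0 * film_thk))
    return np.array(out)

# Synthetic "measured" data generated from assumed true parameters.
rng = np.random.default_rng(3)
times = np.linspace(0.5, 10.0, 8) * 86400.0              # 0.5 to 10 days, in seconds
true = dict(D=2.0e-10, K=5.0, h=5.0e-8)
data = migrated_fraction(times, **true) + 0.01 * rng.standard_normal(times.size)

# Step 1: coarse grid search for the (D, K, h) combination minimizing the SSE.
grid = [(D, K, h) for D in (5e-11, 1e-10, 2e-10, 5e-10)
                  for K in (1.0, 5.0, 20.0)
                  for h in (1e-8, 5e-8, 2e-7)]
sse = lambda p: float(np.sum((migrated_fraction(times, *p) - data) ** 2))
best = min(grid, key=sse)

# Step 2: ordinary least-squares refinement (log-parameters for better scaling).
fit = least_squares(lambda q: migrated_fraction(times, *np.exp(q)) - data, np.log(best))
print("grid-search start :", best)
print("refined (D, K, h) :", np.exp(fit.x))
print("true    (D, K, h) :", tuple(true.values()))
```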
Sapra, Mahak; Pawar, Amol Ashok; Venkataraman, Chandra
2016-02-15
Surface modification of nanoparticles during aerosol or gas-phase synthesis, followed by direct transfer into liquid media, can be used to produce stable water-dispersed nanoparticle suspensions. This work investigates a single-step aerosol process for in-situ surface modification of nanoparticles. Previous studies have used a two-step sublimation-condensation mechanism following droplet drying for surface modification, while the present process uses a liquid precursor containing two solutes, a matrix lipid and a surface modifying agent. A precursor solution in chloroform, of stearic acid lipid, with 4 %w/w of surface-active, physiological molecules [1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC), 1,2-dipalmitoyl-sn-glycero-3-phospho-(1'-rac-glycerol)-sodium salt (DPPG) or 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy (polyethylene glycol) 2000]-ammonium salt (DPPE-PEG)] was processed in an aerosol reactor at low gas temperatures. The surface modified nanoparticles were characterized for morphology, surface composition and suspension properties. Spherical, surface-modified lipid nanoparticles with median mobility diameters in the range of 105-150 nm and unimodal size distributions were obtained. Fourier transform infra-red spectroscopy (FTIR) measurements confirmed the presence of surface-active molecules on external surfaces of modified lipid nanoparticles. Surface modified nanoparticles exhibited improved suspension stability, compared to that of pure lipid nanoparticles, for a period of 30 days. The lowest aggregation was observed in DPPE-PEG-modified nanoparticles, owing to combined electrostatic and steric effects. The study provides a single-step aerosol method for in-situ surface modification of nanoparticles, using minimal amounts of surface active agents, to make stable, aqueous nanoparticle suspensions. Copyright © 2015 Elsevier Inc. All rights reserved.
Milner, Phillip J; Martell, Jeffrey D; Siegelman, Rebecca L; Gygi, David; Weston, Simon C; Long, Jeffrey R
2018-01-07
Alkyldiamine-functionalized variants of the metal-organic framework Mg2(dobpdc) (dobpdc^4- = 4,4'-dioxidobiphenyl-3,3'-dicarboxylate) are promising for CO2 capture applications owing to their unique step-shaped CO2 adsorption profiles resulting from the cooperative formation of ammonium carbamate chains. Primary, secondary (1°,2°) alkylethylenediamine-appended variants are of particular interest because of their low CO2 step pressures (≤1 mbar at 40 °C), minimal adsorption/desorption hysteresis, and high thermal stability. Herein, we demonstrate that further increasing the size of the alkyl group on the secondary amine affords enhanced stability against diamine volatilization, but also leads to surprising two-step CO2 adsorption/desorption profiles. This two-step behavior likely results from steric interactions between ammonium carbamate chains induced by the asymmetrical hexagonal pores of Mg2(dobpdc) and leads to decreased CO2 working capacities and increased water co-adsorption under humid conditions. To minimize these unfavorable steric interactions, we targeted diamine-appended variants of the isoreticularly expanded framework Mg2(dotpdc) (dotpdc^4- = 4,4''-dioxido-[1,1':4',1''-terphenyl]-3,3''-dicarboxylate), reported here for the first time, and the previously reported isomeric framework Mg-IRMOF-74-II or Mg2(pc-dobpdc) (pc-dobpdc^4- = 3,3'-dioxidobiphenyl-4,4'-dicarboxylate, pc = para-carboxylate), which, in contrast to Mg2(dobpdc), possesses uniformly hexagonal pores. By minimizing the steric interactions between ammonium carbamate chains, these frameworks enable a single CO2 adsorption/desorption step in all cases, as well as decreased water co-adsorption and increased stability to diamine loss. Functionalization of Mg2(pc-dobpdc) with large diamines such as N-(n-heptyl)ethylenediamine results in optimal adsorption behavior, highlighting the advantage of tuning both the pore shape and the diamine size for the development of new adsorbents for carbon capture applications.
Acquisition and Post-Processing of Immunohistochemical Images.
Sedgewick, Jerry
2017-01-01
Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but also avoid ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
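Flatfield correction, one of the acquisition-stage procedures listed above, is in essence a per-pixel normalization against a blank reference frame. The sketch below (Python/NumPy, with illustrative array names and an optional dark frame; not taken from the chapter itself) shows one common form of the operation.

```python
import numpy as np

def flatfield_correct(raw, flat, dark=None):
    """Per-pixel flatfield correction: divide out uneven illumination.

    raw  -- image to correct (2D array)
    flat -- image of a blank, evenly lit field (2D array)
    dark -- optional dark frame (sensor offset); zeros if omitted
    """
    raw = raw.astype(float)
    flat = flat.astype(float)
    dark = np.zeros_like(raw) if dark is None else dark.astype(float)

    gain = flat - dark
    gain[gain == 0] = np.finfo(float).eps   # avoid division by zero
    corrected = (raw - dark) / gain * gain.mean()
    return np.clip(corrected, 0, None)
```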
Hydrolysis of ferric chloride in solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lussiez, G.; Beckstead, L.
1996-11-01
The Detox™ process uses concentrated ferric chloride and small amounts of catalysts to oxidize organic compounds. It is under consideration for oxidizing transuranic organic wastes. Although the solution is reused extensively, at some point it will reach the acceptable limit of radioactivity or maximum solubility of the radioisotopes. This solution could be cemented, but the volume would be increased substantially because of the poor compatibility of chlorides and cement. A process has been developed that recovers the chloride ions as HCl and either minimizes the volume of radioactive waste or permits recycling of the radioactive chlorides. The process involves a two-step hydrolysis at atmospheric pressure, or preferably under a slight vacuum, and relatively low temperature, about 200 °C. During the first step of the process, hydrolysis occurs according to the reaction below: FeCl3 (liquid) + H2O → FeOCl (solid) + 2 HCl (gas). During the second step, the hot, solid iron oxychloride is sprayed with water or placed in contact with steam, and hydrolysis proceeds to the iron oxide according to the following reaction: 2 FeOCl (solid) + H2O → Fe2O3 (solid) + 2 HCl (gas). The iron oxide, which contains radioisotopes, can then be disposed of by cementation or encapsulation. Alternately, these chlorides can be washed off of the solids and can then either be recycled or disposed of in some other way.
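As a worked check of the overall stoichiometry, note that each mole of FeCl3 ultimately yields three moles of HCl across the two steps (two in the first step, one more in the second). The short calculation below uses standard molar masses; the function and figures are purely illustrative.

```python
# Molar masses in g/mol (standard values)
M_FECL3 = 162.20
M_HCL = 36.46
M_FE2O3 = 159.69

def hydrolysis_products(kg_fecl3):
    """Mass balance over both hydrolysis steps:
    FeCl3 + H2O -> FeOCl + 2 HCl, then 2 FeOCl + H2O -> Fe2O3 + 2 HCl.
    Overall: 1 mol FeCl3 yields 3 mol HCl and 0.5 mol Fe2O3."""
    mol = kg_fecl3 * 1000.0 / M_FECL3
    hcl_kg = 3 * mol * M_HCL / 1000.0
    fe2o3_kg = 0.5 * mol * M_FE2O3 / 1000.0
    return hcl_kg, fe2o3_kg

hcl, oxide = hydrolysis_products(1.0)
print(f"1 kg FeCl3 -> {hcl:.2f} kg HCl recovered, {oxide:.2f} kg Fe2O3 to waste")
```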
Luo, Jing; Tian, Lingling; Luo, Lei; Yi, Hong; Wang, Fahui
2017-01-01
A recent advancement in location-allocation modeling formulates a two-step approach to a new problem of minimizing disparity of spatial accessibility. Our field work in a health care planning project in a rural county in China indicated that residents valued distance or travel time from the nearest hospital foremost and then considered quality of care including less waiting time as a secondary desirability. Based on the case study, this paper further clarifies the sequential decision-making approach, termed "two-step optimization for spatial accessibility improvement (2SO4SAI)." The first step is to find the best locations to site new facilities by emphasizing accessibility as proximity to the nearest facilities with several alternative objectives under consideration. The second step adjusts the capacities of facilities for minimal inequality in accessibility, where the measure of accessibility accounts for the match ratio of supply and demand and complex spatial interaction between them. The case study illustrates how the two-step optimization method improves both aspects of spatial accessibility for health care access in rural China.
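The second step described above is an optimization over facility capacities with an accessibility score in the objective. The sketch below is not the authors' implementation; under simple assumptions (a toy distance matrix, a gravity-weighted two-step floating catchment area score, and SciPy's Nelder-Mead minimizer redistributing a fixed total capacity) it illustrates how minimizing the variance of accessibility might be set up.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: 4 demand sites, 2 facilities (illustrative numbers only)
demand = np.array([500.0, 300.0, 800.0, 400.0])
dist = np.array([[2.0, 9.0],
                 [5.0, 6.0],
                 [8.0, 3.0],
                 [10.0, 2.0]])          # travel time, demand x facility
W = np.exp(-0.3 * dist)                 # gravity-type distance decay

def accessibility(capacity):
    """Two-step floating catchment area (2SFCA) score per demand site."""
    potential = W.T @ demand            # weighted demand reaching each facility
    ratio = capacity / potential        # step 1: supply-to-demand ratio
    return W @ ratio                    # step 2: sum reachable ratios

def inequality(capacity, total=1000.0):
    cap = np.abs(capacity)
    cap = cap / cap.sum() * total       # keep total capacity fixed
    return np.var(accessibility(cap))

res = minimize(inequality, x0=np.array([500.0, 500.0]), method="Nelder-Mead")
best = np.abs(res.x) / np.abs(res.x).sum() * 1000.0
print("capacities minimizing accessibility variance:", best.round(1))
```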
Kerosene: Contributing agent to xylene as a clearing agent in tissue processing.
Shah, Amisha Ashokkumar; Kulkarni, Dinraj; Ingale, Yashwant; Koshy, Ajit V; Bhagalia, Sanjay; Bomble, Nikhil
2017-01-01
Research methodology in oral and maxillofacial pathology has illimitable potential. Tissue processing involves many steps, one of the most important of which is "clearing," the process of replacing the dehydrant with a substance miscible with the embedding medium (paraffin wax). Xylene is one of the most common clearing agents used in the laboratory, but it is also hazardous. The main aim of this study was to substitute conventionally used xylene with a mixture of kerosene and xylene in the clearing step without altering the morphology and staining characteristics of tissue sections, thereby minimizing toxic effects and reducing cost. One hundred and twenty tissue samples were collected and randomly separated into four groups (A, B, C and D), then carried through routine tissue processing up to the clearing step; at clearing, instead of conventional xylene, mixtures of xylene and kerosene were used in four ratios ([A - K:X 50:50]; [B - K:X 70:30]; [C - absolute kerosene]; [D - absolute xylene, as control]), and the sections were examined by light microscopy using H and E staining, IHC (D2-40), and special stains (periodic acid-Schiff and Congo red). The results were subjected to statistical analysis using Fisher's exact test. Compared with the control group (D), Groups A and B were completely cleared without altering tissue morphology or cellular detail, and showed optimum embedding and better staining characteristics, whereas Group C showed poor staining characteristics with reduced cellular detail; tissues embedded in Group C presented rough, irregular surfaces and appeared shrunken. A combined mixture of xylene and kerosene as a clearing agent in the ratios of Group A (K:X 50:50) or Group B (K:X 70:30) can therefore be used without posing any health risk or compromising cellular integrity.
Anti-Legionella activity of staphylococcal hemolytic peptides.
Marchand, A; Verdon, J; Lacombe, C; Crapart, S; Héchard, Y; Berjeaud, J M
2011-05-01
A collection of various staphylococci was screened for anti-Legionella activity. Nine of the tested strains were found to secrete anti-Legionella compounds. The culture supernatants of the strains, described in the literature to produce hemolytic peptides, were successfully submitted to a two-step purification process. All the purified compounds except one corresponded to previously described hemolytic peptides and were not known for their anti-Legionella activity. By comparison of the minimal inhibitory concentrations, minimal permeabilization concentrations, decrease in the number of cultivable bacteria, hemolytic activity and selectivity, the purified peptides could be separated into two groups. The first group, with warnericin RK as its leading member, corresponds to the more hemolytic and bactericidal peptides. The peptides of the second group, represented by the PSMα from Staphylococcus epidermidis, appeared bacteriostatic and poorly hemolytic. Copyright © 2011 Elsevier Inc. All rights reserved.
New Noble Gas Studies on Popping Rocks from the Mid-Atlantic Ridge near 14°N
NASA Astrophysics Data System (ADS)
Kurz, M. D.; Curtice, J.; Jones, M.; Péron, S.; Wanless, V. D.; Mittelstaedt, E. L.; Soule, S. A.; Klein, F.; Fornari, D. J.
2017-12-01
New Popping Rocks were recovered in situ on the Mid-Atlantic Ridge (MAR) near 13.77° N, using HOV Alvin on cruise AT33-03 in 2016 on RV Atlantis. We report new helium, neon, argon, and CO2 step-crushing measurements on a subset of the glass samples, with a focus on a new procedure to collect seafloor samples with minimal exposure to air. Glassy seafloor basalts were collected in sealed containers using the Alvin mechanical arm and transported to the surface without atmospheric exposure. On the ship, the seawater was drained, the volcanic glass was transferred to stainless steel ultra-high-vacuum containers (in an oxygen-free glove box), which were then evacuated using a turbo-molecular pump and sealed for transport under vacuum. All processing was carried out under a nitrogen atmosphere. A control sample was collected from each pillow outcrop and processed normally in air. The preliminary step-crushing measurements show that the anaerobically collected samples have systematically higher 20Ne/22Ne, 21Ne/22Ne and 40Ar/36Ar than the control samples. Helium abundances and isotopes are consistent between anaerobically collected samples and control samples. These results suggest that minimizing atmospheric exposure during sample processing can significantly reduce air contamination for heavy noble gases, providing a new option for seafloor sampling. Higher vesicle abundances appear to yield a greater difference in neon and argon isotopes between the anaerobic and control samples, suggesting that atmospheric contamination is related to vesicle abundance, possibly through micro-fractures. The new data show variability in the maximum mantle neon and argon isotopic compositions, and abundance ratios, suggesting that the samples experienced variable outgassing prior to eruption, and may represent different phases of a single eruption, or multiple eruptions.
Waste Minimization Study on Pyrochemical Reprocessing Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boussier, H.; Conocar, O.; Lacquement, J.
2006-07-01
Ideally, a new pyro-process should not generate more waste, and should be at least as safe and cost-effective as the hydrometallurgical processes currently implemented at industrial scale. This paper describes the thought process, the methodology and some results obtained by process integration studies to devise potential pyro-processes and to assess their capability of achieving this challenging objective. As an example, the assessment of a process based on salt/metal reductive extraction, designed for the reprocessing of Generation IV carbide spent fuels, is developed. Salt/metal reductive extraction uses the capability of some metals, aluminum in this case, to selectively reduce actinide fluorides previously dissolved in a fluoride salt bath. The reduced actinides enter the metal phase, from which they are subsequently recovered; the fission products remain in the salt phase. In fact, the process is not so simple, as it requires upstream and downstream subsidiary steps. All these process steps generate secondary waste flows representing sources of actinide leakage and/or FP discharge. In aqueous processes the main solvent (nitric acid solution) has a low boiling point and evaporates easily or can be removed by distillation, thereby leaving only a limited flow containing the dissolved substance to be incorporated in a confinement matrix. From the point of view of waste generation, one main handicap of molten salt processes is that the saline phase (fluoride in our case) used as solvent is of the same nature as the solutes (radionuclide fluorides) and has a quite high boiling point. It is therefore not as easy as with aqueous solutions to separate solvent and solutes in order to confine only radioactive material and limit the final waste flows. Starting from the initial block diagram devised two years ago, the paper shows how process integration studies were able to propose process fittings that reduce the variety and volume of waste flows, leading to an 'ideal' new block diagram allowing internal solvent recycling and self-eliminating reactants. This new flowsheet minimizes the quantity of inactive inlet flows that would otherwise have to be incorporated in a final waste form. The study identifies the knowledge gaps to be filled and suggests possible R&D issues to confirm or refute the feasibility of the proposed process fittings. (authors)
24 CFR 236.1001 - Displacement, relocation, and acquisition.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Displacement, relocation, and... Assistance § 236.1001 Displacement, relocation, and acquisition. (a) Minimizing displacement. Consistent with... reasonable steps to minimize the displacement of persons (households, businesses, nonprofit organizations...
24 CFR 236.1001 - Displacement, relocation, and acquisition.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Displacement, relocation, and... Assistance § 236.1001 Displacement, relocation, and acquisition. (a) Minimizing displacement. Consistent with... reasonable steps to minimize the displacement of persons (households, businesses, nonprofit organizations...
24 CFR 236.1001 - Displacement, relocation, and acquisition.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Displacement, relocation, and... Assistance § 236.1001 Displacement, relocation, and acquisition. (a) Minimizing displacement. Consistent with... reasonable steps to minimize the displacement of persons (households, businesses, nonprofit organizations...
24 CFR 236.1001 - Displacement, relocation, and acquisition.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Displacement, relocation, and... Assistance § 236.1001 Displacement, relocation, and acquisition. (a) Minimizing displacement. Consistent with... reasonable steps to minimize the displacement of persons (households, businesses, nonprofit organizations...
24 CFR 236.1001 - Displacement, relocation, and acquisition.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Displacement, relocation, and... Assistance § 236.1001 Displacement, relocation, and acquisition. (a) Minimizing displacement. Consistent with... reasonable steps to minimize the displacement of persons (households, businesses, nonprofit organizations...
NASA Astrophysics Data System (ADS)
Jansen, H V; de Boer, M J; Unnikrishnan, S; Louwerse, M C; Elwenspoek, M C
2009-03-01
An intensive study has been performed to understand and tune deep reactive ion etch (DRIE) processes for optimum results with respect to the silicon etch rate, etch profile and mask etch selectivity (in order of priority) using state-of-the-art dual power source DRIE equipment. The research compares pulsed-mode DRIE processes (e.g. Bosch technique) and mixed-mode DRIE processes (e.g. cryostat technique). In both techniques, an inhibitor is added to fluorine-based plasma to achieve directional etching, which is formed out of an oxide-forming (O2) or a fluorocarbon (FC) gas (C4F8 or CHF3). The inhibitor can be introduced together with the etch gas, which is named a mixed-mode DRIE process, or the inhibitor can be added in a time-multiplexed manner, which will be termed a pulsed-mode DRIE process. Next, the most convenient mode of operation found in this study is highlighted including some remarks to ensure proper etching (i.e. step synchronization in pulsed-mode operation and heat control of the wafer). First of all, for the fabrication of directional profiles, pulsed-mode DRIE is far easier to handle, is more robust with respect to the pattern layout and has the potential of achieving much higher mask etch selectivity, whereas in a mixed-mode the etch rate is higher and sidewall scalloping is prohibited. It is found that both pulsed-mode CHF3 and C4F8 are perfectly suited to perform high speed directional etching, although they have the drawback of leaving the FC residue at the sidewalls of etched structures. They show an identical result when the flow of CHF3 is roughly 30 times the flow of C4F8, and the amount of gas needed for a comparable result decreases rapidly while lowering the temperature from room down to cryogenic (and increasing the etch rate). Moreover, lowering the temperature lowers the mask erosion rate substantially (and so the mask selectivity improves). The pulsed-mode O2 is FC-free but shows only tolerable anisotropic results at -120 °C. The downside of needing liquid nitrogen to perform cryogenic etching can be improved by using a new approach in which both the pulsed and mixed modes are combined into the so-called puffed mode. Alternatively, the use of tetra-ethyl-ortho-silicate (TEOS) as a silicon oxide precursor is proposed to enable sufficient inhibiting strength and improved profile control up to room temperature. Pulsed-mode processing, the second important aspect, is commonly performed in a cycle using two separate steps: etch and deposition. Sometimes, a three-step cycle is adopted using a separate step to clean the bottom of etching features. This study highlights an issue, known by the authors but not discussed before in the literature: the need for proper synchronization between gas and bias pulses to explore the benefit of three steps. The transport of gas from the mass flow controller towards the wafer takes time, whereas the application of bias to the wafer is relatively instantaneous. This delay causes a problem with respect to synchronization when decreasing the step time towards a value close to the gas residence time. It is proposed to upgrade the software with a delay time module for the bias pulses to be in pace with the gas pulses. If properly designed, the delay module makes it possible to switch on the bias exactly during the arrival of the gas for the bottom removal step and so it will minimize the ionic impact because now etch and deposition steps can be performed virtually without bias. 
This will increase the mask etch selectivity and lower the heat impact significantly. Moreover, the extra bottom removal step can be performed at (also synchronized!) low pressure and therefore opens a window for improved aspect ratios. The temperature control of the wafer, a third aspect of this study, at a higher etch rate and longer etch time, needs critical attention, because it drastically limits the DRIE performance. It is stressed that the exothermic reaction (high silicon loading) and ionic impact (due to metallic masks and/or exposed silicon) are the main sources of heat that might raise the wafer temperature uncontrollably, and they show the weakness of the helium backside technique using mechanical clamping. Electrostatic clamping, an alternative technique, should minimize this problem because it is less susceptible to heat transfer when its thermal resistance and the gap of the helium backside cavity are minimized; however, it is not a subject of the current study. Because oxygen-growth-based etch processes (due to their ultra thin inhibiting layer) rely more heavily on a constant wafer temperature than fluorocarbon-based processes, oxygen etches are more affected by temperature fluctuations and drifts during the etching. The fourth outcome of this review is a phenomenological model, which explains and predicts many features with respect to loading, flow and pressure behaviour in DRIE equipment including a diffusion zone. The model is a reshape of the flow model constructed by Mogab, who studied the loading effect in plasma etching. Despite the downside of needing a cryostat, it is shown that—when selecting proper conditions—a cryogenic two-step pulsed mode can be used as a successful technique to achieve high speed and selective plasma etching with an etch rate around 25 µm min-1 (<1% silicon load) with nearly vertical walls and resist etch selectivity beyond 1000. With the model in hand, it can be predicted that the etch rate can be doubled (50 µm min-1 at an efficiency of 33% for the fluorine generation from the SF6 feed gas) by minimizing the time the free radicals need to pass the diffusion zone. It is anticipated that this residence time can be reduced sufficiently by a proper inductive coupled plasma (ICP) source design (e.g. plasma shower head and concentrator). In order to preserve the correct profile at such high etch rates, the pressure during the bottom removal step should be minimized and, therefore, the synchronized three-step pulsed mode is believed to be essential to reach such high etch rates with sufficient profile control. In order to improve the etch rate even further, the ICP power should be enhanced; the upgrading of the turbopump seems not yet to be relevant because the throttle valve in the current study had to be used to restrict the turbo efficiency. In order to have a versatile list of state-of-the-art references, it has been decided to arrange it in subjects. The categories concerning plasma physics and applications are, for example, books, reviews, general topics, fluorine-based plasmas, plasma mixtures with oxygen at room temperature, wafer heat transfer and high aspect ratio trench (HART) etching. For readers 'new' to this field, it is advisable to study at least one (but rather more than one) of the reviews concerning plasma as found in the first 30 references. In many cases, a paper can be classified into more than one category. In such cases, the paper is directed to the subject most suited for the discussion of the current review. 
For example, many papers on heat transfer also treat cryogenic conditions and all the references dealing with highly anisotropic behaviour have been directed to the category HARTs. Additional pointers could get around this problem but have the disadvantage of creating a kind of written spaghetti. I hope that the adapted organization structure will help to have a quick look at and understanding of current developments in high aspect ratio plasma etching. Enjoy reading... Henri Jansen 18 June 2008
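One practical quantity behind the gas/bias synchronization issue discussed in this review is the gas residence time, roughly τ ≈ pV/Q for chamber pressure p, volume V and throughput Q. The sketch below is only an order-of-magnitude illustration with assumed chamber parameters (35 L, 10 mTorr, 200 sccm), not a recipe from the paper.

```python
def residence_time_s(pressure_mtorr, volume_liter, flow_sccm):
    """Rough gas residence time tau = p*V / Q for a vacuum chamber.

    Throughput Q is converted from sccm to Torr*cm^3/s assuming a
    standard pressure of 760 Torr; the result is order-of-magnitude only.
    """
    p_torr = pressure_mtorr * 1e-3
    v_cm3 = volume_liter * 1e3
    q_torr_cm3_s = flow_sccm * 760.0 / 60.0
    return p_torr * v_cm3 / q_torr_cm3_s

# Assumed (illustrative) chamber: 35 L, 10 mTorr, 200 sccm SF6
tau = residence_time_s(10.0, 35.0, 200.0)
print(f"gas residence time ~ {tau * 1e3:.0f} ms")

# A bias pulse scheduled to start only once the bottom-removal gas has
# largely arrived, e.g. delayed by ~2-3 residence times after the gas switch:
bias_delay = 2.5 * tau
print(f"suggested bias delay ~ {bias_delay * 1e3:.0f} ms after the gas switch")
```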
Nicolette, C A; Healey, D; Tcherepanova, I; Whelton, P; Monesmith, T; Coombs, L; Finke, L H; Whiteside, T; Miesowicz, F
2007-09-27
Dendritic cell (DC) active immunotherapy is potentially efficacious in a broad array of malignant disease settings. However, challenges remain in optimizing DC-based therapy for maximum clinical efficacy within manufacturing processes that permit quality control and scale-up of consistent products. In this review we discuss the critical issues that must be addressed in order to optimize DC-based product design and manufacture, and highlight the DC-based platforms currently addressing these issues. Variables in DC-based product design include the type of antigenic payload used, DC maturation steps and activation processes, and functional assays. Issues to consider in development include: (a) minimizing the invasiveness of patient biological material collection; (b) minimizing handling and manipulations of tissue at the clinical site; (c) centralized product manufacturing and standardized processing and capacity for commercial-scale production; (d) rapid product release turnaround time; (e) the ability to manufacture sufficient product from limited starting material; and (f) standardized release criteria for DC phenotype and function. Improvements in the design and manufacture of DC products have resulted in a handful of promising leads currently in clinical development.
Cardador, Maria Jose; Gallego, Mercedes
2012-07-25
Chlorine solutions are usually used to sanitize fruit and vegetables in the fresh-cut industry due to their efficacy, low cost, and simple use. However, disinfection byproducts such as haloacetic acids (HAAs) can be formed during this process, which can remain on minimally processed vegetables (MPVs). These compounds are toxic and/or carcinogenic and have been associated with human health risks; therefore, the U.S. Environmental Protection Agency has set a maximum contaminant level for five HAAs at 60 μg/L in drinking water. This paper describes the first method to determine the nine HAAs that can be present in MPV samples, with static headspace coupled with gas chromatography-mass spectrometry where the leaching and derivatization of the HAAs are carried out in a single step. The proposed method is sensitive, with limits of detection between 0.1 and 2.4 μg/kg and an average relative standard deviation of ∼8%. From the samples analyzed, we can conclude that about 23% of them contain at least two HAAs (<0.4-24 μg/kg), which showed that these compounds are formed during washing and then remain on the final product.
Further Automate Planned Cluster Maintenance to Minimize System Downtime during Maintenance Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springmeyer, R.
This report documents the integration and testing of the automated update process for compute clusters in LC to minimize impact to user productivity. Description: A set of scripts will be written and deployed to further standardize cluster maintenance activities and minimize downtime during planned maintenance windows. Completion Criteria: When the scripts have been deployed and used during planned maintenance windows and a timing comparison is completed between the existing process and the new, more automated process, this milestone is complete. This milestone was completed on Aug 23, 2016 on the new CTS1 cluster called Jade when a request to upgrade the version of TOSS 3 was initiated while SWL jobs and normal user jobs were running. Jobs that were running when the update to the system began continued to run to completion. New jobs on the cluster started on the new release of TOSS 3. No system administrator action was required. Current update procedures in TOSS 2 begin by killing all user jobs. Then all diskfull nodes are updated, which can take a few hours. Only after the updates are applied are all nodes rebooted and finally put back into service. A system administrator is required for all steps. In terms of human time spent during a cluster OS update, the TOSS 3 automated procedure on Jade took 0 FTE hours. Doing the same update without the TOSS Update Tool would have required 4 FTE hours.
Sipahi, Sevgi; Sasaki, Kirsten; Miller, Charles E
2017-08-01
The purpose of this review is to understand the minimally invasive approach to the excision and repair of an isthmocele. Previous small trials and case reports have shown that the minimally invasive approach by hysteroscopy and/or laparoscopy can cure symptoms of a uterine isthmocele, including abnormal bleeding, pelvic pain and secondary infertility. A recent larger prospective study has been published that evaluates outcomes of minimally invasive isthmocele repair. Smaller studies and individual case reports echo the positive results of this larger trial. The cesarean section scar defect, also known as an isthmocele, has become an important diagnosis for women who present with abnormal uterine bleeding, pelvic pain and secondary infertility. It is important for providers to be aware of the effective surgical treatment options for the symptomatic isthmocele. A minimally invasive approach, whether it be laparoscopic or hysteroscopic, has proven to be a safe and effective option in reducing symptoms and improving fertility. VIDEO ABSTRACT: http://links.lww.com/COOG/A37.
A multi-step system for screening and localization of hard exudates in retinal images
NASA Astrophysics Data System (ADS)
Bopardikar, Ajit S.; Bhola, Vishal; Raghavendra, B. S.; Narayanan, Rangavittal
2012-03-01
The number of people affected by diabetes mellitus worldwide is increasing at an alarming rate. Monitoring of the diabetic condition and its effects on the human body is therefore of great importance. Of particular interest is diabetic retinopathy (DR), which is a result of prolonged, unchecked diabetes and affects the visual system. DR is a leading cause of blindness throughout the world. At any point in time, 25-44% of people with diabetes are afflicted by DR. Automation of the screening and monitoring process for DR is therefore essential for efficient utilization of healthcare resources and optimizing treatment of the affected individuals. Such automation would use retinal images and detect the presence of specific artifacts such as hard exudates, hemorrhages and soft exudates (that may appear in the image) to gauge the severity of DR. In this paper, we focus on the detection of hard exudates. We propose a two-step system that consists of a screening step that classifies retinal images as normal or abnormal based on the presence of hard exudates, and a detection stage that localizes these artifacts in an abnormal retinal image. The proposed screening step automatically detects the presence of hard exudates with a high sensitivity and positive predictive value (PPV). The detection/localization step uses a k-means-based clustering approach to localize hard exudates in the retinal image. Suitable feature vectors are chosen based on their ability to isolate hard exudates while minimizing false detections. The algorithm was tested on a benchmark dataset (DIARETDB1) and was seen to provide superior performance compared to existing methods. The two-step process described in this paper can be embedded in a tele-ophthalmology system to aid with speedy detection and diagnosis of the severity of DR.
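The localization stage can be prototyped with a generic k-means clustering of per-pixel features. The snippet below uses scikit-learn with a crude green-channel/brightness feature pair as a stand-in for the paper's carefully chosen feature vectors; the image size and cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def localize_bright_lesions(rgb_image, n_clusters=4):
    """Cluster pixels and return a mask of the brightest cluster.

    Hard exudates appear as bright yellowish lesions, so green-channel
    intensity plus overall brightness is used as a crude feature pair;
    a real system would use richer, carefully selected features.
    """
    img = rgb_image.astype(float) / 255.0
    h, w, _ = img.shape
    features = np.column_stack([
        img[..., 1].ravel(),            # green channel
        img.mean(axis=2).ravel(),       # overall brightness
    ])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(features)
    # Pick the cluster with the highest mean brightness as candidate exudates
    brightness = features[:, 1]
    best = max(range(n_clusters), key=lambda c: brightness[labels == c].mean())
    return (labels == best).reshape(h, w)

# Example with a random stand-in image (a real fundus image would be loaded here)
mask = localize_bright_lesions(np.random.randint(0, 255, (64, 64, 3)))
print("candidate exudate pixels:", int(mask.sum()))
```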
GENIE - Generation of computational geometry-grids for internal-external flow configurations
NASA Technical Reports Server (NTRS)
Soni, B. K.
1988-01-01
Progress realized in the development of a master geometry-grid generation code, GENIE, is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D and 3D grids are provided to illustrate the success of these methods.
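Transfinite interpolation, the blending operation at the heart of this approach, can be written compactly for a 2D block. The sketch below implements the standard (unweighted) Coons form from four boundary curves as a simplified stand-in for GENIE's weighted version; the example boundaries are arbitrary.

```python
import numpy as np

def tfi_grid(bottom, top, left, right):
    """Transfinite (Coons) interpolation of interior grid points.

    bottom, top : arrays of shape (ni, 2), boundary points along xi
    left, right : arrays of shape (nj, 2), boundary points along eta
    Corner consistency is assumed (bottom[0] == left[0], etc.).
    """
    ni, nj = len(bottom), len(left)
    xi = np.linspace(0.0, 1.0, ni)[:, None, None]
    eta = np.linspace(0.0, 1.0, nj)[None, :, None]

    grid = ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
            + (1 - xi) * left[None, :, :] + xi * right[None, :, :]
            - ((1 - xi) * (1 - eta) * bottom[0]
               + xi * (1 - eta) * bottom[-1]
               + (1 - xi) * eta * top[0]
               + xi * eta * top[-1]))
    return grid                        # shape (ni, nj, 2)

# Simple example: a unit square with a curved top boundary
s = np.linspace(0, 1, 21)
bottom = np.column_stack([s, np.zeros_like(s)])
top = np.column_stack([s, 1.0 + 0.1 * np.sin(np.pi * s)])
left = np.column_stack([np.zeros_like(s), np.linspace(0, 1, 21)])
right = np.column_stack([np.ones_like(s), np.linspace(0, 1.1, 21)])
print(tfi_grid(bottom, top, left, right).shape)
```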
Method for producing damage resistant optics
Hackel, Lloyd A.; Burnham, Alan K.; Penetrante, Bernardino M.; Brusasco, Raymond M.; Wegner, Paul J.; Hrubesh, Lawrence W.; Kozlowski, Mark R.; Feit, Michael D.
2003-01-01
The present invention provides a system that mitigates the growth of surface damage in an optic. Damage to the optic is minimally initiated. In an embodiment of the invention, damage sites in the optic are initiated, located, and then treated to stop the growth of the damage sites. The step of initiating damage sites in the optic includes a scan of the optic using a laser to initiate defects. The exact positions of the initiated sites are identified. A mitigation process is performed that locally or globally removes the cause of subsequent growth of the damaged sites.
24 CFR 582.335 - Displacement, relocation, and real property acquisition.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 3 2012-04-01 2012-04-01 false Displacement, relocation, and real....335 Displacement, relocation, and real property acquisition. (a) Minimizing displacement. Consistent... reasonable steps to minimize the displacement of persons (families, individuals, businesses, nonprofit...
24 CFR 582.335 - Displacement, relocation, and real property acquisition.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Displacement, relocation, and real....335 Displacement, relocation, and real property acquisition. (a) Minimizing displacement. Consistent... reasonable steps to minimize the displacement of persons (families, individuals, businesses, nonprofit...
24 CFR 582.335 - Displacement, relocation, and real property acquisition.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Displacement, relocation, and real....335 Displacement, relocation, and real property acquisition. (a) Minimizing displacement. Consistent... reasonable steps to minimize the displacement of persons (families, individuals, businesses, nonprofit...
24 CFR 582.335 - Displacement, relocation, and real property acquisition.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Displacement, relocation, and real....335 Displacement, relocation, and real property acquisition. (a) Minimizing displacement. Consistent... reasonable steps to minimize the displacement of persons (families, individuals, businesses, nonprofit...
24 CFR 582.335 - Displacement, relocation, and real property acquisition.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Displacement, relocation, and real....335 Displacement, relocation, and real property acquisition. (a) Minimizing displacement. Consistent... reasonable steps to minimize the displacement of persons (families, individuals, businesses, nonprofit...
NASA Astrophysics Data System (ADS)
Utama, P. S.; Saputra, E.; Khairat
2018-04-01
Palm oil mill fly ash (POMFA), a solid waste of the palm oil industry, was used as a raw material for synthesizing amorphous silica and a carbon-zeolite composite, in order to minimize palm oil industry waste. Alkaline extraction combined with sol-gel precipitation and mechanical fragmentation was applied to produce synthetic amorphous silica. The byproduct, extracted POMFA, still contained carbon and silica in significant amounts. A microwave-heated hydrothermal process was used to synthesize the carbon-zeolite composite from this byproduct. The obtained silica had a chemical composition, specific surface area and micrograph similar to commercial precipitated silica for rubber filler. The microwave-heated hydrothermal process has great potential for synthesizing the carbon-zeolite composite: it needs only one step and a shorter time compared to the conventional hydrothermal process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinick, Charles; Riccobono, Antonino, MS; Messing, Charles G., Ph.D.
Dehlsen Associates, LLC was awarded a grant by the United States Department of Energy (DOE) Golden Field Office for a project titled 'Siting Study Framework and Survey Methodology for Marine and Hydrokinetic Energy Project in Offshore Southeast Florida,' corresponding to DOE Grant Award Number DE-EE0002655 resulting from DOE funding Opportunity Announcement Number DE-FOA-0000069 for Topic Area 2, and it is referred to herein as 'the project.' The purpose of the project was to enhance the certainty of the survey requirements and regulatory review processes for the purpose of reducing the time, efforts, and costs associated with initial siting efforts of marine and hydrokinetic energy conversion facilities that may be proposed in the Atlantic Ocean offshore Southeast Florida. To secure early input from agencies, protocols were developed for collecting baseline geophysical information and benthic habitat data that can be used by project developers and regulators to make decisions early in the process of determining project location (i.e., the siting process) that avoid or minimize adverse impacts to sensitive marine benthic habitat. It is presumed that such an approach will help facilitate the licensing process for hydrokinetic and other ocean renewable energy projects within the study area and will assist in clarifying the baseline environmental data requirements described in the U.S. Department of the Interior Bureau of Ocean Energy Management, Regulation and Enforcement (formerly Minerals Management Service) final regulations on offshore renewable energy (30 Code of Federal Regulations 285, published April 29, 2009). Because projects generally seek to avoid or minimize impacts to sensitive marine habitats, it was not the intent of this project to investigate areas that did not appear suitable for the siting of ocean renewable energy projects. Rather, a two-tiered approach was designed with the first step consisting of gaining overall insight about seabed conditions offshore southeastern Florida by conducting a geophysical survey of pre-selected areas with subsequent post-processing and expert data interpretation by geophysicists and experienced marine biologists knowledgeable about the general project area. The second step sought to validate the benthic habitat types interpreted from the geophysical data by conducting benthic video and photographic field surveys of selected habitat types. The goal of this step was to determine the degree of correlation between the habitat types interpreted from the geophysical data and what actually exists on the seafloor based on the benthic video survey logs. This step included spot-checking selected habitat types rather than comprehensive evaluation of the entire area covered by the geophysical survey. It is important to note that non-invasive survey methods were used as part of this study and no devices of any kind were either temporarily or permanently attached to the seabed as part of the work conducted under this project.
Mechanochemical Symmetry Breaking in Hydra Aggregates
Mercker, Moritz; Köthe, Alexandra; Marciniak-Czochra, Anna
2015-01-01
Tissue morphogenesis comprises the self-organized creation of various patterns and shapes. Although detailed underlying mechanisms are still elusive in many cases, an increasing amount of experimental data suggests that chemical morphogen and mechanical processes are strongly coupled. Here, we develop and test a minimal model of the axis-defining step (i.e., symmetry breaking) in aggregates of the Hydra polyp. Based on previous findings, we combine osmotically driven shape oscillations with tissue mechanics and morphogen dynamics. We show that the model incorporating a simple feedback loop between morphogen patterning and tissue stretch reproduces a wide range of experimental data. Finally, we compare different hypothetical morphogen patterning mechanisms (Turing, tissue-curvature, and self-organized criticality). Our results suggest the experimental investigation of bigger (i.e., multiple head) aggregates as a key step for a deeper understanding of mechanochemical symmetry breaking in Hydra. PMID:25954896
Maximizing the efficiency of multienzyme process by stoichiometry optimization.
Dvorak, Pavel; Kurumbang, Nagendra P; Bendl, Jaroslav; Brezovsky, Jan; Prokop, Zbynek; Damborsky, Jiri
2014-09-05
Multienzyme processes represent an important area of biocatalysis. Their efficiency can be enhanced by optimization of the stoichiometry of the biocatalysts. Here we present a workflow for maximizing the efficiency of a three-enzyme system catalyzing a five-step chemical conversion. Kinetic models of pathways with wild-type or engineered enzymes were built, and the enzyme stoichiometry of each pathway was optimized. Mathematical modeling and one-pot multienzyme experiments provided detailed insights into pathway dynamics, enabled the selection of a suitable engineered enzyme, and afforded high efficiency while minimizing biocatalyst loadings. Optimizing the stoichiometry in a pathway with an engineered enzyme reduced the total biocatalyst load by an impressive 56 %. Our new workflow represents a broadly applicable strategy for optimizing multienzyme processes. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
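The stoichiometry-optimization idea can be illustrated on a toy two-enzyme cascade: given Michaelis-Menten kinetics for each step, search for the split of a fixed total enzyme load that maximizes product formed in a given time. The kinetic constants, loads, and time horizon below are illustrative, not those of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

# Illustrative kinetic constants for a two-step cascade S -> I -> P
KCAT1, KM1 = 5.0, 0.5     # enzyme 1 (per s, mM)
KCAT2, KM2 = 1.0, 0.2     # enzyme 2

def product_after(t_end, e1, e2, s0=1.0):
    """Integrate the cascade and return product concentration at t_end."""
    def rhs(t, y):
        s, i, p = y
        v1 = KCAT1 * e1 * s / (KM1 + s)
        v2 = KCAT2 * e2 * i / (KM2 + i)
        return [-v1, v1 - v2, v2]
    sol = solve_ivp(rhs, (0.0, t_end), [s0, 0.0, 0.0], rtol=1e-8)
    return sol.y[2, -1]

def optimal_split(total=0.01, t_end=600.0):
    """Fraction of the total enzyme load given to enzyme 1 that maximizes P."""
    res = minimize_scalar(
        lambda f: -product_after(t_end, f * total, (1 - f) * total),
        bounds=(0.01, 0.99), method="bounded")
    return res.x

f = optimal_split()
print(f"optimal fraction of load on enzyme 1: {f:.2f}")
```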
Decision support for operations and maintenance (DSOM) system
Jarrell, Donald B [Kennewick, WA; Meador, Richard J [Richland, WA; Sisk, Daniel R [Richland, WA; Hatley, Darrel D [Kennewick, WA; Brown, Daryl R [Richland, WA; Keibel, Gary R [Richland, WA; Gowri, Krishnan [Richland, WA; Reyes-Spindola, Jorge F [Richland, WA; Adams, Kevin J [San Bruno, CA; Yates, Kenneth R [Lake Oswego, OR; Eschbach, Elizabeth J [Fort Collins, CO; Stratton, Rex C [Richland, WA
2006-03-21
A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of identifying a set of optimal operating conditions for the process, identifying and measuring parameters necessary to characterize the actual operating condition of the process, validating data generated by measuring those parameters, characterizing the actual condition of the process, identifying an optimal condition corresponding to the actual condition, comparing said optimal condition with the actual condition and identifying variances between the two, and drawing, from a set of pre-defined algorithms created using best engineering practices, an explanation of at least one likely source and at least one recommended remedial action for selected variances, and providing said explanation as an output to at least one user.
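A minimal sketch of the compare-and-diagnose step (measured condition versus a table of optimal conditions, with a lookup of pre-defined explanations) might look like the following; the parameters, tolerances, and remedial texts are purely illustrative and are not taken from the patent.

```python
# Illustrative optimal set-points and diagnostic rules (not from the patent)
OPTIMAL = {"boiler_efficiency": 0.85, "stack_temp_C": 180.0}
TOLERANCE = {"boiler_efficiency": 0.03, "stack_temp_C": 15.0}
REMEDIES = {
    "boiler_efficiency": "Likely source: fouled heat-transfer surfaces. "
                         "Recommended action: schedule tube cleaning.",
    "stack_temp_C": "Likely source: excess combustion air. "
                    "Recommended action: retune air/fuel ratio.",
}

def diagnose(measured):
    """Compare measured conditions with optimal ones and explain variances."""
    report = []
    for name, optimal in OPTIMAL.items():
        variance = measured[name] - optimal
        if abs(variance) > TOLERANCE[name]:
            report.append(f"{name}: variance {variance:+.2f}. {REMEDIES[name]}")
    return report or ["All monitored parameters within tolerance."]

for line in diagnose({"boiler_efficiency": 0.78, "stack_temp_C": 210.0}):
    print(line)
```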
Frankel, Edwin; Bakhouche, Abdelhakim; Lozano-Sánchez, Jesús; Segura-Carretero, Antonio; Fernández-Gutiérrez, Alberto
2013-06-05
This review describes the olive oil production process to obtain extra virgin olive oil (EVOO) enriched in polyphenol and byproducts generated as sources of antioxidants. EVOO is obtained exclusively by mechanical and physical processes including collecting, washing, and crushing of olives, malaxation of olive paste, centrifugation, storage, and filtration. The effect of each step is discussed to minimize losses of polyphenols from large quantities of wastes. Phenolic compounds including phenolic acids, alcohols, secoiridoids, lignans, and flavonoids are characterized in olive oil mill wastewater, olive pomace, storage byproducts, and filter cake. Different industrial pilot plant processes are developed to recover phenolic compounds from olive oil byproducts with antioxidant and bioactive properties. The technological information compiled in this review will help olive oil producers to improve EVOO quality and establish new processes to obtain valuable extracts enriched in polyphenols from byproducts with food ingredient applications.
DATA QUALITY OBJECTIVES FOR SELECTING WASTE SAMPLES FOR THE BENCH STEAM REFORMER TEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
BANNING DL
2010-08-03
This document describes the data quality objectives to select archived samples located at the 222-S Laboratory for Fluid Bed Steam Reformer testing. The type, quantity and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluid bed steam reformer (FBSR). A determination of the adequacy of the FBSR process to treat Hanford tank waste is required. The initial step in determining the adequacy of the FBSR process is to select archived waste samples from the 222-S Laboratory that will be used to test the FBSR process. Analyses of the selected samples will be required to confirm the samples meet the testing criteria.
Evaluation of Second-Level Inference in fMRI Analysis
Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs
2016-01-01
We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of three phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not take into account this variability. Second, one proceeds via inference based on parametric assumptions or via permutation-based inference. Third, we evaluate three commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with a minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with minimal cluster size yields the most stable results, followed by the familywise error rate correction. The FDR yields the most variable results, for both permutation-based inference and parametric inference. PMID:26819578
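As a reminder of how the multiplicity procedures compared above differ in stringency, the sketch below applies Bonferroni-style familywise control and Benjamini-Hochberg FDR control to the same set of simulated p-values; this is a generic illustration, not the simulation design of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_null, n_signal = 900, 100
# Simulated voxel p-values: nulls uniform, signals concentrated near zero
p = np.concatenate([rng.uniform(size=n_null),
                    rng.beta(0.5, 20.0, size=n_signal)])

alpha = 0.05
bonferroni = p < alpha / p.size                 # familywise error control

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of discoveries under BH FDR control."""
    order = np.argsort(pvals)
    ranked = pvals[order]
    thresh = q * np.arange(1, pvals.size + 1) / pvals.size
    below = np.nonzero(ranked <= thresh)[0]
    mask = np.zeros(pvals.size, dtype=bool)
    if below.size:
        mask[order[: below[-1] + 1]] = True
    return mask

fdr = benjamini_hochberg(p)
print("Bonferroni discoveries:", int(bonferroni.sum()))
print("BH-FDR discoveries:    ", int(fdr.sum()))
```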
24 CFR 576.408 - Displacement, relocation, and acquisition.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Displacement, relocation, and... § 576.408 Displacement, relocation, and acquisition. (a) Minimizing displacement. Consistent with the... assure that they have taken all reasonable steps to minimize the displacement of persons (families...
24 CFR 576.408 - Displacement, relocation, and acquisition.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 3 2012-04-01 2012-04-01 false Displacement, relocation, and... § 576.408 Displacement, relocation, and acquisition. (a) Minimizing displacement. Consistent with the... assure that they have taken all reasonable steps to minimize the displacement of persons (families...
24 CFR 576.408 - Displacement, relocation, and acquisition.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Displacement, relocation, and... § 576.408 Displacement, relocation, and acquisition. (a) Minimizing displacement. Consistent with the... assure that they have taken all reasonable steps to minimize the displacement of persons (families...
Choi, Bernard C K; Pak, Anita W P; Choi, Jerome C L; Choi, Elaine C L
2007-01-01
Health experts recommend daily step goals of 10,000 steps for adults and 12,000 steps for youths to achieve healthy active living. This article reports the findings of a Canadian family project investigating whether the recommended daily step goals are achievable in a real-life setting, and suggests ways to increase daily steps to meet the goal. The family project also provides an example to encourage more Canadians to conduct family projects on healthy living. This is a pilot feasibility study. A Canadian family was recruited for the study, with 4 volunteers (father, mother, son and daughter). Each volunteer was asked to wear a pedometer and to record daily steps for three time periods of each day during a 2-month period. Both minimal routine steps and additional steps from special non-routine activities were recorded at work, school and home. The mean number of daily steps from routine minimal daily activities for the family was 6685 steps in a day (16 hr, approximately 400 steps/hr). There was thus a mean deficit of 4315 steps per day, or approximately 30,000 steps per week, from the goal (10,000 steps for adults; 12,000 steps for youths). Special activities that were found to effectively increase the steps above the routine level include: walking at a brisk pace, grocery shopping, window shopping in a mall, going to an entertainment centre, and attending parties (such as to celebrate the holiday season and birthdays). To increase daily steps to meet the daily step goal, a new culture is recommended: "get off the chair". By definition, sitting on a chair precludes the opportunity to walk. We encourage people to get off the chair, to go shopping, and to go partying, as a practical and fun way to increase their daily steps. This paper is a call for increased physical activity to meet the daily step goal.
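The quoted deficit appears to be computed against the midpoint of the adult and youth goals (11,000 steps per day) rather than the adult goal alone; under that assumption the arithmetic checks out, as the short calculation below shows.

```python
routine_mean = 6685                     # observed routine steps per day
family_goal = (10000 + 12000) // 2      # assumed midpoint of adult/youth goals

daily_deficit = family_goal - routine_mean
print(daily_deficit, "steps/day")           # 4315
print(daily_deficit * 7, "steps/week")      # 30205, i.e. roughly 30,000
```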
Aroma recovery from roasted coffee by wet grinding.
Baggenstoss, J; Thomann, D; Perren, R; Escher, F
2010-01-01
Aroma recovery as determined by solid phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS) was compared in coffees resulting from conventional grinding processes, and from wet grinding with cold and hot water. Freshly roasted coffee as well as old, completely degassed coffee was ground in order to estimate the relationship of internal carbon dioxide pressure in freshly roasted coffee with the aroma loss during grinding. The release of volatile aroma substances during grinding was found to be related to the internal carbon dioxide pressure, and wet grinding with cold water was shown to minimize losses of aroma compounds by trapping them in water. Due to the high solubility of roasted coffee in water, the use of wet-grinding equipment is limited to processes where grinding is followed by an extraction step. Combining grinding and extraction by the use of hot water for wet grinding resulted in considerable losses of aroma compounds because of the prolonged heat impact. Therefore, a more promising two-step process involving cold wet grinding and subsequent hot extraction in a closed system was introduced. The yield of aroma compounds in the resulting coffee was substantially higher compared to conventionally ground coffee. © 2010 Institute of Food Technologists®
DebriSat Fragment Characterization System and Processing Status
NASA Technical Reports Server (NTRS)
Rivero, M.; Shiotani, B.; M. Carrasquilla; Fitz-Coy, N.; Liou, J. C.; Sorge, M.; Huynh, T.; Opiela, J.; Krisko, P.; Cowardin, H.
2016-01-01
The DebriSat project is a continuing effort sponsored by NASA and DoD to update existing break-up models using data obtained from hypervelocity impact tests performed to simulate on-orbit collisions. After the impact tests, a team at the University of Florida has been working to characterize the fragments in terms of their mass, size, shape, color and material content. The focus of the post-impact effort has been the collection of 2 mm and larger fragments resulting from the hypervelocity impact test. To date, in excess of 125K fragments have been recovered which is approximately 40K more than the 85K fragments predicted by the existing models. While the fragment collection activities continue, there has been a transition to the characterization of the recovered fragments. Since the start of the characterization effort, the focus has been on the use of automation to (i) expedite the fragment characterization process and (ii) minimize the effects of human subjectivity on the results; e.g., automated data entry processes were developed and implemented to minimize errors during transcription of the measurement data. At all steps of the process, however, there is human oversight to ensure the integrity of the data. Additionally, repeatability and reproducibility tests have been developed and implemented to ensure that the instrumentations used in the characterization process are accurate and properly calibrated.
NASA Astrophysics Data System (ADS)
Lam, Simon K. H.
2017-09-01
A promising direction to improve the sensitivity of a SQUID is to increase its junction's normal resistance value, Rn, as the SQUID modulation voltage scales linearly with Rn. As a first step toward developing highly sensitive single-layer SQUIDs, submicron-scale YBCO grain boundary step-edge junctions and SQUIDs with large Rn were fabricated and studied. The step-edge junctions were reduced to submicron scale to increase their Rn values using a focused ion beam (FIB), and their transport properties were measured from 4.3 to 77 K. The FIB-induced deposition layer proved effective in minimizing Ga ion contamination during the FIB milling process. The critical current-normal resistance product of the submicron junctions at 4.3 K was found to be 1-3 mV, comparable to the value for the same type of junction at micron scale. The submicron junction Rn value is in the range of 35-100 Ω, resulting in a large SQUID modulation voltage over a wide temperature range. This performance motivates further investigation of cryogen-free, high-field-sensitivity SQUID applications at moderately low temperatures, e.g. 40-60 K.
Minimization of power consumption during charging of superconducting accelerating cavities
NASA Astrophysics Data System (ADS)
Bhattacharyya, Anirban Krishna; Ziemann, Volker; Ruber, Roger; Goryashko, Vitaliy
2015-11-01
The radio frequency cavities used to accelerate charged particle beams need to be charged to their nominal voltage, after which the beam can be injected into them. The standard procedure for such cavity filling is to use a step charging profile. However, during the initial stages of such a filling process, a substantial amount of the total energy is wasted in reflection for superconducting cavities because of their extremely narrow bandwidth. The paper presents a novel strategy to charge cavities which reduces total energy reflection. We use variational calculus to obtain an analytical expression for the optimal charging profile. Reflected and required energies and generator peak power are compared between the charging schemes, and practical aspects (saturation, efficiency and gain characteristics) of power sources (tetrodes, IOTs and solid-state power amplifiers) are also considered and analysed.
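As a rough illustration of why a shaped charging profile wastes less energy than a step, one can model the cavity voltage as a first-order response to the generator drive and compare a step with a linear ramp of the same duration. This is a crude toy with an arbitrary time constant, where the instantaneous drive-cavity mismatch stands in for the reflected amplitude; it is not the variational optimum derived in the paper.

```python
import numpy as np

TAU = 1.0            # cavity filling time constant (arbitrary units)
T_FILL = 3.0         # charging duration
N = 30000
t = np.linspace(0.0, T_FILL, N)
dt = t[1] - t[0]

def reflected_energy(v_gen):
    """Crude toy: cavity follows dV/dt = (Vg - V)/tau and the instantaneous
    mismatch (Vg - V) is taken as the reflected amplitude."""
    v = 0.0
    e_refl = 0.0
    for vg in v_gen:
        mismatch = vg - v
        e_refl += mismatch**2 * dt
        v += mismatch / TAU * dt
    return e_refl

step = np.ones(N)                    # classic step charging to the nominal drive
ramp = np.minimum(t / T_FILL, 1.0)   # linear ramp over the same window

print(f"reflected energy, step drive: {reflected_energy(step):.3f}")
print(f"reflected energy, ramp drive: {reflected_energy(ramp):.3f}")
```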
Human error mitigation initiative (HEMI) : summary report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.
2004-11-01
Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and difficult to characterize as thorough. The proposed alternative method begins by leveraging historical data to identify systemic issues and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations, indicating systemic issues associated with operating procedures that undergo notably less development rigor relative to other task elements such as tooling and process flow. Recommended future steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.
Machine learning in motion control
NASA Technical Reports Server (NTRS)
Su, Renjeng; Kermiche, Noureddine
1989-01-01
The existing methodologies for robot programming originate primarily from robotic applications to manufacturing, where uncertainties of the robots and their task environment may be minimized by repeated off-line modeling and identification. In space applications of robots, however, a higher degree of automation is required for robot programming because of the desire to minimize human intervention. We discuss a new paradigm of robot programming based on the concept of machine learning. The goal is to let robots practice tasks by themselves, with the operational data used to automatically improve their motion performance. The underlying mathematical problem is to solve the dynamical inverse problem by iterative methods. One of the key questions is how to ensure the convergence of the iterative process. Only a few small steps have been taken toward this important approach to robot programming. We give a representative result on the convergence problem.
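As a minimal illustration of the "practice improves performance" idea sketched above, and not the authors' algorithm, the following Python snippet applies a basic iterative learning control update to a hypothetical first-order discrete plant; the plant coefficients, learning gain, and reference trajectory are all invented for illustration, and convergence here relies on the assumed gain satisfying |1 - gamma*b| < 1.

import numpy as np

# Hypothetical discrete-time plant: y[t+1] = a*y[t] + b*u[t]
a, b = 0.9, 0.5
T = 50
ref = np.sin(np.linspace(0, np.pi, T + 1))   # desired trajectory (illustrative)

def run_trial(u):
    """Simulate one practice trial and return the output trajectory."""
    y = np.zeros(T + 1)
    for t in range(T):
        y[t + 1] = a * y[t] + b * u[t]
    return y

u = np.zeros(T)          # initial guess: no control effort
gamma = 0.8 / b          # assumed learning gain; keeps |1 - gamma*b| < 1

for trial in range(30):
    y = run_trial(u)
    e = ref[1:] - y[1:]                 # tracking error after each step
    u = u + gamma * e                   # learning update: correct using the previous trial
    print(f"trial {trial:2d}  RMS error = {np.sqrt(np.mean(e**2)):.2e}")

The error contracts by a factor |1 - gamma*b| per trial in this linear setting, which is the kind of convergence question the abstract refers to.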
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the minimum value of the statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
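A minimal sketch of this estimation idea, using SciPy's Powell minimizer as a stand-in for the Powell procedure cited above and synthetic failure data; the three-parameter Weibull CDF and the Anderson-Darling statistic are written out explicitly, and the starting guess and infeasibility penalty are arbitrary choices.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Synthetic failure data from a three-parameter Weibull (shape, scale, location)
true_beta, true_eta, true_gamma = 2.0, 100.0, 10.0
data = np.sort(true_gamma + true_eta * rng.weibull(true_beta, size=60))

def weibull_cdf(x, beta, eta, gamma):
    z = np.clip((x - gamma) / eta, 1e-12, None)
    return 1.0 - np.exp(-z**beta)

def anderson_darling(params, x):
    """A^2 statistic measuring the discrepancy between the EDF and the Weibull CDF."""
    beta, eta, gamma = params
    if beta <= 0 or eta <= 0 or gamma >= x[0]:
        return 1e6                       # penalty keeps the search in the feasible region
    n = len(x)
    F = np.clip(weibull_cdf(x, beta, eta, gamma), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))

res = minimize(anderson_darling, x0=[1.5, 80.0, 0.0], args=(data,), method="Powell")
print("estimated (shape, scale, location):", res.x)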
Sahni, Ekneet K; Pikal, Michael J
2017-03-01
Although several mathematical models of primary drying have been developed over the years, with significant impact on the efficiency of process design, secondary drying has so far been described only by highly complex models. The simple-to-use Excel-based model developed here is, in essence, a series of steady-state calculations of heat and mass transfer in the two halves of the dry layer: drying time is divided into a large number of time steps, and steady-state conditions are assumed to prevail within each step. Water desorption isotherm and mass transfer coefficient data are required. We use the Excel "Solver" to estimate the parameters that define the mass transfer coefficient by minimizing the deviations in water content between the calculation and a calibration drying experiment. This tool allows the user to input parameters specific to the product, process, container, and equipment. Temporal variations in average moisture content and product temperature are outputs and are compared with experiment. We observe good agreement between experiments and calculations, generally well within experimental error, for sucrose at various concentrations, temperatures, and ice nucleation temperatures. We conclude that this model can serve as an important tool for process design and manufacturing problem-solving. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
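The Solver-based fitting step can be sketched as follows, with SciPy standing in for the Excel Solver; the first-order desorption model, the Arrhenius form of the mass transfer coefficient, the equilibrium moisture value, and the calibration data are all assumptions made for illustration and are not the paper's equations.

import numpy as np
from scipy.optimize import minimize

# Hypothetical calibration run: residual water content (% w/w) vs. drying time (h)
t_obs = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
w_obs = np.array([5.0, 3.1, 2.0, 1.0, 0.6, 0.4])
T_shelf = 298.0            # K, assumed constant for this sketch
w_eq = 0.2                 # equilibrium moisture from the desorption isotherm (assumed)

def simulate(params, t_grid, w0=5.0, dt=0.01):
    """March the simple desorption model dw/dt = -k(T)*(w - w_eq) over many small time steps."""
    lnA, E = params
    k = np.exp(lnA) * np.exp(-E / (8.314 * T_shelf))   # Arrhenius-type mass transfer coefficient
    w, t, out = w0, 0.0, []
    for t_target in t_grid:
        while t < t_target:
            w += -k * (w - w_eq) * dt
            t += dt
        out.append(w)
    return np.array(out)

def sse(params):
    """Sum of squared deviations between calculated and measured water content."""
    return np.sum((simulate(params, t_obs) - w_obs) ** 2)

fit = minimize(sse, x0=[10.0, 20000.0], method="Nelder-Mead")
print("fitted ln(A), E:", fit.x, " SSE:", fit.fun)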
Overview of the production of sintered SiC optics and optical sub-assemblies
NASA Astrophysics Data System (ADS)
Williams, S.; Deny, P.
2005-08-01
The following is an overview of sintered silicon carbide (SSiC) material properties and the processing requirements for manufacturing components for advanced technology optical systems. The overview compares SSiC material properties with those of materials typically used for optics and optical structures. In addition, it reviews, step by step, the manufacturing processes required to produce optical components. The process overview illustrates the current manufacturing process and concepts for expanding the process size capability. The overview includes information on the substantial capital equipment employed in the manufacturing of SSiC. This paper also reviews common in-process inspection methodology and design rules. The design rules are used to improve production yield, minimize cost, and maximize the inherent benefits of SSiC for optical systems. Optimizing optical system designs for an SSiC manufacturing process will allow systems designers to utilize SSiC as a low-risk, cost-competitive, and fast-cycle-time technology for next-generation optical systems.
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.
Gong, Changcheng; Cai, Yufang; Zeng, Li
2018-01-01
For cone-beam computed tomography (CBCT), transversal shifts of the rotation center are inevitable and result in geometric artifacts in CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and the L0-norm of the gradient image of the reconstructed image is then used as the cost function to be minimized for the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a prescribed step size within a search range determined by the first calibration. In addition, a graphics processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to speed up the calibration process of the presented method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels between the noise-free and noisy projection images, indicating highly accurate calibration with the new method. In real data experiments, the smaller entropies of the corrected images indicated that higher-resolution images were acquired using the corrected projection data, with textures well preserved. The study results also support the feasibility of applying the proposed method to other imaging modalities.
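A rough sketch of the coarse-to-fine search over the transversal shift is shown below; reconstruct() is only a placeholder for the GPU-accelerated FDK reconstruction (not implemented here), and the gradient-magnitude threshold, search range, and step-shrinking schedule are assumed values.

import numpy as np

def l0_of_gradient(image, thresh=1e-3):
    """Approximate L0 'norm' of the gradient image: count of non-negligible gradient magnitudes."""
    gx, gy = np.gradient(image)
    return np.count_nonzero(np.hypot(gx, gy) > thresh)

def reconstruct(projections, shift):
    """Placeholder for an FDK reconstruction with the rotation center shifted by `shift` pixels."""
    raise NotImplementedError("hook up an FDK implementation here")

def calibrate_shift(projections, center, half_range, step, n_levels=3):
    """Iterative search: scan the shift, keep the best value, then refine with a smaller step."""
    best = center
    for _ in range(n_levels):
        candidates = np.arange(best - half_range, best + half_range + 1e-9, step)
        costs = [l0_of_gradient(reconstruct(projections, s)) for s in candidates]
        best = candidates[int(np.argmin(costs))]
        half_range, step = step, step / 4.0     # shrink the search window at each level
    return best

# Usage (assuming `projs` holds the projection data and the symmetry-based
# first calibration suggested a shift near 2.0 pixels):
# shift = calibrate_shift(projs, center=2.0, half_range=1.0, step=0.25)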
Schipper, Harvey
2016-02-01
Carter is a bellwether decision, an adjudication on a narrow point of law whose implications are vast across society and whose impact may not be realized for years. Coupled with Quebec's Act Respecting End-of-life Care, it has sharply changed the legal landscape with respect to actively ending a person's life. "Medically assisted dying" will be permitted under circumstances, and through processes, that have yet to be operationally defined. This decision carries with it moral assumptions, which mean that it will be difficult to reach a unifying consensus. For some, the decision and Act reflect a modern acknowledgement of individual autonomy. For others, allowing such acts is morally unspeakable. Having opened Pandora's box, the question becomes one of navigating a tolerable societal path. I believe it is possible to achieve a workable solution based on the core principle that "medically assisted dying" should be a very rarely employed last option, subject to transparent ongoing review, specifically as to why it was deemed necessary. My analysis is based on: 1. the societal conditions that have fostered demand for "assisted dying"; 2. actions in other jurisdictions; 3. Carter and Quebec Bill 52; 4. political considerations; 5. current medical practice. This leads to a series of recommendations regarding: 1. legislation and regulation; 2. the role of professional regulatory agencies; 3. medical professions education and practice; 4. public education; 5. health care delivery and palliative care. Given the burden of public opinion, and the legal steps already taken, a process for assisted dying is required. However, those legal and regulatory steps should only be considered a necessary and defensive first step in a two-stage process. The larger goal, the second step, is to drive the improvement of care and thus minimize assisted dying.
NASA Astrophysics Data System (ADS)
Bielik, M.; Vozar, J.; Hegedus, E.; Celebration Working Group
2003-04-01
This contribution reports preliminary results of first-arrival P-wave seismic tomographic processing of data measured along the profiles CEL01, CEL04, CEL05, CEL06, CEL09 and CEL11. These profiles were measured in the framework of the seismic project CELEBRATION 2000. Data acquisition and geometric parameters of the processed profiles, the principle of the tomographic processing, the particular processing steps and the program parameters are described. Characteristic data of the observation profiles (shot points, geophone points, total profile lengths, sampling, sensors and record lengths) are given. The fast program package developed by C. Zelt was applied for the tomographic velocity inversion. This process consists of several steps. The first step is the creation of a starting velocity field, for which the calculated arrival times are modelled by the method of finite differences. The next step is the minimization of the differences between the measured and modelled arrival times until the deviation is small. The equivalency problem was addressed by including a priori information in the starting velocity field; this information consists of the depth to the pre-Tertiary basement, estimates of the overlying sedimentary velocities from well logging and/or other seismic velocity data, etc. Picking of the travel time curves and enhancement of the signal-to-noise ratio of the seismograms were carried out using the PROMAX program system; after checking the reciprocal times, the picks were corrected. The final result of this processing is a reliable set of travel time curves consistent with the reciprocal times. The tomographic inversion was carried out by a so-called 3D/2D procedure that takes 3D wave propagation into account: a corridor along the profile, containing the off-line shot points and geophone points, was defined, and 3D processing was carried out within this corridor. The preliminary results indicate anomalous seismic zones within the crust and the uppermost part of the upper mantle in the area comprising the Western Carpathians, the North European platform, the Pannonian basin and the Bohemian Massif.
Synthesis of Platinum-nickel Nanowires and Optimization for Oxygen Reduction Performance.
Alia, Shaun M; Pivovar, Bryan S
2018-04-27
Platinum-nickel (Pt-Ni) nanowires were developed as fuel cell electrocatalysts and were optimized for performance and durability in the oxygen reduction reaction. Spontaneous galvanic displacement was used to deposit Pt layers onto Ni nanowire substrates. The synthesis approach produced catalysts with high specific activities and high Pt surface areas. Hydrogen annealing improved Pt and Ni mixing and specific activity. Acid leaching was used to preferentially remove Ni near the nanowire surface, and oxygen annealing was used to stabilize near-surface Ni, improving durability and minimizing Ni dissolution. These protocols detail the optimization of each post-synthesis processing step, including hydrogen annealing to 250 °C, exposure to 0.1 M nitric acid, and oxygen annealing to 175 °C. Through these steps, Pt-Ni nanowires produced activities more than an order of magnitude higher than those of Pt nanoparticles, while offering significant durability improvements. The presented protocols are based on Pt-Ni systems in the development of fuel cell catalysts. These techniques have also been used for a variety of metal combinations and can be applied to develop catalysts for a number of electrochemical processes.
Bhakta, Samir A.; Evans, Elizabeth; Benavidez, Tomás E.; Garcia, Carlos D.
2014-01-01
An important consideration for the development of biosensors is the adsorption of the biorecognition element to the surface of a substrate. As the first step in the immobilization process, adsorption affects most immobilization routes, and much attention is given to the study of this process to maximize the overall activity of the biosensor. The use of nanomaterials, specifically nanoparticles and nanostructured films, offers advantageous properties that can be fine-tuned for interaction with specific proteins to maximize activity, minimize structural changes, and enhance the catalytic step. In the biosensor field, protein-nanomaterial interactions are an emerging area of research that spans many disciplines. This review addresses recent publications about the proteins most frequently used, their most relevant characteristics, and the conditions required to adsorb them to nanomaterials. When relevant and available, subsequent analytical figures of merit are discussed for selected biosensors. The general trend in the literature supports the conclusion that the use of nanomaterials has already provided significant improvements in the analytical performance of many biosensors and that this research field will continue to grow. PMID:25892065
Interactions of double patterning technology with wafer processing, OPC and design flows
NASA Astrophysics Data System (ADS)
Lucas, Kevin; Cork, Chris; Miloslavsky, Alex; Luk-Pat, Gerry; Barnes, Levi; Hapli, John; Lewellen, John; Rollins, Greg; Wiaux, Vincent; Verhaegen, Staf
2008-03-01
Double patterning technology (DPT) is one of the main options for printing logic devices with half-pitch less than 45 nm, and flash and DRAM memory devices with half-pitch less than 40 nm. DPT methods decompose the original design intent into two individual masking layers which are each patterned using single exposures and existing 193 nm lithography tools. The results of the individual patterning layers combine to re-create the design intent pattern on the wafer. In this paper we study interactions of DPT with lithography, mask synthesis and physical design flows. Double exposure and etch patterning steps create complexity for both process and design flows. DPT decomposition is a critical software step which will be performed in physical design and also in mask synthesis. Decomposition includes cutting (splitting) of original design intent polygons into multiple polygons where required, and coloring of the resulting polygons. We evaluate the ability to meet key physical design goals such as: reduce circuit area; minimize rework; ensure DPT compliance; guarantee patterning robustness on individual layer targets; ensure symmetric wafer results; and create uniform wafer density for the individual patterning layers.
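The coloring part of DPT decomposition can be illustrated as two-coloring a conflict graph in which an edge joins any two polygons that lie closer than the single-exposure pitch limit; the plain-Python sketch below, with invented conflict pairs, returns either a valid layer assignment or None when an odd conflict cycle signals a DPT-compliance violation that would require further cutting.

from collections import deque

def two_color(n_polygons, conflicts):
    """Assign each polygon to mask A (0) or mask B (1); return None if an odd
    conflict cycle makes the layout non-DPT-compliant without further cutting."""
    adj = {i: [] for i in range(n_polygons)}
    for u, v in conflicts:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n_polygons
    for start in range(n_polygons):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]   # neighbors go on the opposite mask
                    queue.append(v)
                elif color[v] == color[u]:
                    return None               # odd cycle: a polygon must be cut (split)
    return color

# Example: polygons 0-1, 1-2 and 2-0 are mutually too close -> odd cycle, needs cutting
print(two_color(3, [(0, 1), (1, 2), (2, 0)]))   # None
print(two_color(4, [(0, 1), (1, 2), (2, 3)]))   # [0, 1, 0, 1]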
The Tunneling Method for Global Optimization in Multidimensional Scaling.
ERIC Educational Resources Information Center
Groenen, Patrick J. F.; Heiser, Willem J.
1996-01-01
A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)
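A minimal one-dimensional sketch of the alternation between local search and tunneling, using a generic multimodal function in place of the STRESS loss of multidimensional scaling; the pole exponent, random restart scheme, and tolerances are assumptions, not the authors' settings.

import numpy as np
from scipy.optimize import minimize

def f(x):
    """Generic multimodal objective standing in for STRESS."""
    x = np.atleast_1d(x)[0]
    return 0.05 * (x - 3.0) ** 2 + np.sin(3.0 * x)

def tunnel(x, x_star, f_star, lam=2.0):
    """Tunneling transform: negative only where f(x) < f(x_star) and x != x_star."""
    return (f(x) - f_star) / (np.abs(np.atleast_1d(x)[0] - x_star) ** lam + 1e-12)

x_star = minimize(f, x0=[-2.0]).x[0]            # local search step
f_star = f(x_star)
rng = np.random.default_rng(1)
for _ in range(50):                             # tunneling step: try to escape the local minimum
    res = minimize(tunnel, x0=[x_star + rng.normal(scale=3.0)], args=(x_star, f_star))
    if f(res.x[0]) < f_star - 1e-9:             # found a configuration with a lower objective
        x_star = minimize(f, x0=res.x).x[0]     # resume local search from there
        f_star = f(x_star)
print("best x, f(x):", x_star, f_star)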
Grammar and the Lexicon. Working Papers in Linguistics 16.
ERIC Educational Resources Information Center
University of Trondheim Working Papers in Linguistics, 1993
1993-01-01
In this volume, five working papers are presented. "Minimal Signs and Grammar" (Lars Hellan) proposes that a significant part of the "production" of grammar is incremental, building larger and larger constructs, with lexical objects called minimal signs as the first steps. It also suggests that the basic lexical information in…
Watson, J M; Crosby, H; Dale, V M; Tober, G; Wu, Q; Lang, J; McGovern, R; Newbury-Birch, D; Parrott, S; Bland, J M; Drummond, C; Godfrey, C; Kaner, E; Coulton, S
2013-06-01
There is clear evidence of the detrimental impact of hazardous alcohol consumption on the physical and mental health of the population. Estimates suggest that hazardous alcohol consumption annually accounts for 150,000 hospital admissions and between 15,000 and 22,000 deaths in the UK. In the older population, hazardous alcohol consumption is associated with a wide range of physical, psychological and social problems. There is evidence of an association between increased alcohol consumption and increased risk of coronary heart disease, hypertension and haemorrhagic and ischaemic stroke, increased rates of alcohol-related liver disease and increased risk of a range of cancers. Alcohol is identified as one of the three main risk factors for falls. Excessive alcohol consumption in older age can also contribute to the onset of dementia and other age-related cognitive deficits and is implicated in one-third of all suicides in the older population. To compare the clinical effectiveness and cost-effectiveness of a stepped care intervention against a minimal intervention in the treatment of older hazardous alcohol users in primary care. A multicentre, pragmatic, two-armed randomised controlled trial with an economic evaluation. General practices in primary care in England and Scotland between April 2008 and October 2010. Adults aged ≥ 55 years scoring ≥ 8 on the Alcohol Use Disorders Identification Test (10-item) (AUDIT) were eligible. In total, 529 patients were randomised in the study. The minimal intervention group received a 5-minute brief advice intervention with the practice or research nurse involving feedback of the screening results and discussion regarding the health consequences of continued hazardous alcohol consumption. Those in the stepped care arm initially received a 20-minute session of behavioural change counselling, with referral to step 2 (motivational enhancement therapy) and step 3 (local specialist alcohol services) if indicated. Sessions were recorded and rated to ensure treatment fidelity. The primary outcome was average drinks per day (ADD) derived from extended AUDIT-Consumption (3-item) (AUDIT-C) at 12 months. Secondary outcomes were AUDIT-C score at 6 and 12 months; alcohol-related problems assessed using the Drinking Problems Index (DPI) at 6 and 12 months; health-related quality of life assessed using the Short Form Questionnaire-12 items (SF-12) at 6 and 12 months; ADD at 6 months; quality-adjusted life-years (QALYs) (for cost-utility analysis derived from European Quality of Life-5 Dimensions); and health and social care resource use associated with the two groups. Both groups reduced alcohol consumption between baseline and 12 months. The difference between groups in log-transformed ADD at 12 months was very small, at 0.025 [95% confidence interval (CI) -0.060 to 0.119], and not statistically significant. At month 6 the stepped care group had a lower ADD, but again the difference was not statistically significant. At months 6 and 12, the stepped care group had a lower DPI score, but this difference was not statistically significant at the 5% level. The stepped care group had a lower SF-12 mental component score and lower physical component score at month 6 and month 12, but these differences were not statistically significant at the 5% level.
The overall average cost per patient, taking into account health and social care resource use, was £488 [standard deviation (SD) £826] in the stepped care group and £482 (SD £826) in the minimal intervention group at month 6. The mean QALY gains were slightly greater in the stepped care group than in the minimal intervention group, with a mean difference of 0.0058 (95% CI -0.0018 to 0.0133), generating an incremental cost-effectiveness ratio (ICER) of £1100 per QALY gained. At month 12, participants in the stepped care group incurred fewer costs, with a mean difference of -£194 (95% CI -£585 to £198), and had gained 0.0117 more QALYs (95% CI -0.0084 to 0.0318) than the control group. Therefore, from an economic perspective the minimal intervention was dominated by stepped care but, as would be expected given the effectiveness results, the difference was small and not statistically significant. Stepped care does not confer an advantage over a 5-minute brief (minimal) intervention in terms of reduction in alcohol consumption at 12 months post intervention. This trial is registered as ISRCTN52557360. This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 17, No. 25. See the HTA programme website for further project information.
Zaheer, Khalid; Humayoun Akhtar, M
2017-04-13
Isoflavones (genistein, daidzein, and glycitein) are bioactive compounds with mildly estrogenic properties, often referred to as phytoestrogens. They are present in significant quantities (up to 4-5 mg·g-1 on a dry basis) in legumes, mainly soybeans, green beans and mung beans. In the raw grains they are present mostly as glycosides, which are poorly absorbed on consumption. Thus, soybeans are processed into various food products to improve digestibility, taste and the bioavailability of nutrients and bioactives. The main processing steps include steaming, cooking, roasting and microbial fermentation, which destroy protease inhibitors, cleave the glycoside bond to yield the absorbable aglycone in processed soy products such as miso, natto, soy milk and tofu, and increase shelf life. Processed soy food products have been an integral part of regular diets in many Asia-Pacific countries for centuries, e.g. China, Japan and Korea. However, in the last two decades, there have been concerted efforts to introduce soy products into Western diets for their health benefits, with some success. Isoflavones have been hailed as natural components that may help prevent some major prevailing health concerns. Consumption of soy products has been linked to reductions in the incidence or severity of chronic diseases such as cardiovascular disease, breast and prostate cancers, menopausal symptoms and bone loss. Overall, consuming moderate amounts of traditionally prepared and minimally processed soy foods may offer modest health benefits while minimizing the potential for any adverse health effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Geoffrey Wayne
2016-03-16
This document identifies the scope and some general procedural steps for performing Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing. This Test Plan describes the requirements, responsibilities, and process for preparing and testing a range of chemical surrogates intended to mimic the energetic response of waste created during processing of legacy nitrate salts. The surrogates developed are expected to bound the thermal and mechanical sensitivity of such waste, allowing for the development of process parameters required to minimize the risk to workers and the public when processing this waste. Such parameters will be based on the worst-case kinetic parameters as derived from APTAC measurements as well as the development of controls to mitigate sensitivities that may exist due to friction, impact, and spark. This Test Plan will define the scope and technical approach for activities that implement Quality Assurance requirements relevant to formulation and testing.
Oligosaccharide formation during commercial pear juice processing.
Willems, Jamie L; Low, Nicholas H
2016-08-01
The effect of enzyme treatment and processing on the oligosaccharide profile of commercial pear juice samples was examined by high performance anion exchange chromatography with pulsed amperometric detection and capillary gas chromatography with flame ionization detection. Industrial samples representing the major stages of processing, produced with various commercial enzyme preparations, were studied. Through the use of commercially available standards and laboratory-scale enzymatic hydrolysis of pectin, starch and xyloglucan, it was determined that galacturonic acid oligomers, glucose oligomers (e.g., maltose and cellotriose) and isoprimeverose are formed during pear juice production. It was found that the majority of polysaccharide hydrolysis and oligosaccharide formation occurred during enzymatic treatment at the pear mashing stage and that the remaining processing steps had minimal impact on the carbohydrate-based chromatographic profile of pear juice. Also, all commercial enzyme preparations and conditions (time and temperature) studied produced similar carbohydrate-based chromatographic profiles. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Practical Model for Forecasting New Freshman Enrollment during the Application Period.
ERIC Educational Resources Information Center
Paulsen, Michael B.
1989-01-01
A simple and effective model for forecasting freshman enrollment during the application period is presented step by step. The model requires minimal and readily available information, uses a simple linear regression analysis on a personal computer, and provides updated monthly forecasts. (MSE)
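A minimal sketch of the kind of regression such a model uses (all numbers invented): cumulative applications received by a fixed date in past years are regressed against final fall enrollment, and the fitted line converts the current year's count into an updated forecast.

import numpy as np

# Hypothetical history: cumulative applications received by March 1 vs. final fall enrollment
apps_march = np.array([1450, 1520, 1610, 1580, 1700], dtype=float)
enrolled   = np.array([ 610,  640,  665,  655,  700], dtype=float)

slope, intercept = np.polyfit(apps_march, enrolled, deg=1)   # simple linear regression
forecast = slope * 1655 + intercept                          # this year's March 1 application count
print(f"forecast fall enrollment: {forecast:.0f}")

Repeating the fit with each month's cumulative count gives the updated monthly forecasts the abstract describes.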
A biconjugate gradient type algorithm on massively parallel architectures
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Hochbruck, Marlis
1991-01-01
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. Recently, Freund and Nachtigal have proposed a novel BCG-type approach, the quasi-minimal residual method (QMR), which overcomes the problems of BCG. Here, an implementation of QMR is presented that is based on an s-step version of the nonsymmetric look-ahead Lanczos algorithm. The main feature of the s-step Lanczos algorithm is that, in general, all inner products, except for one, can be computed in parallel at the end of each block; this is unlike the standard Lanczos process, where inner products are generated sequentially. The resulting implementation of QMR is particularly attractive on massively parallel SIMD architectures, such as the Connection Machine.
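The QMR iteration itself is available off the shelf; the sketch below uses SciPy's serial implementation (not the massively parallel s-step variant described above) on a small, invented non-symmetric sparse system.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr

n = 200
# Non-symmetric, diagonally dominant tridiagonal test matrix (illustrative only)
A = diags([-1.3 * np.ones(n - 1), 2.5 * np.ones(n), -0.7 * np.ones(n - 1)],
          offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

x, info = qmr(A, b)          # info == 0 indicates convergence
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print("info:", info, "relative residual:", residual)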
Höll, S; Haupt, M; Fischer, U H P
2013-06-20
Optical simulation software based on the ray-tracing method offers easy and fast results in imaging optics. This method can also be applied in other fields of light propagation. For short distance communications, polymer optical fibers (POFs) are gradually gaining importance. This kind of fiber offers a larger core diameter, e.g., the step-index POF features a core diameter of 980 μm. Consequently, POFs have a large number of modes (>3 million modes) in the visible range, and ray tracing can be used to simulate the propagation of light. This simulation method is applicable not only to the fiber itself but also to the key components of a complete POF network, e.g., couplers or other key elements of the transmission line. In this paper a demultiplexer designed and developed by means of ray tracing is presented. Compared to classical optical design, the requirements for an optimal design differ particularly with regard to minimizing the insertion loss (IL). The basis of the presented key element is a WDM device using a Rowland spectrometer setup. In this approach the input fiber carries multiple wavelengths, which are divided into multiple output fibers that each transmit only one wavelength. To adapt the basic setup to POF, the guidance of light in this element has to be changed fundamentally. Here, a monolithic approach is presented with a blazed grating using an aspheric mirror to minimize most of the aberrations. In the simulations the POF is represented by an area light source, while the grating is analyzed for different orders and the highest possible efficiency. In general, the element should be designed in a way that it can be produced with a mass production technology like injection molding in order to offer a reasonable price. However, designing the elements with regard to injection molding leads to some inherent challenges. The microstructure of an optical grating and the thick-walled 3D molded parts both result in high demands on the injection molding process. This also requires complex machining of the molding tool. Therefore, different experiments are done to optimize the process parameters, find the best molding material, and find a suitable machining method for the molding tool. The paper describes the development of the demultiplexer by means of ray-tracing simulations step by step. The process steps and the realized solutions for the injection molding are also described.
A method for real-time generation of augmented reality work instructions via expert movements
NASA Astrophysics Data System (ADS)
Bhattacharya, Bhaskar; Winer, Eliot
2015-03-01
Augmented Reality (AR) offers tremendous potential for a wide range of fields including entertainment, medicine, and engineering. AR allows digital models to be integrated with a real scene (typically viewed through a video camera) to provide useful information in a variety of contexts. The difficulty in authoring and modifying scenes is one of the biggest obstacles to widespread adoption of AR. 3D models must be created, textured, oriented and positioned to create the complex overlays viewed by a user. This often requires using multiple software packages in addition to performing model format conversions. In this paper, a new authoring tool is presented which uses a novel method to capture product assembly steps performed by a user with a depth+RGB camera. Through a combination of computer vision and image processing techniques, each individual step is decomposed into objects and actions. The objects are matched to those in a predetermined geometry library and the actions turned into animated assembly steps. The subsequent instruction set is then generated with minimal user input. A proof of concept is presented to establish the method's viability.
A bioreactor system for the nitrogen loop in a Controlled Ecological Life Support System
NASA Technical Reports Server (NTRS)
Saulmon, M. M.; Reardon, K. F.; Sadeh, W. Z.
1996-01-01
As space missions become longer in duration, the need to recycle waste into useful compounds rises dramatically. This problem can be addressed by the development of Controlled Ecological Life Support Systems (CELSS) (i.e., Engineered Closed/Controlled Eco-Systems (ECCES)), consisting of human and plant modules. One of the waste streams leaving the human module is urine. In addition to the reclamation of water from urine, recovery of the nitrogen is important because it is an essential nutrient for the plant module. A 3-step biological process for the recycling of nitrogenous waste (urea) is proposed. A packed-bed bioreactor system for this purpose was modeled, and the issues of reaction step segregation, reactor type and volume, support particle size, and pressure drop were addressed. Based on minimization of volume, a bioreactor system consisting of a plug flow immobilized urease reactor, a completely mixed flow immobilized cell reactor to convert ammonia to nitrite, and a plug flow immobilized cell reactor to produce nitrate from nitrite is recommended. It is apparent that this 3-step bioprocess meets the requirements for space applications.
Raijmakers, R; de Witte, T; Koekman, E; Wessels, J; Haanen, C
1986-01-01
Isopycnic density flotation centrifugation has been proven to be a suitable technique for enriching bone marrow aspirates for clonogenic cells on a small scale. We have tested a Haemonetics semicontinuous blood cell separator in order to process large volumes of bone marrow with minimal bone marrow manipulation. The efficacy of isopycnic density flotation was tested in one-step and two-step procedures. Both procedures showed a recovery of about 20% of the nucleated cells and 1-2% of the erythrocytes. The enrichment of clonogenic cells in the one-step procedure appeared superior to that of the two-step procedure, in which buffy coat cells were separated first. The recovery of clonogenic cells was 70% and 50%, respectively. The repopulation capacity of the low-density cell fraction containing the clonogenic cells was excellent after autologous reinfusion (6 cases) and allogeneic bone marrow transplantation (3 cases). Fast enrichment of large volumes of bone marrow aspirates for low-density cells containing the clonogenic cells by isopycnic density flotation centrifugation can be done safely using a Haemonetics blood cell separator.
Applications of Multi-Body Dynamical Environments: The ARTEMIS Transfer Trajectory Design
NASA Technical Reports Server (NTRS)
Folta, David C.; Woodard, Mark; Howell, Kathleen; Patterson, Chris; Schlei, Wayne
2010-01-01
The application of forces in multi-body dynamical environments to permit the transfer of spacecraft from Earth orbit to Sun-Earth weak stability regions and then return to Earth-Moon libration (L1 and L2) orbits has been successfully accomplished for the first time. This demonstrated transfer is a positive step in the realization of a design process that can be used to transfer spacecraft with minimal Delta-V expenditures. Initialized using gravity assists to overcome fuel constraints, the ARTEMIS trajectory design has successfully placed two spacecraft into Earth-Moon libration orbits by means of these applications.
NASA Astrophysics Data System (ADS)
Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David
2017-04-01
We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speed up calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control of the evolution path, suppression of departures from the truncated model subspace, and minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
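A minimal CPU-only sketch of piecewise-constant pulse optimization for a single qubit is given below; it uses finite-difference gradients and plain gradient descent rather than the automatic differentiation and GPU acceleration described above, and the Hamiltonian, target state, slice count, and step sizes are illustrative choices.

import numpy as np
from scipy.linalg import expm

# Pauli matrices and a simple control problem: steer |0> to |1> with an x-drive
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)
N, dt = 20, 0.1                      # number of control slices and slice duration

def evolve(controls):
    """Apply the piecewise-constant Hamiltonian slice by slice."""
    psi = psi0.copy()
    for c in controls:
        psi = expm(-1j * dt * (0.5 * sz + c * sx)) @ psi
    return psi

def infidelity(controls):
    return 1.0 - np.abs(np.vdot(target, evolve(controls))) ** 2

controls = 0.1 * np.ones(N)
eps, lr = 1e-6, 1.0
for it in range(100):                # plain gradient descent on the control amplitudes
    grad = np.array([(infidelity(controls + eps * np.eye(N)[k]) -
                      infidelity(controls - eps * np.eye(N)[k])) / (2 * eps)
                     for k in range(N)])
    controls -= lr * grad
print("final infidelity:", infidelity(controls))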
High-density patterned media fabrication using jet and flash imprint lithography
NASA Astrophysics Data System (ADS)
Ye, Zhengmao; Ramos, Rick; Brooks, Cynthia; Simpson, Logan; Fretwell, John; Carden, Scott; Hellebrekers, Paul; LaBrake, Dwayne; Resnick, Douglas J.; Sreenivasan, S. V.
2011-04-01
The Jet and Flash Imprint Lithography (J-FIL®) process uses drop dispensing of UV-curable resists for high resolution patterning. Several applications, including patterned media, are better and more economically served by a full-substrate patterning process, since the alignment requirements are minimal. Patterned media is particularly challenging because of the aggressive feature sizes necessary to achieve the storage densities required for manufacturing beyond the current technology of perpendicular recording. In this paper, the key process steps for the application of J-FIL to patterned media fabrication are reviewed, with special attention to substrate cleaning, vapor deposition of the adhesion layer, and imprint performance at >300 disks per hour. Also discussed are recent results for imprinting discrete track patterns at half-pitches of 24 nm and bit-patterned media patterns at densities of 1 Tb/in2.
A generalized framework for nucleosynthesis calculations
NASA Astrophysics Data System (ADS)
Sprouse, Trevor; Mumpower, Matthew; Aprahamian, Ani
2014-09-01
Simulating astrophysical events is a difficult process, requiring a detailed pairing of knowledge from both astrophysics and nuclear physics. Astrophysics guides the thermodynamic evolution of an astrophysical event. We present a nucleosynthesis framework written in Fortran that combines a thermodynamic evolution and nuclear data as inputs to evolve the abundances of nuclear species in time. Through our coding practices, we have emphasized the applicability of our framework to any astrophysical event, including those involving nuclear fission. Because these calculations are often very complicated, our framework dynamically optimizes itself based on the conditions at each time step in order to greatly reduce the total computation time. To highlight the power of this new approach, we demonstrate the use of our framework to simulate both Big Bang nucleosynthesis and r-process nucleosynthesis with speeds competitive with current solutions dedicated to either process alone.
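The structure of such a framework can be illustrated with a toy three-species network integrated along a prescribed cooling trajectory; the species, rates, and temperature dependences below are invented for illustration and bear no relation to evaluated nuclear data or to the authors' Fortran code.

import numpy as np
from scipy.integrate import solve_ivp

def temperature(t):
    """Prescribed thermodynamic trajectory (GK), supplied by the astrophysics side."""
    return 5.0 * np.exp(-t / 2.0) + 0.5

def rhs(t, y):
    """Toy network A -> B -> C with temperature-dependent rates (illustrative only)."""
    T = temperature(t)
    k1 = 1.0 * np.exp(-2.0 / T)     # A -> B
    k2 = 0.3 * np.exp(-5.0 / T)     # B -> C
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

sol = solve_ivp(rhs, t_span=(0.0, 20.0), y0=[1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-12)   # stiff solver with adaptive time steps
print("final abundances (A, B, C):", sol.y[:, -1])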
Wei Liao; Rohr, Karl; Chang-Ki Kang; Zang-Hee Cho; Worz, Stefan
2016-01-01
We propose a novel hybrid approach for automatic 3D segmentation and quantification of high-resolution 7 Tesla magnetic resonance angiography (MRA) images of the human cerebral vasculature. Our approach consists of two main steps. First, a 3D model-based approach is used to segment and quantify thick vessels and most parts of thin vessels. Second, remaining vessel gaps of the first step in low-contrast and noisy regions are completed using a 3D minimal path approach, which exploits directional information. We present two novel minimal path approaches. The first is an explicit approach based on energy minimization using probabilistic sampling, and the second is an implicit approach based on fast marching with anisotropic directional prior. We conducted an extensive evaluation with over 2300 3D synthetic images and 40 real 3D 7 Tesla MRA images. Quantitative and qualitative evaluation shows that our approach achieves superior results compared with a previous minimal path approach. Furthermore, our approach was successfully used in two clinical studies on stroke and vascular dementia.
Computer Based Procedures for Field Workers - FY16 Research Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxstrand, Johanna; Bly, Aaron
The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is in improving procedure use. A CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. The presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing time spent by the field worker to evaluate plant conditions and decisions related to the applicability of each step. This dynamic presentation of the procedure also minimizes the risk of conducting steps out of order and/or incorrectly assessed applicability of steps. This report provides a summary of the main research activities conducted in the Computer-Based Procedures for Field Workers effort since 2012. The main focus of the report is on the research activities conducted in fiscal year 2016. The activities discussed are the Nuclear Electronic Work Packages – Enterprise Requirements initiative, the development of a design guidance for CBPs (which compiles all insights gained through the years of CBP research), the facilitation of vendor studies at the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), a pilot study for how to enhance the plant design modification work process, the collection of feedback from a field evaluation study at Plant Vogtle, and the path forward to commercialize INL's CBP system.
Ladd Effio, Christopher; Hahn, Tobias; Seiler, Julia; Oelmeier, Stefan A; Asen, Iris; Silberer, Christine; Villain, Louis; Hubbuch, Jürgen
2016-01-15
Recombinant protein-based virus-like particles (VLPs) are steadily gaining in importance as innovative vaccines against cancer and infectious diseases. Multiple VLPs are currently being evaluated in clinical phases, requiring a straightforward and rational process design. To date, there is no generic platform process available for the purification of VLPs. In order to accelerate and simplify VLP downstream processing, there is a demand for novel development approaches, technologies, and purification tools. Membrane adsorbers have been identified as promising stationary phases for the processing of bionanoparticles due to their large pore sizes. In this work, we present the potential of two strategies for designing VLP processes following the basic tenet of 'quality by design': high-throughput experimentation and process modeling of an anion-exchange membrane capture step. Automated membrane screenings allowed the identification of optimal VLP binding conditions, yielding a dynamic binding capacity of 5.7 mg/mL for human B19 parvovirus-like particles derived from Spodoptera frugiperda Sf9 insect cells. A mechanistic approach was implemented for radial ion-exchange membrane chromatography using the lumped-rate model and stoichiometric displacement model for the in silico optimization of a VLP capture step. For the first time, process modeling enabled the in silico design of a selective, robust and scalable process with minimal experimental effort for a complex VLP feedstock. The optimized anion-exchange membrane chromatography process resulted in a protein purity of 81.5%, a DNA clearance of 99.2%, and a VLP recovery of 59%. Copyright © 2015 Elsevier B.V. All rights reserved.
Past, Present, and Future of Minimally Invasive Abdominal Surgery
Antoniou, George A.; Antoniou, Athanasios I.; Granderath, Frank-Alexander
2015-01-01
Laparoscopic surgery has generated a revolution in operative medicine during the past few decades. Although strongly criticized during its early years, minimization of surgical trauma and the benefits of minimization to the patient have been brought to our attention through the efforts and vision of a few pioneers in the recent history of medicine. The German gynecologist Kurt Semm (1927–2003) transformed the use of laparoscopy for diagnostic purposes into a modern therapeutic surgical concept, having performed the first laparoscopic appendectomy, inspiring Erich Mühe and many other surgeons around the world to perform a wide spectrum of procedures by minimally invasive means. Laparoscopic cholecystectomy soon became the gold standard, and various laparoscopic procedures are now preferred over open approaches, in the light of emerging evidence that demonstrates less operative stress, reduced pain, and shorter convalescence. Natural orifice transluminal endoscopic surgery (NOTES) and single-incision laparoscopic surgery (SILS) may be considered further steps toward minimization of surgical trauma, although these methods have not yet been standardized. Laparoscopic surgery with the use of a robotic platform constitutes a promising field of investigation. New technologies are to be considered under the prism of the history of surgery; they seem to be a step toward further minimization of surgical trauma, but not definite therapeutic modalities. Patient safety and medical ethics must be the cornerstone of future investigation and implementation of new techniques. PMID:26508823
Gkigkitzis, Ioannis
2013-01-01
The aim of this report is to provide a mathematical model of the mechanism for making binary fate decisions about cell death or survival, during and after Photodynamic Therapy (PDT) treatment, and to supply the logical design for this decision mechanism as an application of rate distortion theory to the biochemical processing of information by the physical system of a cell. Based on previously established systems biology models of the molecular interactions involved in the PDT processes, and regarding a cellular decision-making system as a noisy communication channel, we use rate distortion theory to design a time-dependent Blahut-Arimoto algorithm in which the input is a stimulus vector composed of the time-dependent concentrations of three PDT-related cell death signaling molecules and the output is a cell fate decision. The molecular concentrations are determined by a group of rate equations. The basic steps are: initialize the probability of the cell fate decision, compute the conditional probability distribution that minimizes the mutual information between input and output, compute the probability of the cell fate decision that minimizes the mutual information, and repeat the last two steps until the probabilities converge. Then advance to the next discrete time point and repeat the process. Based on the model from communication theory described in this work, and assuming that the activation of death signal processing occurs when any of the molecular stimulants rises above a predefined threshold (50% of the maximum concentrations), for 1800 s of treatment the cell undergoes necrosis within the first 30 minutes with probability in the range 90.0%-99.99%, and in the case of repair/survival it goes through apoptosis within 3-4 hours with probability in the range 90.00%-99.00%. Although there is no experimental validation of the model at this time, it reproduces some patterns of survival ratios observed in experimental data. Analytical modeling based on cell death signaling molecules has been shown to be an independent and useful tool for prediction of cell survival response to PDT. The model can be adjusted to provide important insights into cellular response to other treatments, such as hyperthermia, and diseases such as neurodegeneration.
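The core iteration described above follows the standard Blahut-Arimoto scheme for a rate-distortion problem; the sketch below implements that generic scheme for an invented stimulus distribution and distortion matrix at a single time point, not the PDT-specific rate equations or their time dependence.

import numpy as np

def blahut_arimoto(p_x, d, beta, tol=1e-9, max_iter=5000):
    """Rate-distortion Blahut-Arimoto: p_x is the stimulus distribution, d[x, y] the
    distortion of making decision y given stimulus x, beta the trade-off (slope) parameter."""
    n_x, n_y = d.shape
    q_y = np.full(n_y, 1.0 / n_y)                 # initialize the decision marginal
    for _ in range(max_iter):
        Q = q_y * np.exp(-beta * d)               # conditional minimizing mutual information
        Q /= Q.sum(axis=1, keepdims=True)         # normalize over decisions y for each stimulus x
        q_new = p_x @ Q                           # updated decision marginal
        if np.max(np.abs(q_new - q_y)) < tol:
            q_y = q_new
            break
        q_y = q_new
    Q = q_y * np.exp(-beta * d)
    Q /= Q.sum(axis=1, keepdims=True)
    rate = np.sum(p_x[:, None] * Q * np.log2(Q / q_y))       # mutual information (bits)
    distortion = np.sum(p_x[:, None] * Q * d)
    return Q, rate, distortion

# Two-state example: stimulus "high death signal" vs "low", decisions "die" vs "survive"
p_x = np.array([0.6, 0.4])
d = np.array([[0.0, 1.0],       # invented distortion: a wrong decision costs 1
              [1.0, 0.0]])
Q, R, D = blahut_arimoto(p_x, d, beta=3.0)
print("decision rule P(decision | stimulus):\n", Q, "\nrate:", R, "distortion:", D)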
Wen, Xu-dong; Wang, Tao; Huang, Zhu; Zhang, Hong-jian; Zhang, Bing-yin; Tang, Li-jun; Liu, Wei-hui
2017-01-01
Hepatolithiasis is the presence of calculi within the intrahepatic bile duct, specifically located proximal to the confluence of the left and right hepatic ducts. The ultimate goal of hepatolithiasis treatment is the complete removal of the stone, the correction of the associated strictures and the prevention of recurrent cholangitis. Although hepatectomy could effectively achieve the above goals, it can be restricted by the risk of insufficient residual liver volume, and has a 15.6% rate of residual hepatolithiasis. With improvements in minimally invasive surgery, post-operative cholangioscopy (POC) provides an additional option for hepatolithiasis treatment, with a higher clearance rate and fewer severe complications. POC is very safe and can be performed repeatedly until full patient benefit is achieved. During POC, three main steps are accomplished: first, the analysis of the residual hepatolithiasis distribution, indirectly by imaging methods or directly by endoscopic observation; second, the establishment of the surgical pathway to relieve the strictures; and third, the removal of the stone by a combination of different techniques such as simple basket extraction, mechanical fragmentation, electrohydraulic lithotripsy or laser lithotripsy, among others. In summary, a step-by-step strategy of POC should be put forward to standardize the procedures, especially when dealing with complicated residual hepatolithiasis. This review briefly summarizes the classification, management and complications of hepatolithiasis during the POC process. PMID:29147136
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, Douglas C.; Hart, Todd R.; Neuenschwander, Gary G.
Through the use of a metal catalyst, gasification of wet algae slurries can be accomplished with high levels of carbon conversion to gas at relatively low temperature (350 °C). In a pressurized-water environment (20 MPa), near-total conversion of the organic structure of the algae to gases has been achieved in the presence of a supported ruthenium metal catalyst. The process is essentially steam reforming, as there is no added oxidizer or reagent other than water. In addition, the gas produced is a medium-heating value gas due to the synthesis of high levels of methane, as dictated by thermodynamic equilibrium. As opposed to earlier work, biomass trace components were removed by processing steps so that they did not cause processing difficulties in the fixed catalyst bed tubular reactor system. As a result, the algae feedstocks, even those with high ash contents, were much more reliably processed. High conversions were obtained even with high slurry concentrations. Consistent catalyst operation in these short-term tests suggested good stability and minimal poisoning effects. High methane content in the product gas was noted with significant carbon dioxide captured in the aqueous byproduct in combination with alkali constituents and the ammonia byproduct derived from proteins in the algae. High conversion of algae to gas products was found with low levels of byproduct water contamination and low to moderate loss of carbon in the mineral separation step.
Plasma processing conditions substantially influence circulating microRNA biomarker levels.
Cheng, Heather H; Yi, Hye Son; Kim, Yeonju; Kroh, Evan M; Chien, Jason W; Eaton, Keith D; Goodman, Marc T; Tait, Jonathan F; Tewari, Muneesh; Pritchard, Colin C
2013-01-01
Circulating, cell-free microRNAs (miRNAs) are promising candidate biomarkers, but optimal conditions for processing blood specimens for miRNA measurement remain to be established. Our previous work showed that the majority of plasma miRNAs are likely blood cell-derived. In the course of profiling lung cancer cases versus healthy controls, we observed a broad increase in circulating miRNA levels in cases compared to controls and that higher miRNA expression correlated with higher platelet and particle counts. We therefore hypothesized that the quantity of residual platelets and microparticles remaining after plasma processing might impact miRNA measurements. To systematically investigate this, we subjected matched plasma from healthy individuals to stepwise processing with differential centrifugation and 0.22 µm filtration and performed miRNA profiling. We found a major effect on circulating miRNAs, with the majority (72%) of detectable miRNAs substantially affected by processing alone. Specifically, 10% of miRNAs showed 4-30x variation, 46% showed 30-1,000x variation, and 15% showed >1,000x variation in expression solely from processing. This was predominantly due to platelet contamination, which persisted despite using standard laboratory protocols. Importantly, we show that platelet contamination in archived samples could largely be eliminated by additional centrifugation, even in frozen samples stored for six years. To minimize confounding effects, additional steps to limit platelet contamination are necessary in circulating miRNA biomarker studies. We provide specific practical recommendations to help minimize confounding variation attributable to plasma processing and platelet contamination.
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may exhibit oscillations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to success with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
2012-01-01
Recent progress in stem cell biology, notably cell fate conversion, calls for novel theoretical understanding of cell differentiation. The existing qualitative concept of Waddington’s “epigenetic landscape” has attracted particular attention because it captures subsequent fate decision points, thus manifesting the hierarchical (“tree-like”) nature of cell fate diversification. Here, we generalized a recent work and explored such a developmental landscape for a two-gene fate decision circuit by integrating the underlying probability landscapes with different parameters (corresponding to distinct developmental stages). The change of the entropy production rate along these parameter changes indicates which parameter changes can represent a normal developmental process and which cannot. The transdifferentiation paths over the landscape under certain conditions reveal the possibility of a direct and reversible phenotypic conversion. As the intensity of noise increases, we found that the landscape becomes flatter and the dominant paths straighter, implying the importance of biological noise-processing mechanisms in development and reprogramming. We further extended the landscape of the one-step fate decision to that for two-step decisions in central nervous system (CNS) differentiation. A minimal network and dynamic model for CNS differentiation was first constructed, in which two three-gene motifs are coupled. We then implemented stochastic differential equation (SDE) simulations to validate the network and model. By integrating the two landscapes for the two switch gene pairs, we constructed the two-step developmental landscape for CNS differentiation. Our work provides new insights into cellular differentiation and important clues for better reprogramming strategies. PMID:23300518
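A minimal Euler-Maruyama simulation of a two-gene mutual-repression circuit of the kind analyzed above is sketched below; the parameter values, noise intensity, and the crude histogram-based quasi-potential are illustrative assumptions, and a landscape comparable to the paper's would require integrating many such trajectories or the underlying probability distribution directly.

import numpy as np

rng = np.random.default_rng(0)

def simulate(a=2.0, k=1.0, n=4, sigma=0.2, dt=0.01, steps=20000, x0=0.1, y0=2.0):
    """Euler-Maruyama for dx = (a/(1+y^n) - k*x) dt + sigma dW, and symmetrically for y."""
    x, y = x0, y0
    traj = np.empty((steps, 2))
    for i in range(steps):
        dW = rng.normal(scale=np.sqrt(dt), size=2)
        x += (a / (1.0 + y**n) - k * x) * dt + sigma * dW[0]
        y += (a / (1.0 + x**n) - k * y) * dt + sigma * dW[1]
        x, y = max(x, 0.0), max(y, 0.0)        # concentrations stay non-negative
        traj[i] = (x, y)
    return traj

traj = simulate()
# A crude quasi-potential estimate: U = -log P from a 2D histogram of visited states
H, xe, ye = np.histogram2d(traj[:, 0], traj[:, 1], bins=40, density=True)
U = -np.log(H + 1e-12)
print("most-visited (lowest-U) bin:", np.unravel_index(np.argmin(U), U.shape))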
Laser-based gluing of diamond-tipped saw blades
NASA Astrophysics Data System (ADS)
Hennigs, Christian; Lahdo, Rabi; Springer, André; Kaierle, Stefan; Hustedt, Michael; Brand, Helmut; Wloka, Richard; Zobel, Frank; Dültgen, Peter
2016-03-01
To process natural stone such as marble or granite, saw blades equipped with wear-resistant diamond grinding segments are used, typically joined to the blade by brazing. In case of damage or wear, they must be exchanged. Due to the large energy input during thermal loosening and subsequent brazing, the repair causes extended heat-affected zones with serious microstructure changes, resulting in shape distortions and disadvantageous stress distributions. Consequently, axial run-out deviations and cutting losses increase. In this work, a new near-infrared laser-based process chain is presented to overcome the deficits of conventional brazing-based repair of diamond-tipped steel saw blades. Thus, additional tensioning and straightening steps can be avoided. The process chain starts with thermal debonding of the worn grinding segments, using a continuous-wave laser to heat the segments gently and to exceed the adhesive's decomposition temperature. Afterwards, short-pulsed laser radiation removes remaining adhesive from the blade in order to achieve clean joining surfaces. The third step is roughening and activation of the joining surfaces, again using short-pulsed laser radiation. Finally, the grinding segments are glued onto the blade with a defined adhesive layer, using continuous-wave laser radiation. Here, the adhesive is heated to its curing temperature by irradiating the respective grinding segment, ensuring minimal thermal influence on the blade. For demonstration, a prototype unit was constructed to perform the different steps of the process chain on-site at the saw-blade user's facilities. This unit was used to re-equip a saw blade with a complete set of grinding segments. This saw blade was used successfully to cut different materials, amongst others granite.
Emery, R J; Sprau, D D; Morecook, R C
2008-11-01
Experience gained during a field training exercise with a Medical Reserve Corps unit on the screening of large groups of individuals for possible contamination with radioactive material revealed that while exercise participants were generally attentive to the proper use of protective equipment and detectors, they tended to overlook important basic risk communications aspects. For example, drill participants did not actively communicate with the persons waiting in line for screening, a step which would provide reassurance, possibly minimize apprehension, and clarify expectations. When questioned on this issue of risk communication, drill participants were often able to craft ad hoc messages, but the messages were inconsistent and likely would not have significantly helped diminish anxiety and maintain crowd control. Similar difficulties were encountered regarding messaging for persons determined to be contaminated, those departing the screening center, and those to be delivered to the media. Based on these experiences, the need for a suggested list of risk communication points was identified. To address this need, a set of risk communication templates was developed that focused on the issues likely to be encountered in a mass screening event. The points include issues such as the importance of remaining calm, steps for minimizing possible intake or uptake, considerations for those exhibiting acute injuries, expected screening wait times, the process to be followed and the information to be collected, the process to be undertaken for those exhibiting contamination, and symptoms to watch for after departure. Drill participants indicated in follow-up discussions that such pre-established risk communication templates would serve to enhance their ability to assist in times of emergency and noted the potential broader applicability of the approach for use in responses to other disaster types as well.
Improving image quality in laboratory x-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.
2017-03-01
Grating-based X-ray phase-contrast (gbPC) is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈ 5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes to the projections. Additionally, the focal spot size of the X-ray source limits resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution and improve visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: Firstly, a correction algorithm exploiting correlations of the artifacts and differential-phase data was developed and tested. Artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to an improved detection of very small sample features, thereby maximizing the setup's utility.
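The deconvolution step can be illustrated with a small, self-contained example. The sketch below blurs a synthetic projection with a Gaussian point spread function standing in for the finite focal spot (an assumption; a real setup would use a measured PSF) and then applies scikit-image's Richardson-Lucy implementation to sharpen it.

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(1)
proj = np.zeros((128, 128))                        # stand-in for one projection
proj[40:90, 30:100] = 1.0
proj[60:70, 55:75] = 2.0                           # a small internal feature

sigma_px = 2.0                                     # assumed focal-spot blur in pixels
blurred = gaussian_filter(proj, sigma_px) + 0.01 * rng.standard_normal(proj.shape)

# Normalized Gaussian PSF kernel matching the assumed blur
k = np.arange(-8, 9)
psf = np.exp(-(k[:, None] ** 2 + k[None, :] ** 2) / (2.0 * sigma_px ** 2))
psf /= psf.sum()

restored = richardson_lucy(np.clip(blurred, 0.0, None), psf, 30, clip=False)
print("max edge gradient before/after:",
      np.abs(np.diff(blurred[65])).max(), np.abs(np.diff(restored[65])).max())

Because Richardson-Lucy assumes a non-negative image, the blurred projection is clipped at zero before deconvolution; the number of iterations trades resolution gain against noise amplification.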
Smith, Brandon W; Joseph, Jacob R; Kirsch, Michael; Strasser, Mary Oakley; Smith, Jacob; Park, Paul
2017-08-01
OBJECTIVE Percutaneous pedicle screw insertion (PPSI) is a mainstay of minimally invasive spinal surgery. Traditionally, PPSI is a fluoroscopy-guided, multistep process involving traversing the pedicle with a Jamshidi needle, placement of a Kirschner wire (K-wire), placement of a soft-tissue dilator, pedicle tract tapping, and screw insertion over the K-wire. This study evaluates the accuracy and safety of PPSI with a simplified 2-step process using a navigated awl-tap followed by navigated screw insertion without use of a K-wire or fluoroscopy. METHODS Patients undergoing PPSI utilizing the K-wire-less technique were identified. Data were extracted from the electronic medical record. Complications associated with screw placement were recorded. Postoperative radiographs as well as CT were evaluated for accuracy of pedicle screw placement. RESULTS Thirty-six patients (18 male and 18 female) were included. The patients' mean age was 60.4 years (range 23.8-78.4 years), and their mean body mass index was 28.5 kg/m² (range 20.8-40.1 kg/m²). A total of 238 pedicle screws were placed. A mean of 6.6 pedicle screws (range 4-14) were placed over a mean of 2.61 levels (range 1-7). No pedicle breaches were identified on review of postoperative radiographs. In a subgroup analysis of the 25 cases (69%) in which CT scans were performed, 173 screws were assessed; 170 (98.3%) were found to be completely within the pedicle, and 3 (1.7%) demonstrated medial breaches of less than 2 mm (Grade B). There were no complications related to PPSI in this cohort. CONCLUSIONS This streamlined 2-step K-wire-less, navigated PPSI appears safe and accurate and avoids the need for radiation exposure to surgeon and staff.
Hyperspectral Imaging Using Flexible Endoscopy for Laryngeal Cancer Detection
Regeling, Bianca; Thies, Boris; Gerstner, Andreas O. H.; Westermann, Stephan; Müller, Nina A.; Bendix, Jörg; Laffers, Wiebke
2016-01-01
Hyperspectral imaging (HSI) is increasingly gaining acceptance in the medical field. Up until now, HSI has been used in conjunction with rigid endoscopy to detect cancer in vivo. The logical next step is to pair HSI with flexible endoscopy, since it improves access to hard-to-reach areas. While the flexible endoscope’s fiber optic cables provide the advantage of flexibility, they also introduce an interfering honeycomb-like pattern onto images. Due to the substantial impact this pattern has on locating cancerous tissue, it must be removed before the HS data can be further processed. In doing so, the loss of information should be minimized so that small-area variations of pixel values are not suppressed. We have developed a system that uses flexible endoscopy to record HS cubes of the larynx and designed a special filtering technique to remove the honeycomb-like pattern with minimal loss of information. We have confirmed its feasibility by comparing it to conventional filtering techniques using an objective metric and by applying unsupervised and supervised classifications to raw and pre-processed HS cubes. Compared to conventional techniques, our method successfully removes the honeycomb-like pattern and considerably improves classification performance, while preserving image details. PMID:27529255
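The paper designs its own filter; the sketch below only illustrates the general idea of suppressing a periodic fiber-bundle pattern in the Fourier domain with a smooth low-pass, applied band by band to a toy hyperspectral cube. The cutoff frequency, the roll-off, and the synthetic pattern are all assumptions.

import numpy as np

def suppress_honeycomb(band_img, cutoff_cycles_per_px=0.08, softness=0.02):
    """Smooth low-pass in the Fourier domain; the cutoff is an assumed fiber-pitch frequency."""
    fy = np.fft.fftfreq(band_img.shape[0])[:, None]
    fx = np.fft.fftfreq(band_img.shape[1])[None, :]
    r = np.hypot(fx, fy)
    # Smooth (logistic) roll-off instead of a hard cutoff to limit ringing
    mask = 1.0 / (1.0 + np.exp((r - cutoff_cycles_per_px) / softness))
    return np.real(np.fft.ifft2(np.fft.fft2(band_img) * mask))

# Toy hyperspectral cube: (rows, cols, bands) with a synthetic honeycomb-like carrier
rng = np.random.default_rng(2)
rows, cols, bands = 128, 128, 8
yy, xx = np.mgrid[0:rows, 0:cols]
honeycomb = 0.3 * (np.cos(0.9 * xx) + np.cos(0.45 * xx + 0.78 * yy) + np.cos(0.45 * xx - 0.78 * yy))
cube = rng.uniform(0.4, 0.6, (rows, cols, bands)) + honeycomb[..., None]

filtered = np.stack([suppress_honeycomb(cube[..., b]) for b in range(bands)], axis=-1)
print("band-0 variance before/after:", np.var(cube[..., 0]), np.var(filtered[..., 0]))

A smooth roll-off rather than a hard cutoff keeps ringing low, which is one way to limit the suppression of small-area pixel-value variations the abstract warns about.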
Alcohols inhibit translation to regulate morphogenesis in C. albicans
Egbe, Nkechi E.; Paget, Caroline M.; Wang, Hui; Ashe, Mark P.
2015-01-01
Many molecules are secreted into the growth media by microorganisms to modulate the metabolic and physiological processes of the organism. For instance, alcohols like butanol, ethanol and isoamyl alcohol are produced by the human pathogenic fungus, Candida albicans and induce morphological differentiation. Here we show that these same alcohols cause a rapid inhibition of protein synthesis. More specifically, the alcohols target translation initiation, a complex stage of the gene expression process. Using molecular techniques, we have identified the likely translational target of these alcohols in C. albicans as the eukaryotic translation initiation factor 2B (eIF2B). eIF2B is the guanine nucleotide exchange factor for eIF2, which supports the exchange reaction where eIF2.GDP is converted to eIF2.GTP. Even minimal regulation at this step will lead to alterations in the levels of specific proteins that may allow the exigencies of the fungus to be realised. Indeed, similar to the effects of alcohols, a minimal inhibition of protein synthesis with cycloheximide also causes an induction of filamentous growth. These results suggest a molecular basis for the effect of various alcohols on morphological differentiation in C. albicans. PMID:25843913
Hyperspectral Imaging Using Flexible Endoscopy for Laryngeal Cancer Detection.
Regeling, Bianca; Thies, Boris; Gerstner, Andreas O H; Westermann, Stephan; Müller, Nina A; Bendix, Jörg; Laffers, Wiebke
2016-08-13
Hyperspectral imaging (HSI) is increasingly gaining acceptance in the medical field. Up until now, HSI has been used in conjunction with rigid endoscopy to detect cancer in vivo. The logical next step is to pair HSI with flexible endoscopy, since it improves access to hard-to-reach areas. While the flexible endoscope's fiber optic cables provide the advantage of flexibility, they also introduce an interfering honeycomb-like pattern onto images. Due to the substantial impact this pattern has on locating cancerous tissue, it must be removed before the HS data can be further processed. In doing so, the loss of information should be minimized so that small-area variations of pixel values are not suppressed. We have developed a system that uses flexible endoscopy to record HS cubes of the larynx and designed a special filtering technique to remove the honeycomb-like pattern with minimal loss of information. We have confirmed its feasibility by comparing it to conventional filtering techniques using an objective metric and by applying unsupervised and supervised classifications to raw and pre-processed HS cubes. Compared to conventional techniques, our method successfully removes the honeycomb-like pattern and considerably improves classification performance, while preserving image details.
Feng, Haihua; Karl, William Clem; Castañon, David A
2008-05-01
In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.
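The measurement-noise part of the formulation can be sketched in isolation. Below, each range pixel is modeled as either a valid return (Gaussian about the true range) or an anomaly (uniform over the range gate), and an EM iteration estimates the mixture; in the paper this statistical model is coupled with curve evolution over the image, which is not reproduced here. The range gate, noise level, and anomaly fraction are assumed values.

import numpy as np

rng = np.random.default_rng(3)
true_range, sigma_true, gate = 55.0, 0.4, (0.0, 100.0)
z = true_range + sigma_true * rng.standard_normal(400)      # valid returns
idx = rng.random(400) < 0.2                                  # roughly 20% anomalous pixels
z[idx] = rng.uniform(gate[0], gate[1], size=idx.sum())

mu, sigma, w_valid = np.median(z), 5.0, 0.5                  # initial guesses
u_pdf = 1.0 / (gate[1] - gate[0])                            # uniform anomaly density
for _ in range(50):
    g_pdf = np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    resp = w_valid * g_pdf / (w_valid * g_pdf + (1.0 - w_valid) * u_pdf)   # E-step
    mu = np.sum(resp * z) / np.sum(resp)                                    # M-step
    sigma = np.sqrt(np.sum(resp * (z - mu) ** 2) / np.sum(resp))
    w_valid = resp.mean()

print(f"estimated range {mu:.2f}, noise sigma {sigma:.2f}, valid fraction {w_valid:.2f}")

The responsibilities resp act as per-pixel weights, so anomalous pixels are down-weighted rather than hard-thresholded when the range field is re-estimated.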
Niu, Dan; Zhao, Gang; Liu, Xiaoli; Zhou, Ping; Cao, Yunxia
2016-03-01
High-survival-rate cryopreservation of endothelial cells plays a critical role in vascular tissue engineering, while optimization of osmotic injuries is the first step toward successful cryopreservation. We designed a low-cost, easy-to-use, microfluidics-based microperfusion chamber to investigate the osmotic responses of human umbilical vein endothelial cells (HUVECs) at different temperatures, and then optimized the protocols for using cryoprotective agents (CPAs) to minimize osmotic injuries and improve processes before freezing and after thawing. The fundamental cryobiological parameters were measured using the microperfusion chamber, and then, the optimized protocols using these parameters were confirmed by survival evaluation and cell proliferation experiments. It was revealed for the first time that HUVECs have an unusually small permeability coefficient for Me2SO. Even at the concentrations well established for slow freezing of cells (1.5 M), one-step removal of CPAs for HUVECs might result in inevitable osmotic injuries, indicating that multiple-step removal is essential. Further experiments revealed that multistep removal of 1.5 M Me2SO at 25°C was the best protocol investigated, in good agreement with theory. These results should prove invaluable for optimization of cryopreservation protocols of HUVECs.
NASA Astrophysics Data System (ADS)
Steinberg, M.; Dong, Yuanji
1993-10-01
The Hynol process is proposed to meet the demand for an economical process for methanol production with reduced CO2 emission. This new process consists of three reaction steps: (1) hydrogasification of biomass, (2) steam reforming of the produced gas with additional natural gas feedstock, and (3) methanol synthesis from the hydrogen and carbon monoxide produced during the previous two steps. The H2-rich gas remaining after methanol synthesis is recycled to gasify the biomass in an energy neutral reactor so that there is no need for an expensive oxygen plant as required by commercial steam gasifiers. Recycling gas allows the methanol synthesis reactor to perform at a relatively lower pressure than conventional processes while the plant still maintains a high methanol yield. Energy recovery designed into the process minimizes heat loss and increases the process thermal efficiency. If the Hynol methanol is used as an alternative and more efficient automotive fuel, an overall 41% reduction in CO2 emission can be achieved compared to the use of conventional gasoline fuel. A preliminary economic estimate shows that the total capital investment for a Hynol plant is 40% lower than that for a conventional biomass gasification plant. The methanol production cost is $0.43/gal for a 1085 million gal/yr Hynol plant, which is competitive with current U.S. methanol and equivalent gasoline prices. Process flowsheet and simulation data using biomass and natural gas as cofeedstocks are presented. The Hynol process can convert any condensed carbonaceous material, especially municipal solid waste (MSW), to produce methanol.
NASA Technical Reports Server (NTRS)
Vosteen, Louis F.; Hadcock, Richard N.
1994-01-01
A study of past composite aircraft structures programs was conducted to determine the lessons learned during the programs. The study focused on finding major underlying principles and practices that experience showed have significant effects on the development process and should be recognized and understood by those responsible for the use of composites. Published information on programs was reviewed and interviews were conducted with personnel associated with current and past major development programs. In all, interviews were conducted with about 56 people representing 32 organizations. Most of the people interviewed have been involved in the engineering and manufacturing development of composites for the past 20 to 25 years. Although composites technology has made great advances over the past 30 years, the effective application of composites to aircraft is still a complex problem that requires experienced personnel with special knowledge. All disciplines involved in the development process must work together in real time to minimize risk and assure total product quality and performance at acceptable costs. The most successful programs have made effective use of integrated, collocated, concurrent engineering teams, and most often used well-planned, systematic development efforts wherein the design and manufacturing processes are validated in a step-by-step or 'building block' approach. Such approaches reduce program risk and are cost effective.
Urate Oxidase Purification by Salting-in Crystallization: Towards an Alternative to Chromatography
Giffard, Marion; Ferté, Natalie; Ragot, François; El Hajji, Mohamed; Castro, Bertrand; Bonneté, Françoise
2011-01-01
Background Rasburicase (Fasturtec® or Elitek®, Sanofi-Aventis), the recombinant form of urate oxidase from Aspergillus flavus, is a therapeutic enzyme used to prevent or decrease the high levels of uric acid in blood that can occur as a result of chemotherapy. It is produced by Sanofi-Aventis and currently purified via several standard steps of chromatography. This work explores the feasibility of replacing one or more chromatography steps in the downstream process by a crystallization step. It compares the efficacy of two crystallization techniques that have proven successful on pure urate oxidase, testing them on impure urate oxidase solutions. Methodology/Principal Findings Here we investigate the possibility of purifying urate oxidase directly by crystallization from the fermentation broth. Based on attractive interaction potentials which are known to drive urate oxidase crystallization, two crystallization routes are compared: a) by increased polymer concentration, which induces a depletion attraction and b) by decreased salt concentration, which induces attractive interactions via a salting-in effect. We observe that adding polymer, a very efficient way to crystallize pure urate oxidase through the depletion effect, is not an efficient way to grow crystals from impure solution. On the other hand, we show that dialysis, which decreases salt concentration through its strong salting-in effect, makes purification of urate oxidase from the fermentation broth possible. Conclusions The aim of this study is to compare purification efficacy of two crystallization methods. Our findings show that crystallization of urate oxidase from the fermentation broth provides purity comparable to what can be achieved with one chromatography step. This suggests that, in the case of urate oxidase, crystallization could be implemented not only for polishing or concentration during the last steps of purification, but also as an initial capture step, with minimal changes to the current process. PMID:21589929
Automated and unsupervised detection of malarial parasites in microscopic images.
Purwar, Yashasvi; Shah, Sirish L; Clarke, Gwen; Almugairi, Areej; Muehlenbachs, Atis
2011-12-13
Malaria is a serious infectious disease. According to the World Health Organization, it is responsible for nearly one million deaths each year. There are various techniques to diagnose malaria of which manual microscopy is considered to be the gold standard. However due to the number of steps required in manual assessment, this diagnostic method is time consuming (leading to late diagnosis) and prone to human error (leading to erroneous diagnosis), even in experienced hands. The focus of this study is to develop a robust, unsupervised and sensitive malaria screening technique with low material cost and one that has an advantage over other techniques in that it minimizes human reliance and is, therefore, more consistent in applying diagnostic criteria. A method based on digital image processing of Giemsa-stained thin smear images is developed to facilitate the diagnostic process. The diagnosis procedure is divided into two parts: enumeration and identification. The image-based method presented here is designed to automate the process of enumeration and identification, with the main advantage being its ability to carry out the diagnosis in an unsupervised manner and yet have high sensitivity, thus reducing cases of false negatives. The image-based method is tested on more than 500 images from two independent laboratories. The aim is to distinguish between positive and negative cases of malaria using thin smear blood slide images. Due to the unsupervised nature of the method, it requires minimal human intervention, thus speeding up the whole process of diagnosis. Overall sensitivity to capture cases of malaria is 100% and specificity ranges from 50-88% for all species of malaria parasites. The image-based screening method will speed up the whole process of diagnosis and is advantageous over laboratory procedures that are prone to errors and where pathological expertise is minimal. Further, this method provides a consistent and robust way of generating the parasite clearance curves.
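The enumeration half of such a pipeline can be sketched with standard operations: threshold the stained objects, clean the mask morphologically, and count connected components. The synthetic smear, the Otsu-style threshold, and the object sizes below are assumptions; the published method is considerably more elaborate.

import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(4)
img = rng.normal(0.9, 0.03, (256, 256))               # bright, unstained background
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in rng.integers(20, 236, size=(12, 2)):    # 12 dark, stained objects
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 7 ** 2] = 0.35

# Otsu-style threshold: pick the level that maximizes between-class variance
hist, edges = np.histogram(img, bins=256)
centers = 0.5 * (edges[:-1] + edges[1:])
w0 = np.cumsum(hist)
w1 = hist.sum() - w0
m0 = np.cumsum(hist * centers) / np.maximum(w0, 1)
m1 = (np.sum(hist * centers) - np.cumsum(hist * centers)) / np.maximum(w1, 1)
between = w0 * w1 * (m0 - m1) ** 2
thresh = centers[np.argmax(between[:-1])]

mask = img < thresh                                    # stained objects are darker
mask = ndi.binary_opening(mask, iterations=2)          # remove speckle
labels, count = ndi.label(mask)
print("objects counted:", count, "threshold:", round(float(thresh), 3))

Identification of infected cells would then operate on the per-object regions produced by the labeling step.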
Effects of Topography-based Subgrid Structures on Land Surface Modeling
NASA Astrophysics Data System (ADS)
Tesfa, T. K.; Ruby, L.; Brunke, M.; Thornton, P. E.; Zeng, X.; Ghan, S. J.
2017-12-01
Topography has major control on land surface processes through its influence on atmospheric forcing, soil and vegetation properties, network topology and drainage area. Consequently, accurate climate and land surface simulations in mountainous regions cannot be achieved without considering the effects of topographic spatial heterogeneity. To test a computationally less expensive hyper-resolution land surface modeling approach, we developed topography-based landunits within a hierarchical subgrid spatial structure to improve representation of land surface processes in the ACME Land Model (ALM) with minimal increase in computational demand, while improving the ability to capture the spatial heterogeneity of atmospheric forcing and land cover influenced by topography. This study focuses on evaluation of the impacts of the new spatial structures on modeling land surface processes. As a first step, we compare ALM simulations with and without subgrid topography and driven by grid cell mean atmospheric forcing to isolate the impacts of the subgrid topography on the simulated land surface states and fluxes. Recognizing that subgrid topography also has important effects on atmospheric processes that control temperature, radiation, and precipitation, methods are being developed to downscale atmospheric forcings. Hence in the second step, the impacts of the subgrid topographic structure on land surface modeling will be evaluated by including spatial downscaling of the atmospheric forcings. Preliminary results on the atmospheric downscaling and the effects of the new spatial structures on the ALM simulations will be presented.
von Kodolitsch, Yskert; Bernhardt, Alexander M.; Robinson, Peter N.; Kölbel, Tilo; Reichenspurner, Hermann; Debus, Sebastian; Detter, Christian
2015-01-01
Background It is the physicians’ task to translate evidence and guidelines into medical strategies for individual patients. Until today, however, there is no formal tool that is instrumental to perform this translation. Methods We introduce the analysis of strengths (S) and weaknesses (W) related to therapy with opportunities (O) and threats (T) related to individual patients as a tool to establish an individualized (I) medical strategy (I-SWOT). The I-SWOT matrix identifies four fundamental types of strategy. These comprise “SO” maximizing strengths and opportunities, “WT” minimizing weaknesses and threats, “WO” minimizing weaknesses and maximizing opportunities, and “ST” maximizing strengths and minimizing threats. Each distinct type of strategy may be considered for individualized medical strategies. Results We describe four steps of I-SWOT to establish an individualized medical strategy to treat aortic disease. In the first step, we define the goal of therapy and identify all evidence-based therapeutic options. In a second step, we assess strengths and weaknesses of each therapeutic option in a SW matrix form. In a third step, we assess opportunities and threats related to the individual patient, and in a final step, we use the I-SWOT matrix to establish an individualized medical strategy through matching “SW” with “OT”. As an example we present two 30-year-old patients with Marfan syndrome with identical medical history and aortic pathology. As a result of I-SWOT analysis of their individual opportunities and threats, we identified two distinct medical strategies in these patients. Conclusion I-SWOT is a formal but easy to use tool to translate medical evidence into individualized medical strategies. PMID:27069939
von Kodolitsch, Yskert; Bernhardt, Alexander M; Robinson, Peter N; Kölbel, Tilo; Reichenspurner, Hermann; Debus, Sebastian; Detter, Christian
2015-06-01
It is the physicians' task to translate evidence and guidelines into medical strategies for individual patients. Until today, however, there is no formal tool that is instrumental to perform this translation. We introduce the analysis of strengths (S) and weaknesses (W) related to therapy with opportunities (O) and threats (T) related to individual patients as a tool to establish an individualized (I) medical strategy (I-SWOT). The I-SWOT matrix identifies four fundamental types of strategy. These comprise "SO" maximizing strengths and opportunities, "WT" minimizing weaknesses and threats, "WO" minimizing weaknesses and maximizing opportunities, and "ST" maximizing strengths and minimizing threats. Each distinct type of strategy may be considered for individualized medical strategies. We describe four steps of I-SWOT to establish an individualized medical strategy to treat aortic disease. In the first step, we define the goal of therapy and identify all evidence-based therapeutic options. In a second step, we assess strengths and weaknesses of each therapeutic option in a SW matrix form. In a third step, we assess opportunities and threats related to the individual patient, and in a final step, we use the I-SWOT matrix to establish an individualized medical strategy through matching "SW" with "OT". As an example we present two 30-year-old patients with Marfan syndrome with identical medical history and aortic pathology. As a result of I-SWOT analysis of their individual opportunities and threats, we identified two distinct medical strategies in these patients. I-SWOT is a formal but easy to use tool to translate medical evidence into individualized medical strategies.
Kim, Seyoung; Park, Sukyung
2012-01-10
Humans use equal push-off and heel strike work during the double support phase to minimize the mechanical work done on the center of mass (CoM) during gait. Recently, a step-to-step transition was reported to occur over a period of time greater than that of the double support phase, which brings into question whether the energetic optimality is sensitive to the definition of the step-to-step transition. To answer this question, the ground reaction forces (GRFs) of seven normal human subjects walking at four different speeds (1.1-2.4 m/s) were measured, and the push-off and heel strike work for three differently defined step-to-step transitions were computed based on the force, work, and velocity. To examine the optimality of the work and the impulse data, a hybrid theoretical-empirical analysis is presented using a dynamic walking model that allows finite time for step-to-step transitions and incorporates the effects of gravity within this period. The changes in the work and impulse were examined parametrically across a range of speeds. The results showed that the push-off work on the CoM was well balanced by the heel strike work for all three definitions of the step-to-step transition. The impulse data were well matched by the optimal impulse predictions (R² > 0.7) that minimized the mechanical work done on the CoM during gait. The results suggest that the balance of push-off and heel strike energy is a consistent property arising from the overall gait dynamics, which implies an inherent oscillatory behavior of the CoM, possibly by spring-like leg mechanics. Copyright © 2011 Elsevier Ltd. All rights reserved.
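The core computation, integrating each limb's ground reaction force against the CoM velocity over the chosen transition interval, can be sketched as follows. The force profiles, body mass, initial CoM velocity, and double-support duration are all assumed numbers, not data from the study.

import numpy as np

m, g, dt = 70.0, 9.81, 0.001
t = np.arange(0.0, 0.12, dt)                          # assumed ~120 ms double support

# Assumed GRF profiles (fore-aft, vertical) for trailing and leading legs, in newtons
f_trail = np.stack([150.0 * np.sin(np.pi * t / 0.12),                 # forward push
                    400.0 * np.sin(np.pi * t / 0.12) + 100.0], 1)     # vertical load
f_lead = np.stack([-120.0 * np.sin(np.pi * t / 0.12),                 # braking
                   m * g - f_trail[:, 1] + 250.0 * np.sin(np.pi * t / 0.12)], 1)

# CoM velocity from Newton's second law, integrating the total GRF minus gravity
acc = (f_trail + f_lead - np.array([0.0, m * g])) / m
v = np.array([1.3, -0.15]) + np.cumsum(acc, axis=0) * dt   # assumed initial CoM velocity

# Rectangle-rule time integrals of individual-limb power (force dotted with CoM velocity)
push_off = dt * np.sum(f_trail * v)
heel_strike = dt * np.sum(f_lead * v)
print(f"push-off work {push_off:.1f} J, heel-strike work {heel_strike:.1f} J")

Changing the integration limits to each of the three transition definitions and repeating the integral is then a one-line change.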
Apparatus and processes for the mass production of photovoltaic modules
Barth, Kurt L [Ft. Collins, CO; Enzenroth, Robert A [Fort Collins, CO; Sampath, Walajabad S [Fort Collins, CO
2007-05-22
An apparatus and processes for large scale inline manufacturing of CdTe photovoltaic modules in which all steps, including rapid substrate heating, deposition of CdS, deposition of CdTe, CdCl.sub.2 treatment, and ohmic contact formation, are performed within a single vacuum boundary at modest vacuum pressures. A p+ ohmic contact region is formed by subliming a metal salt onto the CdTe layer. A back electrode is formed by way of a low cost spray process, and module scribing is performed by means of abrasive blasting or mechanical brushing through a mask. The vacuum process apparatus facilitates selective heating of substrates and films, exposure of substrates and films to vapor with minimal vapor leakage, deposition of thin films onto a substrate, and stripping thin films from a substrate. A substrate transport apparatus permits the movement of substrates into and out of vacuum during the thin film deposition processes, while preventing the collection of coatings on the substrate transport apparatus itself.
Apparatus and processes for the mass production of photovoltaic modules
Barth, Kurt L.; Enzenroth, Robert A.; Sampath, Walajabad S.
2002-07-23
An apparatus and processes for large scale inline manufacturing of CdTe photovoltaic modules in which all steps, including rapid substrate heating, deposition of CdS, deposition of CdTe, CdCl.sub.2 treatment, and ohmic contact formation, are performed within a single vacuum boundary at modest vacuum pressures. A p+ ohmic contact region is formed by subliming a metal salt onto the CdTe layer. A back electrode is formed by way of a low cost spray process, and module scribing is performed by means of abrasive blasting or mechanical brushing through a mask. The vacuum process apparatus facilitates selective heating of substrates and films, exposure of substrates and films to vapor with minimal vapor leakage, deposition of thin films onto a substrate, and stripping thin films from a substrate. A substrate transport apparatus permits the movement of substrates into and out of vacuum during the thin film deposition processes, while preventing the collection of coatings on the substrate transport apparatus itself.
Milner, Phillip J.; Martell, Jeffrey D.; Siegelman, Rebecca L.; Gygi, David; Weston, Simon C.
2017-01-01
Alkyldiamine-functionalized variants of the metal–organic framework Mg2(dobpdc) (dobpdc4– = 4,4′-dioxidobiphenyl-3,3′-dicarboxylate) are promising for CO2 capture applications owing to their unique step-shaped CO2 adsorption profiles resulting from the cooperative formation of ammonium carbamate chains. Primary,secondary (1°,2°) alkylethylenediamine-appended variants are of particular interest because of their low CO2 step pressures (≤1 mbar at 40 °C), minimal adsorption/desorption hysteresis, and high thermal stability. Herein, we demonstrate that further increasing the size of the alkyl group on the secondary amine affords enhanced stability against diamine volatilization, but also leads to surprising two-step CO2 adsorption/desorption profiles. This two-step behavior likely results from steric interactions between ammonium carbamate chains induced by the asymmetrical hexagonal pores of Mg2(dobpdc) and leads to decreased CO2 working capacities and increased water co-adsorption under humid conditions. To minimize these unfavorable steric interactions, we targeted diamine-appended variants of the isoreticularly expanded framework Mg2(dotpdc) (dotpdc4– = 4,4′′-dioxido-[1,1′:4′,1′′-terphenyl]-3,3′′-dicarboxylate), reported here for the first time, and the previously reported isomeric framework Mg-IRMOF-74-II or Mg2(pc-dobpdc) (pc-dobpdc4– = 3,3′-dioxidobiphenyl-4,4′-dicarboxylate, pc = para-carboxylate), which, in contrast to Mg2(dobpdc), possesses uniformly hexagonal pores. By minimizing the steric interactions between ammonium carbamate chains, these frameworks enable a single CO2 adsorption/desorption step in all cases, as well as decreased water co-adsorption and increased stability to diamine loss. Functionalization of Mg2(pc-dobpdc) with large diamines such as N-(n-heptyl)ethylenediamine results in optimal adsorption behavior, highlighting the advantage of tuning both the pore shape and the diamine size for the development of new adsorbents for carbon capture applications. PMID:29629084
Medicare+Choice: what lies ahead?
Layne, R Jeffrey
2002-03-01
Health plans have continued to exit the Medicare+Choice program in recent years, despite efforts of Congress and the Centers for Medicare and Medicaid Services (CMS) to reform the program. Congress and CMS therefore stand poised to make additional, substantial reforms to the program. CMS has proposed to consolidate its oversight of the program, extend the due date for Medicare+Choice plans to file their adjusted community rate proposals, revise risk-adjustment processes, streamline the marketing review process, enhance quality-improvement requirements, institute results based performance assessment audits, coordinate policy changes to coincide with contracting cycles, expand its fall advertising campaign for the program, provide better employer-based Medicare options for beneficiaries, and take steps to minimize beneficiary costs. Congressional leaders have proposed various legislative remedies to improve the program, including creation of an entirely new pricing structure for the program based on a competitive bidding process.
Investigations for the Recycle of Pyroprocessed Uranium
NASA Astrophysics Data System (ADS)
Westphal, B. R.; Price, J. C.; Chambers, E. E.; Patterson, M. N.
Given the renewed interest in uranium from the pyroprocessing of used nuclear fuel in a molten salt system, the two biggest hurdles for marketing the uranium are radiation levels and transuranic content. A radiation level as low as possible is desired so that handling operations can be performed directly with the uranium. The transuranic content of the uranium will affect the subsequent waste streams generated and thus should also be minimized. Although the pyroprocessing technology was originally developed without regard to radiation and transuranic levels, adaptations to the process have been considered. Process conditions have been varied during the distillation and casting cycles of the process, with increasing temperature showing the largest effect on the reduction of radiation levels. Transuranic levels can be reduced significantly by incorporating a pre-step in the salt distillation operation to remove a majority of the salt prior to distillation.
Fabrication of large area Si cylindric drift detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, W.; Kraner, H.W.; Li, Z.
1993-04-01
Processing steps for an advanced Si drift detector, a large-area cylindrical drift detector (CDD), with the exception of the ion implantation, were carried out in the BNL class 100 cleanroom. The double-sided planar process technique was developed for the fabrication of the CDD. Important improvements of the double-sided planar process in this fabrication are the introduction of an Al implantation protection mask and the retention of a 1000 Å oxide layer in the p-window during the implantation. Another important design feature of the CDD is the structure called the "river," which allows the current generated at the Si-SiO2 interface to "flow" into the guard anode, and thus can minimize the leakage current at the signal anode. The test results showed that most of the signal anodes have a leakage current of about 0.3 nA/cm² for the best detector.
Single-step treatment of 2,4-dinitrotoluene via zero-valent metal reduction and chemical oxidation.
Thomas, J Mathew; Hernandez, Rafael; Kuo, Chiang-Hai
2008-06-30
Many nitroaromatic compounds (NACs) are considered toxic and potential carcinogens. The purpose of this study was to develop an integrated reductive/oxidative process for treating NAC-contaminated waters. The process consists of the combination of zero-valent iron and an ozonation-based treatment technique. Corrosion promoters are added to the contaminated water to minimize passivation of the metallic species. Water contaminated with 2,4-dinitrotoluene (DNT) was treated with the integrated process using a recirculated batch reactor. It was demonstrated that addition of corrosion promoters to the contaminated water enhances the reduction of 2,4-DNT with zero-valent iron. The addition of corrosion promoters resulted in a 62% reduction of 2,4-DNT to 2,4-diaminotoluene. The data show that iron reduced the 2,4-DNT and ozone oxidized the reduction products, resulting in a 73% removal of TOC and a 96% decrease in 2,4-DNT concentration.
Jönsson, Leif J; Martín, Carlos
2016-01-01
Biochemical conversion of lignocellulosic feedstocks to advanced biofuels and other commodities through a sugar-platform process involves a pretreatment step enhancing the susceptibility of the cellulose to enzymatic hydrolysis. A side effect of pretreatment is formation of lignocellulose-derived by-products that inhibit microbial and enzymatic biocatalysts. This review provides an overview of the formation of inhibitory by-products from lignocellulosic feedstocks as a consequence of using different pretreatment methods and feedstocks as well as an overview of different strategies used to alleviate problems with inhibitors. As technologies for biorefining of lignocellulose become mature and are transferred from laboratory environments to industrial contexts, the importance of management of inhibition problems is envisaged to increase as issues that become increasingly relevant will include the possibility to use recalcitrant feedstocks, obtaining high product yields and high productivity, minimizing the charges of enzymes and microorganisms, and using high solids loadings to obtain high product titers. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Translating Big Data into Smart Data for Veterinary Epidemiology.
VanderWaal, Kimberly; Morrison, Robert B; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M
2017-01-01
The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing "big" data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step for progressing from simply having "big data" to create "smart data," with the objective of improving understanding of health risks, effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues.
NASA Astrophysics Data System (ADS)
Labin, Amichai M.; Safuri, Shadi K.; Ribak, Erez N.; Perlman, Ido
2014-07-01
Vision starts with the absorption of light by the retinal photoreceptors—cones and rods. However, due to the ‘inverted’ structure of the retina, the incident light must propagate through reflecting and scattering cellular layers before reaching the photoreceptors. It has been recently suggested that Müller cells function as optical fibres in the retina, transferring light illuminating the retinal surface onto the cone photoreceptors. Here we show that Müller cells are wavelength-dependent wave-guides, concentrating the green-red part of the visible spectrum onto cones and allowing the blue-purple part to leak onto nearby rods. This phenomenon is observed in the isolated retina and explained by a computational model, for the guinea pig and the human parafoveal retina. Therefore, light propagation by Müller cells through the retina can be considered as an integral part of the first step in the visual process, increasing photon absorption by cones while minimally affecting rod-mediated vision.
A Global Registration Algorithm for the Single Closed-Ring Multi-Station Point Cloud
NASA Astrophysics Data System (ADS)
Yang, R.; Pan, L.; Xiang, Z.; Zeng, H.
2018-04-01
To address the global registration problem of a single closed-ring, multi-station point cloud, a formula for computing the error of the rotation matrix was constructed according to the definition of the error. A global registration algorithm for the multi-station point cloud was then derived by minimizing this rotation-matrix error, and fast-computing formulas for the transformation matrix were given together with their implementation steps and a simulation experiment scheme. Comparing three different processing schemes for the multi-station point cloud, the experimental results verified the effectiveness of the new global registration method, which could effectively complete the global registration of the point cloud.
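A building block of any such registration is the pairwise rigid transform between neighboring stations. The sketch below estimates it from assumed point correspondences with the SVD-based orthogonal Procrustes (Kabsch) solution and reports one possible rotation-error measure; the paper's contribution, distributing the closure error around the full ring, is not reproduced here.

import numpy as np

def rigid_transform(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2 (correspondences assumed known)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                     # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # enforce a proper rotation
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

rng = np.random.default_rng(5)
src = rng.uniform(-10, 10, (200, 3))                  # points observed from station k
theta = np.deg2rad(15.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([2.0, -1.0, 0.5]) + 0.01 * rng.standard_normal((200, 3))

R, t = rigid_transform(src, dst)
# One possible rotation-error measure: the angle of the residual rotation R_true^T R
err = np.arccos(np.clip((np.trace(R_true.T @ R) - 1.0) / 2.0, -1.0, 1.0))
print("rotation error (deg):", np.degrees(err))

In a closed ring, composing the pairwise rotations around the loop should return the identity; the residual of that product is the kind of rotation-matrix error a global adjustment seeks to minimize.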
Parotitis and Sialendoscopy of the Parotid Gland.
Hernandez, Stephen; Busso, Carlos; Walvekar, Rohan R
2016-04-01
Nonneoplastic disorders of the salivary glands involve inflammatory processes. These disorders have been managed conservatively with antibiotics, warm compresses, massage, sialogogues, and adequate hydration. Up to 40% of patients may have an inadequate response or persistent symptoms. When conservative techniques fail, the next step is operative intervention. Sialendoscopy offers a minimally invasive option for the diagnosis and management of chronic inflammatory disorders of the salivary glands and offers the option of gland and function preservation. In this article, we review some of the more common nonneoplastic disorders of the parotid gland, indications for diagnostic and interventional sialendoscopy, and operative techniques. Copyright © 2016 Elsevier Inc. All rights reserved.
Synthesis of Platinum-nickel Nanowires and Optimization for Oxygen Reduction Performance
Alia, Shaun M.; Pivovar, Bryan S.
2018-01-01
Platinum-nickel (Pt-Ni) nanowires were developed as fuel cell electrocatalysts, and were optimized for performance and durability in the oxygen reduction reaction. Spontaneous galvanic displacement was used to deposit Pt layers onto Ni nanowire substrates. The synthesis approach produced catalysts with high specific activities and high Pt surface areas. Hydrogen annealing improved Pt and Ni mixing and specific activity. Acid leaching was used to preferentially remove Ni near the nanowire surface, and oxygen annealing was used to stabilize near-surface Ni, improving durability and minimizing Ni dissolution. These protocols detail the optimization of each post-synthesis processing step, including hydrogen annealing to 250 °C, exposure to 0.1 M nitric acid, and oxygen annealing to 175 °C. Through these steps, Pt-Ni nanowires produced activities more than an order of magnitude higher than Pt nanoparticles, while offering significant durability improvements. The presented protocols are based on Pt-Ni systems in the development of fuel cell catalysts. Furthermore, these techniques have also been used for a variety of metal combinations, and can be applied to develop catalysts for a number of electrochemical processes.
NASA Astrophysics Data System (ADS)
Nadolny, K.; Kapłonek, W.
2014-08-01
The following work is an analysis of flatness deviations of a workpiece made of X2CrNiMo17-12-2 austenitic stainless steel. The workpiece surface was shaped using efficient machining techniques (milling, grinding, and smoothing). After the machining was completed, all surfaces underwent stylus measurements in order to obtain surface flatness and roughness parameters. For this purpose the stylus profilometer Hommel-Tester T8000 by Hommelwerke with HommelMap software was used. The research results are presented in the form of 2D surface maps, 3D surface topographies with extracted single profiles, Abbott-Firestone curves, and graphical studies of the Sk parameters. The results of these experimental tests proved the possibility of a correlation between flatness and roughness parameters, and also enabled an analysis of changes in these parameters from shaping and rough grinding to finish machining. The main novelty of this paper is the comprehensive analysis of measurement results obtained during a three-step machining process of austenitic stainless steel. Simultaneous analysis of the individual machining steps (milling, grinding, and smoothing) enabled a complementary assessment of the process of shaping the workpiece surface macro- and micro-geometry, giving special consideration to minimizing the flatness deviations.
Holistic approach for overlay and edge placement error to meet the 5nm technology node requirements
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Slachter, Bram; Kubis, Michael; Tel, Wim; Hinnen, Paul; Maslow, Mark; Dillen, Harm; Ma, Eric; Chou, Kevin; Liu, Xuedong; Ren, Weiming; Hu, Xuerang; Wang, Fei; Liu, Kevin
2018-03-01
In this paper, we discuss the metrology methods and error budget that describe the edge placement error (EPE). EPE quantifies the pattern fidelity of a device structure made in a multi-patterning scheme. Here the pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. EPE is computed by combining optical and e-beam metrology data. We show that a high-NA optical scatterometer can be used to densely measure in-device CD and overlay errors. A large-field e-beam system enables massive CD metrology, which is used to characterize the local CD error. The local CD distribution needs to be characterized beyond 6 sigma, which requires a high-throughput e-beam system. We present in this paper the first images of a multi-beam e-beam inspection system. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As a use case, we evaluated a 5-nm logic patterning process based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography.
The Energy Landscape, Folding Pathways and the Kinetics of a Knotted Protein
Prentiss, Michael C.; Wales, David J.; Wolynes, Peter G.
2010-01-01
The folding pathway and rate coefficients for the folding of a knotted protein are calculated for a potential energy function with minimal energetic frustration. A kinetic transition network is constructed using the discrete path sampling approach, and the resulting potential energy surface is visualized by constructing disconnectivity graphs. Owing to topological constraints, the low-lying portion of the landscape consists of three distinct regions, corresponding to the native knotted state and to configurations where either the N or C terminus is not yet folded into the knot. The fastest folding pathways from denatured states exhibit early formation of the N terminus portion of the knot and a rate-determining step where the C terminus is incorporated. The low-lying minima with the N terminus knotted and the C terminus free therefore constitute an off-pathway intermediate for this model. The insertion of both the N and C termini into the knot occurs late in the folding process, creating large energy barriers that are the rate limiting steps in the folding process. When compared to other proteins of similar length, this system folds over six orders of magnitude more slowly. PMID:20617197
Synthesis of Platinum-nickel Nanowires and Optimization for Oxygen Reduction Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alia, Shaun M.; Pivovar, Bryan S.
Platinum-nickel (Pt-Ni) nanowires were developed as fuel cell electrocatalysts, and were optimized for performance and durability in the oxygen reduction reaction. Spontaneous galvanic displacement was used to deposit Pt layers onto Ni nanowire substrates. The synthesis approach produced catalysts with high specific activities and high Pt surface areas. Hydrogen annealing improved Pt and Ni mixing and specific activity. Acid leaching was used to preferentially remove Ni near the nanowire surface, and oxygen annealing was used to stabilize near-surface Ni, improving durability and minimizing Ni dissolution. These protocols detail the optimization of each post-synthesis processing step, including hydrogen annealing to 250 °C, exposure to 0.1 M nitric acid, and oxygen annealing to 175 °C. Through these steps, Pt-Ni nanowires produced activities more than an order of magnitude higher than Pt nanoparticles, while offering significant durability improvements. The presented protocols are based on Pt-Ni systems in the development of fuel cell catalysts. Furthermore, these techniques have also been used for a variety of metal combinations, and can be applied to develop catalysts for a number of electrochemical processes.
Preferred color correction for digital LCD TVs
NASA Astrophysics Data System (ADS)
Kim, Kyoung Tae; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho
2009-01-01
Instead of colorimetric color reproduction, preferred color correction is applied in digital TVs to improve subjective image quality. The first step of the preferred color correction is to survey the preferred color coordinates of memory colors. This can be achieved by off-line human visual tests. The next step is to extract pixels of memory colors representing skin, grass and sky. For the detected pixels, colors are shifted towards the desired coordinates identified in advance. This correction process may result in undesirable contours on the boundaries between the corrected and un-corrected areas. For digital TV applications, the process of extraction and correction should be applied in every frame of the moving images. This paper presents a preferred color correction method in LCH color space. Values of chroma and hue are corrected independently. Undesirable contours on the boundaries of correction are minimized. The proposed method changes the coordinates of memory color pixels towards the target color coordinates. The amount of correction is determined based on the averaged coordinate of the extracted pixels. The proposed method maintains the relative color difference within memory color areas. Performance of the proposed method is evaluated using the paired comparison method. Results of experiments indicate that the proposed method can reproduce perceptually pleasing images to viewers.
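The sketch below illustrates the correction mechanics in LCH: pixels whose hue falls in an assumed memory-color range receive a hue and chroma shift toward a target coordinate, with the shift computed from the average coordinate of the detected pixels and applied through a soft membership weight so that no hard contour appears at the boundary. The hue range, target values, and gain are assumptions, not the paper's tuned parameters.

import numpy as np

def correct_memory_color(L, C, H, target_h, target_c, h_center, h_width, gain=0.6):
    # Soft membership in the assumed memory-color hue range (quadratic falloff)
    dh = (H - h_center + 180.0) % 360.0 - 180.0
    w = np.clip(1.0 - np.abs(dh) / h_width, 0.0, 1.0) ** 2

    detected = w > 0
    if not np.any(detected):
        return C, H
    # Average coordinate of the detected pixels (circular mean for hue)
    mean_h = np.angle(np.mean(np.exp(1j * np.deg2rad(H[detected]))), deg=True) % 360.0
    mean_c = C[detected].mean()
    # Global shift derived from the averaged coordinate, applied per pixel with weight w
    dh_shift = ((target_h - mean_h + 180.0) % 360.0) - 180.0
    dc_shift = target_c - mean_c
    H_out = (H + gain * w * dh_shift) % 360.0
    C_out = np.clip(C + gain * w * dc_shift, 0.0, None)
    return C_out, H_out

# Toy frame: per-pixel lightness, chroma, hue; push "skin-like" hues toward an assumed target
rng = np.random.default_rng(6)
L = rng.uniform(40, 80, 10000)
C = rng.uniform(10, 40, 10000)
H = rng.uniform(0, 360, 10000)
C2, H2 = correct_memory_color(L, C, H, target_h=40.0, target_c=30.0, h_center=30.0, h_width=25.0)
sel = np.abs(((H - 30.0 + 180.0) % 360.0) - 180.0) < 25.0
print("mean hue of detected pixels before/after:", H[sel].mean(), H2[sel].mean())

Lightness is left untouched in this sketch; the abstract likewise states that chroma and hue are corrected independently.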
The detailed measurement of foot clearance by young adults during stair descent.
Telonio, A; Blanchet, S; Maganaris, C N; Baltzopoulos, V; McFadyen, B J
2013-04-26
Foot clearance is an important variable for understanding safe stair negotiation, but few studies have provided detailed measures of it. This paper presents a new method to calculate minimal shoe clearance during stair descent and compares it to previous literature. Seventeen healthy young subjects descended a five step staircase with step treads of 300 mm and step heights of 188 mm. Kinematic data were collected with an Optotrak system (model 3020) and three non-colinear infrared markers on the feet. Ninety points were digitized on the foot sole prior to data collection using a 6 marker probe and related to the triad of markers on the foot. The foot sole was reconstructed using the Matlab (version 7.0) "meshgrid" function and minimal distance to each step edge was calculated for the heel, toe and foot sole. Results showed significant differences in minimum clearance between sole, heel and toe, with the shoe sole being the closest and the toe the furthest. While the hind foot sole was closest for 69% of the time, the actual minimum clearance point on the sole did vary across subjects and staircase steps. This new method, and the findings on healthy young subjects, can be applied to future studies of other populations and staircase dimensions. Copyright © 2013 Elsevier Ltd. All rights reserved.
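The geometric core of the method, transforming pre-digitized sole points into the lab frame and taking the minimal distance to a step edge, can be sketched as below. The marker-derived pose, the 90-point sole grid, and the edge location are assumed stand-ins for one time sample; tracking, digitization with the probe, and the meshgrid reconstruction are taken as already done.

import numpy as np

def min_clearance(sole_pts_foot, R, p, edge_point, edge_dir):
    """Minimal distance from sole points (foot frame) to a step-edge line (lab frame)."""
    pts = sole_pts_foot @ R.T + p                     # rigid transform into the lab frame
    d = edge_dir / np.linalg.norm(edge_dir)
    rel = pts - edge_point
    dist = np.linalg.norm(rel - (rel @ d)[:, None] * d[None, :], axis=1)
    i = np.argmin(dist)
    return dist[i], pts[i]

# Toy data: a flat 90-point sole grid in the foot frame, foot pitched nose-down 10 degrees
u, v = np.meshgrid(np.linspace(0.0, 0.25, 10), np.linspace(-0.04, 0.04, 9))
sole = np.stack([u.ravel(), v.ravel(), np.zeros(u.size)], axis=1)
th = np.deg2rad(10.0)
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
p = np.array([0.05, 0.0, 0.23])                       # assumed foot origin above the step

edge_point = np.array([0.30, 0.0, 0.188])             # step nosing at 188 mm height
edge_dir = np.array([0.0, 1.0, 0.0])                  # edge runs along the step width
dmin, closest = min_clearance(sole, R, p, edge_point, edge_dir)
print(f"minimal clearance {1000.0 * dmin:.1f} mm at sole point {closest.round(3)}")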
Trash-to-Gas: Using Waste Products to Minimize Logistical Mass During Long Duration Space Missions
NASA Technical Reports Server (NTRS)
Hintze, Paul E.; Caraccio, A. J.; Anthony, S. M.; Tsoras, A. N.; Devor, Robert; Captain, James G.; Nur, Mononita
2013-01-01
Just as waste-to-energy processes utilizing municipal landfill and biomass wastes are finding increased terrestrial uses, the Trash-to-Gas (TtG) project seeks to convert waste generated during spaceflight into high value commodities. These include methane for propulsion and water for life support in addition to a variety of other gases. TtG is part of the Logistic Reduction and Repurposing (LRR) project under the NASA Advanced Exploration Systems Program. The LRR project will enable a largely mission-independent approach to minimize logistics contributions to total mission architecture mass. LRR includes technologies that reduce the amount of consumables that need to be sent to space, repurpose items sent to space, or convert wastes to commodities. Currently, waste generated on the International Space Station is stored inside a logistic module which is de-orbited into Earth's atmosphere for destruction. The waste consists of food packaging, food, clothing and other items. This paper will discuss current results on incineration as a waste processing method. Incineration is part of a two-step process to produce methane from waste: first the waste is converted to carbon oxides; second, the carbon oxides are fed to a Sabatier reactor where they are converted to methane. The quantities of carbon dioxide, carbon monoxide, methane and water were measured under the different thermal degradation conditions. The overall carbon conversion efficiency and water recovery are discussed.
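A back-of-the-envelope mass balance shows how the two steps chain together. All numbers below (waste carbon fraction, the CO2/CO split from incineration, and the Sabatier conversion) are assumed for illustration and are not measurements from the paper.

# Step 1: incineration converts waste carbon to CO2 and CO.
# Step 2: Sabatier reactor: CO2 + 4 H2 -> CH4 + 2 H2O and CO + 3 H2 -> CH4 + H2O.
M = {"C": 12.011, "CO2": 44.009, "CO": 28.010, "CH4": 16.043, "H2O": 18.015, "H2": 2.016}

waste_kg = 1.0               # dry waste processed
carbon_frac = 0.45           # assumed carbon mass fraction of the waste
to_co2, to_co = 0.85, 0.10   # assumed split of waste carbon into CO2 and CO
sabatier_conv = 0.95         # assumed single-pass conversion of the carbon oxides

mol_c = waste_kg * carbon_frac * 1000.0 / M["C"]
mol_co2, mol_co = mol_c * to_co2, mol_c * to_co

mol_ch4 = sabatier_conv * (mol_co2 + mol_co)
mol_h2o = sabatier_conv * (2.0 * mol_co2 + 1.0 * mol_co)
mol_h2 = sabatier_conv * (4.0 * mol_co2 + 3.0 * mol_co)

print(f"CH4 produced: {mol_ch4 * M['CH4'] / 1000.0:.3f} kg")
print(f"H2O produced: {mol_h2o * M['H2O'] / 1000.0:.3f} kg")
print(f"H2 required:  {mol_h2 * M['H2'] / 1000.0:.3f} kg")

The last line makes explicit that methane production also carries a hydrogen requirement, which scales with how much of the carbon leaves the incinerator as CO2 rather than CO.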
Trash-to-Gas: Using Waste Products to Minimize Logistical Mass During Long Duration Space Missions
NASA Technical Reports Server (NTRS)
Hintze, Paul. E.; Caraccio, Anne J.; Anthony, Stephen M.; Tsoras, Alexandra N.; Nur, Monoita; Devor, Robert; Captain, James G.
2013-01-01
Just as waste-to-energy processes utilizing municipal landfill and biomass wastes are finding increased terrestrial uses, the Trash-to-Gas (TtG) project seeks to convert waste generated during spaceflight into high value commodities. These include methane for propulsion and water for life support in addition to a variety of other gases. TtG is part of the Logistic Reduction and Repurposing (LRR) project under the NASA Advanced Exploration Systems Program. The LRR project will enable a largely mission-independent approach to minimize logistics contributions to total mission architecture mass. LRR includes technologies that reduce the amount of consumables that need to be sent to space, repurpose items sent to space, or convert wastes to commodities. Currently, waste generated on the International Space Station is stored inside a logistic module which is de-orbited into Earth's atmosphere for destruction. The waste consists of food packaging, food, clothing and other items. This paper will discuss current results on incineration as a waste processing method. Incineration is part of a two-step process to produce methane from waste: first the waste is converted to carbon oxides; second, the carbon oxides are fed to a Sabatier reactor where they are converted to methane. The quantities of carbon dioxide, carbon monoxide, methane and water were measured under the different thermal degradation conditions. The overall carbon conversion efficiency and water recovery are discussed.
Nonlinear Response of Layer Growth Dynamics in the Mixed Kinetics-Bulk-Transport Regime
NASA Technical Reports Server (NTRS)
Vekilov, Peter G.; Alexander, J. Iwan D.; Rosenberger, Franz
1996-01-01
In situ high-resolution interferometry on horizontal facets of the protein lysozyme reveals that the local growth rate R, vicinal slope p, and tangential (step) velocity v fluctuate by up to 80% of their average values. The time scale of these fluctuations, which occur under steady bulk transport conditions through the formation and decay of step bunches (macrosteps), is of the order of 10 min. The fluctuation amplitude of R increases with growth rate (supersaturation) and crystal size, while the amplitude of the v and p fluctuations changes relatively little. Based on a stability analysis for equidistant step trains in the mixed transport-interface-kinetics regime, we argue that the fluctuations originate from the coupling of bulk transport with nonlinear interface kinetics. Furthermore, step bunches moving across the interface in the direction of or opposite to the buoyancy-driven convective flow increase or decrease in height, respectively. This is in agreement with analytical treatments of the interaction of moving steps with solution flow. Major excursions in growth rate are associated with the formation of lattice defects (striations). We show that, in general, the system-dependent kinetic Peclet number, Pe_k, i.e., the relative weight of bulk transport and interface kinetics in the control of the growth process, governs the step bunching dynamics. Since Pe_k can be modified by either forced solution flow or suppression of buoyancy-driven convection under reduced gravity, this model provides a rationale for the choice of specific transport conditions to minimize the formation of compositional inhomogeneities under steady bulk nutrient crystallization conditions.
Description of bioremediation of soils using the model of a multistep system of microorganisms
NASA Astrophysics Data System (ADS)
Lubysheva, A. I.; Potashev, K. A.; Sofinskaya, O. A.
2018-01-01
The paper deals with the development of a mathematical model describing the interaction of a multi-step system of microorganisms in soil polluted with oil products. Each step in this system feeds on the products of the vital activity of the previous step. Six different models of the multi-step system are considered. The model coefficients were determined by minimizing the residual between the calculated and experimental data using an original algorithm based on the Levenberg-Marquardt method, combined with the Monte Carlo method for finding the initial approximation.
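The coefficient-fitting step described above, Levenberg-Marquardt refinement started from Monte Carlo initial guesses, can be sketched as follows. The two-exponential model, bounds, and synthetic data are hypothetical stand-ins for the authors' microbial kinetics, not their equations.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(coeffs, t, observed, model):
    """Difference between model prediction and experimental data."""
    return model(t, coeffs) - observed

def fit_with_random_restarts(model, t, observed, bounds, n_restarts=50, seed=0):
    """Monte Carlo sampling of initial guesses, each refined by a Levenberg-Marquardt solver."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    best = None
    for _ in range(n_restarts):
        x0 = rng.uniform(lo, hi)                       # random initial approximation
        fit = least_squares(residuals, x0, args=(t, observed, model), method="lm")
        if best is None or fit.cost < best.cost:
            best = fit
    return best

# Hypothetical two-exponential decay standing in for the microbial kinetics.
model = lambda t, c: c[0] * np.exp(-c[1] * t) + c[2] * np.exp(-c[3] * t)
t = np.linspace(0, 10, 40)
observed = model(t, [1.0, 0.5, 0.3, 0.05]) + np.random.default_rng(1).normal(0, 0.01, t.size)
best = fit_with_random_restarts(model, t, observed, bounds=([0, 0, 0, 0], [2, 2, 2, 2]))
print(best.x)
```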
Users guide to E859 phoswich analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costales, J.B.
1992-11-30
In this memo the authors describe the analysis path used to transform the phoswich data from raw data banks into cross sections suitable for publication. The primary purpose of this memo is not to document each analysis step in great detail but rather to point the reader to the Fortran code used and to point out the essential features of the analysis path. A flow chart which summarizes the various steps performed to massage the data from beginning to end is given. In general, each step corresponds to a Fortran program which was written to perform that particular task. The automation of the data analysis has been kept purposefully minimal in order to ensure the highest quality of the final product. However, tools have been developed which ease the non-automated steps. There are two major parallel routes for the data analysis: data reduction and acceptance determination using detailed GEANT Monte Carlo simulations. In this memo, the authors will first describe the data reduction up to the point where PHAD banks (Pass 1-like banks) are created. They will then describe the steps taken in the GEANT Monte Carlo route. Note that a detailed memo describing the methodology of the acceptance corrections has already been written. Therefore the discussion of the acceptance determination will be kept to a minimum and the reader will be referred to the other memo for further details. Finally, they will describe the cross section formation process and how final spectra are extracted.
Energy minimization for self-organized structure formation and actuation
NASA Astrophysics Data System (ADS)
Kofod, Guggi; Wirges, Werner; Paajanen, Mika; Bauer, Siegfried
2007-02-01
An approach for creating complex structures with embedded actuation in planar manufacturing steps is presented. Self-organization and energy minimization are central to this approach, illustrated with a model based on minimization of the hyperelastic free energy strain function of a stretched elastomer and the bending elastic energy of a plastic frame. A tulip-shaped gripper structure illustrates the technological potential of the approach. Advantages are simplicity of manufacture, complexity of final structures, and the ease with which any electroactive material can be exploited as means of actuation.
NASA Astrophysics Data System (ADS)
Curcó, David; Casanovas, Jordi; Roca, Marc; Alemán, Carlos
2005-07-01
A method for generating atomistic models of dense amorphous polymers is presented. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, a relaxation algorithm is applied to minimize the non-bonding interactions. Two alternative relaxation methods, based on simple minimization and Concerted Rotation techniques, have been implemented. The performance of the method has been checked by simulating polyethylene, polypropylene, nylon 6, poly(L,D-lactic acid) and polyglycolic acid.
Minimal Power Latch for Single-Slope ADCs
NASA Technical Reports Server (NTRS)
Hancock, Bruce R. (Inventor)
2015-01-01
A latch circuit that uses two interoperating latches. The latch circuit has the beneficial feature that it switches only a single time during a measurement that uses a stair step or ramp function as an input signal in an analog to digital converter. This feature minimizes the amount of power that is consumed in the latch and also minimizes the amount of high frequency noise that is generated by the latch. An application using a plurality of such latch circuits in a parallel decoding ADC for use in an image sensor is given as an example.
van de Vis, J W; Poelman, M; Lambooij, E; Bégout, M-L; Pilarczyk, M
2012-02-01
The objective was to take a first step in the development of a process-oriented quality assurance (QA) system for monitoring and safeguarding of fish welfare at a company level. A process-oriented approach is focused on preventing hazards and involves establishment of critical steps in a process that requires careful control. The seven principles of the Hazard Analysis Critical Control Points (HACCP) concept were used as a framework to establish the QA system. HACCP is an internationally agreed approach for management of food safety, which was adapted for the purpose of safeguarding and monitoring the welfare of farmed fish. As the main focus of this QA system is farmed fish welfare assurance at a company level, it was named Fish Welfare Assurance System (FWAS). In this paper we present the initial steps of setting up FWAS for on-growing of sea bass (Dicentrarchus labrax), carp (Cyprinus carpio) and European eel (Anguilla anguilla). Four major hazards were selected, which were fish species dependent. Critical Control Points (CCPs) that need to be controlled to minimize or avoid the four hazards are presented. For FWAS, monitoring of CCPs at a farm level is essential. For monitoring purposes, Operational Welfare Indicators (OWIs) are needed to establish whether critical biotic, abiotic, managerial and environmental factors are controlled. For the OWIs we present critical limits/target values. A critical limit is the maximum or minimum value to which a factor must be controlled at a critical control point to prevent, eliminate or reduce a hazard to an acceptable level. For managerial factors target levels are more appropriate than critical limits. Regarding the international trade of farmed fish products, we propose that FWAS needs to be standardized in aquaculture chains. For this standardization a consensus on the concept of fish welfare, methods to assess welfare objectively and knowledge on the needs of farmed fish are required.
Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard
This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which considers two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchase from the day-ahead market with stochastic optimization. The historical data of day-ahead hourly electric power consumption is used to provide the forecast results with the forecasting error, which is represented by a chance constraint and formulated into a deterministic form using a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the method.
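As a rough illustration of the first step, the sketch below converts a chance constraint on forecast error into a deterministic purchase requirement by finding the (1 − ε) quantile of a Gaussian mixture error model. The mixture parameters and forecasts are hypothetical, and the paper's full formulation additionally couples this with the SOCP/ADMM loss-minimization step.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def gmm_cdf(x, weights, means, stds):
    """CDF of a one-dimensional Gaussian mixture."""
    return sum(w * norm.cdf(x, m, s) for w, m, s in zip(weights, means, stds))

def deterministic_purchase(forecast, weights, means, stds, epsilon=0.05):
    """Smallest purchase P such that Pr(demand <= P) >= 1 - epsilon,
    with demand = forecast + error and error ~ GMM."""
    q = brentq(lambda e: gmm_cdf(e, weights, means, stds) - (1 - epsilon), -50, 50)
    return forecast + q

# Hypothetical hourly forecasts (MW) and a two-component error model.
purchases = [deterministic_purchase(f, [0.7, 0.3], [0.0, 1.5], [0.8, 2.0])
             for f in [30.0, 42.5, 38.0]]
print(np.round(purchases, 2))
```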
Optimization of airport security lanes
NASA Astrophysics Data System (ADS)
Chen, Lin
2018-05-01
Current airport security management systems are widely implemented around the world to ensure the safety of passengers, but they might not be optimal. This paper aims to seek a better security system, which can maximize security while minimizing inconvenience to passengers. Firstly, we apply a Petri net model to analyze the steps where the main bottlenecks lie. Based on average tokens and transition times, the most time-consuming steps of the security process can be found, including inspection of passengers' identification and documents, preparing belongings to be scanned, and the process of retrieving belongings. Then, we develop a queuing model to identify the factors affecting those time-consuming steps. As for future improvement, effective measures include converting the current system to a single-queue, multi-server configuration, intelligently predicting the number of security checkpoints that should be opened, and building up green biological convenience lanes. Furthermore, to test the theoretical results, we apply data to simulate the model, and the simulation results are consistent with those obtained through modeling. Finally, we apply our queuing model to a multi-cultural background. The result suggests that by quantifying and modifying the variance in wait time, the model can be applied to individuals with various customs and habits. Generally speaking, our paper considers multiple affecting factors, employs several models and performs extensive calculations, which makes it practical and reliable for use in reality. In addition, with more precise data available, we can further test and improve our models.
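The queuing analysis can be illustrated with the textbook M/M/c (Erlang C) expression for the mean wait as a function of the number of open lanes; the arrival and service rates below are hypothetical, not the paper's calibrated values.

```python
import math

def mmc_wait_time(arrival_rate, service_rate, servers):
    """Mean waiting time in queue for an M/M/c system (Erlang C formula)."""
    rho = arrival_rate / (servers * service_rate)
    if rho >= 1:
        raise ValueError("Unstable queue: utilization must be below 1.")
    a = arrival_rate / service_rate
    erlang_b = a ** servers / math.factorial(servers) / sum(
        a ** k / math.factorial(k) for k in range(servers + 1))
    erlang_c = erlang_b / (1 - rho + rho * erlang_b)   # probability an arrival must wait
    return erlang_c / (servers * service_rate - arrival_rate)

# Hypothetical figures: 3 passengers/min arriving, 1.2 passengers/min served per lane.
for lanes in (3, 4, 5):
    print(lanes, "lanes ->", round(mmc_wait_time(3.0, 1.2, lanes) * 60, 1), "s average wait")
```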
de Carvalho, Alberito Rodrigo; Andrade, Alexandro; Peyré-Tartaruga, Leonardo Alexandre
2015-01-01
One goal of locomotion is to move the body through space in the most economical way possible. However, little is known about the mechanical and energetic aspects of locomotion that are affected by low back pain, and, when such impairment occurs, about how the mechanical and energetic characteristics of locomotion are manifested in functional activities, especially with respect to the energy-minimizer mechanisms during locomotion. This study aimed: a) to describe the main energy-minimizer mechanisms of locomotion; b) to check whether there are signs of impairment of the mechanical and energetic characteristics of locomotion due to chronic low back pain (CLBP) which may compromise the energy-minimizer mechanisms. This study is characterized as a narrative literature review. The main theory that explains the minimization of energy expenditure during locomotion is the inverted pendulum mechanism, by which the energy-minimizer mechanism converts kinetic energy into potential energy of the center of mass and vice versa during the step. This mechanism is strongly influenced by spatio-temporal gait (locomotion) parameters such as step length and preferred walking speed, which, in turn, may be severely altered in patients with chronic low back pain. However, much remains to be understood about the effects of chronic low back pain on the individual's ability to walk economically, because functional impairment may compromise the mechanical and energetic characteristics of this type of gait, making it more costly. Thus, there are indications that such changes may compromise the functional energy-minimizer mechanisms. Copyright © 2014 Elsevier Editora Ltda. All rights reserved.
Purification-Free, Target-Selective Immobilization of a Protein from Cell Lysates.
Cha, Jaehyun; Kwon, Inchan
2018-02-27
Protein immobilization has been widely used for laboratory experiments and industrial processes. Preparation of a recombinant protein for immobilization usually requires laborious and expensive purification steps. Here, a novel purification-free, target-selective immobilization technique of a protein from cell lysates is reported. Purification steps are skipped by immobilizing a target protein containing a clickable non-natural amino acid (p-azidophenylalanine) in cell lysates onto alkyne-functionalized solid supports via bioorthogonal azide-alkyne cycloaddition. In order to achieve a target protein-selective immobilization, p-azidophenylalanine was introduced into an exogenous target protein, but not into endogenous non-target proteins using host cells with amber codon-free genomic DNAs. Immobilization of superfolder fluorescent protein (sfGFP) from cell lysates is as efficient as that of the purified sfGFP. Using two fluorescent proteins (sfGFP and mCherry), the authors also demonstrated that the target proteins are immobilized with a minimal immobilization of non-target proteins (target-selective immobilization). © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Are randomly grown graphs really random?
Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H
2001-10-01
We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph-older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
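A minimal simulation of this growing-graph model is straightforward; the sketch below adds one vertex per time step, joins a uniformly random pair with probability delta, and reports the largest component sizes (the parameters are illustrative only).

```python
import random
from collections import defaultdict

def grow_graph(t_steps, delta, seed=0):
    """Minimal growing-network model: each step adds a vertex; with probability delta,
    two vertices chosen uniformly at random are joined by an undirected edge."""
    random.seed(seed)
    adjacency = defaultdict(set)
    vertices = []
    for v in range(t_steps):
        vertices.append(v)
        adjacency[v]                                  # make sure isolated vertices appear
        if len(vertices) >= 2 and random.random() < delta:
            a, b = random.sample(vertices, 2)
            adjacency[a].add(b)
            adjacency[b].add(a)
    return adjacency

def component_sizes(adjacency):
    """Sizes of connected components via an iterative depth-first traversal."""
    seen, sizes = set(), []
    for start in adjacency:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adjacency[node]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        sizes.append(size)
    return sorted(sizes, reverse=True)

g = grow_graph(t_steps=100_000, delta=0.2)   # above the reported delta = 1/8 transition
print(component_sizes(g)[:5])
```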
A review on wetting and water condensation - Perspectives for CO2 condensation.
Snustad, Ingrid; Røe, Ingeborg T; Brunsvold, Amy; Ervik, Åsmund; He, Jianying; Zhang, Zhiliang
2018-06-01
Liquefaction of vapor is a necessary, but energy intensive step in several important process industries. This review identifies possible materials and surface structures for promoting dropwise condensation, known to increase efficiency of condensation heat transfer. Research on superhydrophobic and superomniphobic surfaces promoting dropwise condensation constitutes the basis of the review. In extension of this, knowledge is extrapolated to condensation of CO2. Global emissions of CO2 need to be minimized in order to reduce global warming, and liquefaction of CO2 is a necessary step in some carbon capture, transport and storage (CCS) technologies. The review is divided into three main parts: 1) An overview of recent research on superhydrophobicity and promotion of dropwise condensation of water, 2) An overview of recent research on superomniphobicity and dropwise condensation of low surface tension substances, and 3) Suggested materials and surface structures for dropwise CO2 condensation based on the two first parts. Copyright © 2018 Elsevier B.V. All rights reserved.
Microfluidic Remote Loading for Rapid Single-Step Liposomal Drug Preparation
Hood, R.R.; Vreeland, W. N.; DeVoe, D.L.
2014-01-01
Microfluidic-directed formation of liposomes is combined with in-line sample purification and remote drug loading for single step, continuous-flow synthesis of nanoscale vesicles containing high concentrations of stably loaded drug compounds. Using an on-chip microdialysis element, the system enables rapid formation of large transmembrane pH and ion gradients, followed by immediate introduction of amphipathic drug for real-time remote loading into the liposomes. The microfluidic process enables in-line formation of drug-laden liposomes with drug:lipid molar ratios of up to 1.3, and a total on-chip residence time of approximately 3 min, representing a significant improvement over conventional bulk-scale methods which require hours to days for combined liposome synthesis and remote drug loading. The microfluidic platform may be further optimized to support real-time generation of purified liposomal drug formulations with high concentrations of drugs and minimal reagent waste for effective liposomal drug preparation at or near the point of care. PMID:25003823
Unnatural substrates reveal the importance of 8-oxoguanine for in vivo mismatch repair by MutY
Livingston, Alison L.; O’Shea, Valerie L.; Kim, Taewoo; Kool, Eric T.; David, Sheila S.
2009-01-01
Escherichia coli MutY plays an important role in preventing mutations associated with the oxidative lesion 7,8-dihydro-8-oxo-2′-deoxyguanosine (OG) in DNA by excising adenines from OG:A mismatches as the first step of base excision repair. To determine the importance of specific steps in the base pair recognition and base removal process of MutY, we have evaluated the effects of modifications of the OG:A substrate on the kinetics of base removal, mismatch affinity and repair to G:C in an Escherichia coli-based assay. Surprisingly, adenine modification was tolerated in the cellular assay, while modification of OG results in minimal cellular repair. High affinity for the mismatch and efficient base removal require the presence of OG. Taken together, these results suggest that the presence of OG is a critical feature for MutY to locate OG:A mismatches and select the appropriate adenines for excision to initiate repair in vivo prior to replication. PMID:18026095
Reducing Lead in Drinking Water: A Manual for Minnesota's Schools.
ERIC Educational Resources Information Center
Minnesota State Dept. of Health, St. Paul.
This manual was designed to assist Minnesota's schools in minimizing the consumption of lead in drinking water by students and staff. It offers step-by-step instructions for testing and reducing lead in drinking water. The manual answers: Why is lead a health concern? How are children exposed to lead? Why is lead a special concern for schools? How…
Attractor reconstruction for non-linear systems: a methodological note
Nichols, J.M.; Nichols, J.D.
2001-01-01
Attractor reconstruction is an important step in the process of making predictions for non-linear time-series and in the computation of certain invariant quantities used to characterize the dynamics of such series. The utility of computed predictions and invariant quantities is dependent on the accuracy of attractor reconstruction, which in turn is determined by the methods used in the reconstruction process. This paper suggests methods by which the delay and embedding dimension may be selected for a typical delay coordinate reconstruction. A comparison is drawn between the use of the autocorrelation function and mutual information in quantifying the delay. In addition, a false nearest neighbor (FNN) approach is used in minimizing the number of delay vectors needed. Results highlight the need for an accurate reconstruction in the computation of the Lyapunov spectrum and in prediction algorithms.
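A minimal sketch of the delay-coordinate reconstruction discussed above: the delay is taken at the first zero crossing of the autocorrelation function (the mutual-information criterion is an alternative), and the series is then embedded in a chosen dimension; the false nearest neighbor step is omitted for brevity. The test signal is a hypothetical noisy sine rather than real data.

```python
import numpy as np

def choose_delay(x, max_lag=200):
    """Delay at the first zero crossing of the autocorrelation function."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    crossings = np.where(acf[:max_lag] <= 0)[0]
    return int(crossings[0]) if crossings.size else max_lag

def delay_embed(x, dim, tau):
    """Delay-coordinate reconstruction: rows are [x(t), x(t+tau), ..., x(t+(dim-1)tau)]."""
    n = x.size - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Hypothetical scalar series standing in for a measured time series.
t = np.linspace(0, 60, 3000)
x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
tau = choose_delay(x)
attractor = delay_embed(x, dim=3, tau=tau)
print(tau, attractor.shape)
```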
Dillon, Neal P; Fichera, Loris; Kesler, Kyle; Zuniga, M Geraldine; Mitchell, Jason E; Webster, Robert J; Labadie, Robert F
2017-09-01
This article presents the development and experimental validation of a methodology to reduce the risk of thermal injury to the facial nerve during minimally invasive cochlear implantation surgery. The first step in this methodology is a pre-operative screening process, in which medical imaging is used to identify those patients that present a significant risk of developing high temperatures at the facial nerve during the drilling phase of the procedure. Such a risk is calculated based on the density of the bone along the drilling path and the thermal conductance between the drilling path and the nerve, and provides a criterion to exclude high-risk patients from receiving the minimally invasive procedure. The second component of the methodology is a drilling strategy for manually-guided drilling near the facial nerve. The strategy utilizes interval drilling and mechanical constraints to enable better control over the procedure and the resulting generation of heat. The approach is tested in fresh cadaver temporal bones using a thermal camera to monitor temperature near the facial nerve. Results indicate that pre-operative screening may successfully exclude high-risk patients and that the proposed drilling strategy enables safe drilling for low-to-moderate risk patients.
NASA Astrophysics Data System (ADS)
Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe
2018-06-01
In this study, we present a method for improving the quality of automatic single fallen tree stem segmentation in ALS data by applying a specialized constrained conditional random field (CRF). The entire processing pipeline is composed of two steps. First, short stem segments of equal length are detected and a subset of them is selected for further processing, while in the second step the chosen segments are merged to form entire trees. The first step is accomplished using the specialized CRF defined on the space of segment labelings, capable of finding segment candidates which are easier to merge subsequently. To achieve this, the CRF considers not only the features of every candidate individually, but incorporates pairwise spatial interactions between adjacent segments into the model. In particular, pairwise interactions include a collinearity/angular deviation probability which is learned from training data as well as the ratio of spatial overlap, whereas unary potentials encode a learned probabilistic model of the laser point distribution around each segment. Each of these components enters the CRF energy with its own balance factor. To process previously unseen data, we first calculate the subset of segments for merging on a grid of balance factors by minimizing the CRF energy. Then, we perform the merging and rank the balance configurations according to the quality of their resulting merged trees, obtained from a learned tree appearance model. The final result is derived from the top-ranked configuration. We tested our approach on 5 plots from the Bavarian Forest National Park using reference data acquired in a field inventory. Compared to our previous segment selection method without pairwise interactions, an increase in detection correctness and completeness of up to 7 and 9 percentage points, respectively, was observed.
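The energy formulation with balance factors can be illustrated with a toy CRF-style objective: weighted unary terms plus weighted pairwise terms over adjacent segment pairs, minimized exhaustively over binary labelings for several pairwise balance factors. The costs are hypothetical, and the actual pipeline learns its potentials from training data and uses proper CRF inference rather than brute force.

```python
import itertools

def crf_energy(labels, unary, pairs, pair_cost, w_unary=1.0, w_pair=1.0):
    """Energy of a binary labeling: weighted unary terms plus weighted pairwise
    penalties over adjacent segment pairs (w_unary and w_pair are balance factors)."""
    e = w_unary * sum(unary[i][labels[i]] for i in range(len(labels)))
    e += w_pair * sum(pair_cost[k] for k, (i, j) in enumerate(pairs) if labels[i] != labels[j])
    return e

def minimize_labels(unary, pairs, pair_cost, w_unary, w_pair):
    """Exhaustive minimization; feasible only for a handful of segments."""
    n = len(unary)
    return min(itertools.product((0, 1), repeat=n),
               key=lambda lab: crf_energy(lab, unary, pairs, pair_cost, w_unary, w_pair))

# Hypothetical unary costs for 5 candidate segments (cost of label 0 vs. label 1).
unary = [(0.2, 0.9), (0.3, 0.6), (0.8, 0.1), (0.4, 0.5), (0.7, 0.2)]
pairs = [(0, 1), (1, 2), (2, 3), (3, 4)]           # spatially adjacent segments
pair_cost = [0.5, 0.3, 0.4, 0.2]                   # e.g., collinearity-based penalties

# Grid over the pairwise balance factor, analogous to the ranking step described above.
for w_pair in (0.0, 0.5, 1.0):
    print(w_pair, minimize_labels(unary, pairs, pair_cost, 1.0, w_pair))
```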
Fast visual prediction and slow optimization of preferred walking speed.
O'Connor, Shawn M; Donelan, J Maxwell
2012-05-01
People prefer walking speeds that minimize energetic cost. This may be accomplished by directly sensing metabolic rate and adapting gait to minimize it, but only slowly due to the compounded effects of sensing delays and iterative convergence. Visual and other sensory information is available more rapidly and could help predict which gait changes reduce energetic cost, but only approximately because it relies on prior experience and an indirect means to achieve economy. We used virtual reality to manipulate visually presented speed while 10 healthy subjects freely walked on a self-paced treadmill to test whether the nervous system beneficially combines these two mechanisms. Rather than manipulating the speed of visual flow directly, we coupled it to the walking speed selected by the subject and then manipulated the ratio between these two speeds. We then quantified the dynamics of walking speed adjustments in response to perturbations of the visual speed. For step changes in visual speed, subjects responded with rapid speed adjustments (lasting <2 s) in a direction opposite to the perturbation and consistent with returning the visually presented speed toward their preferred walking speed: when visual speed was suddenly twice (one-half) the walking speed, subjects decreased (increased) their speed. Subjects did not maintain the new speed but instead gradually returned toward the speed preferred before the perturbation (lasting >300 s). The timing and direction of these responses strongly indicate that a rapid predictive process informed by visual feedback helps select preferred speed, perhaps to complement a slower optimization process that seeks to minimize energetic cost.
Molgenis-impute: imputation pipeline in a box.
Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A
2015-08-19
Genotype imputation is an important procedure in current genomic analyses such as genome-wide association studies, meta-analyses and fine mapping. Although high quality tools are available that perform the steps of this process, considerable effort and expertise are required to set up and run a best practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the set-up and running of all the steps of the imputation process. These steps include genome build liftover (liftovering), genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments like local/cloud servers, clusters and grids. All the required tools, data and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute at different locations and imputed over 30,000 samples so far using the 1,000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command line interface, without sacrificing the flexibility to adapt or limiting the options of the underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands. It is built on the MOLGENIS compute workflow framework to enable customization with additional computational steps, or it can be included in other bioinformatics pipelines. It is available as open source from: https://github.com/molgenis/molgenis-imputation.
SU-E-T-420: Failure Effects Mode Analysis for Trigeminal Neuralgia Frameless Radiosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howe, J
2015-06-15
Purpose: Functional radiosurgery has been used successfully in the treatment of trigeminal neuralgia but presents significant challenges to ensuring that the high prescription dose is delivered accurately. A review of existing practice should help direct the focus of quality improvement for this treatment regime. Method: Failure modes and effects analysis was used to identify the processes in preparing radiosurgery treatment for TN. The map was developed by a multidisciplinary team including a neurosurgeon, radiation oncologist, physicist and therapist. Potential failure modes were identified for each step in the process map, as well as potential causes and end effects. A risk priority number was assigned to each cause. Results: The process map identified 66 individual steps (see attached supporting document). Corrective actions were developed for areas with a high risk priority number. Wrong-site treatment is at higher risk for trigeminal neuralgia treatment due to the lack of site-specific pathologic imaging on MR and CT; additional site-specific checks were implemented to minimize the risk of wrong-site treatment. Failed collision checks resulted from an insufficient collision model in the treatment planning system, and a plan template was developed to address this problem. Conclusion: Failure modes and effects analysis is an effective tool for developing quality improvement in high-risk radiotherapy procedures such as functional radiosurgery.
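For context, the risk priority number used in failure modes and effects analysis is conventionally the product of severity, occurrence, and detectability scores; the sketch below ranks two hypothetical failure modes suggested by the scenario above (the scores are illustrative, not the study's values).

```python
def risk_priority_number(severity, occurrence, detectability):
    """Standard FMEA scoring: RPN = severity x occurrence x detectability (each typically 1-10)."""
    return severity * occurrence * detectability

# Hypothetical scores for two failure modes drawn from the process map.
failure_modes = {
    "wrong-site treatment (no site-specific pathologic imaging)": (10, 3, 4),
    "failed collision check in treatment planning": (6, 4, 2),
}
for name, scores in sorted(failure_modes.items(),
                           key=lambda kv: -risk_priority_number(*kv[1])):
    print(f"{name}: RPN = {risk_priority_number(*scores)}")
```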
An optimal adder-based hardware architecture for the DCT/SA-DCT
NASA Astrophysics Data System (ADS)
Kinane, Andrew; Muresan, Valentin; O'Connor, Noel
2005-07-01
The explosive growth of the mobile multimedia industry has accentuated the need for efficient VLSI implementations of the associated computationally demanding signal processing algorithms. This need becomes greater as end-users demand increasingly enhanced features and more advanced underpinning video analysis. One such feature is object-based video processing as supported by the MPEG-4 core profile, which allows content-based interactivity. MPEG-4 has many computationally demanding underlying algorithms, an example of which is the Shape Adaptive Discrete Cosine Transform (SA-DCT). The dynamic nature of the SA-DCT processing steps poses significant VLSI implementation challenges, and many of the previously proposed approaches use area- and power-consumptive multipliers. Most also ignore the subtleties of the packing steps and the manipulation of the shape information. We propose a new multiplier-less serial datapath based solely on adders and multiplexers to improve area and power. The adder cost is minimised by employing resource re-use methods. The number of (physical) adders used has been derived using a common sub-expression elimination algorithm. Additional energy efficiency is factored into the design by employing guarded evaluation and local clock gating. Our design implements the SA-DCT packing with minimal switching, using efficient addressing logic with a transpose memory RAM. The entire design has been synthesized using TSMC 0.09 µm TCBN90LP technology, yielding a gate count of 12028 for the datapath and its control logic.
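The multiplier-less principle behind such a datapath can be illustrated with constant multiplication by shifts and adds: each set bit of a fixed-point coefficient contributes one shifted copy of the input, so only adders and wiring are required, and common sub-expression elimination then reduces the adder count further. The coefficient and input values below are hypothetical, not the SA-DCT coefficients of the actual design.

```python
def shift_add_multiply(x, coefficient_bits):
    """Multiplier-less multiplication by a fixed-point constant: the product is the sum of
    shifted copies of x, one per set bit of the coefficient (adders and wiring only)."""
    return sum(x << position
               for position, bit in enumerate(reversed(coefficient_bits)) if bit == "1")

# Hypothetical 8-bit fixed-point coefficient 0.7071 ~= 181/256 = 0b10110101.
x = 57
product = shift_add_multiply(x, "10110101")
print(product, product / 256.0, 57 * 0.7071)   # shifted-sum result vs. floating-point reference
```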
Kim, Daehwan
2018-02-01
A pretreatment of lignocellulosic biomass to produce biofuels, polymers, and other chemicals plays a vital role in the biochemical conversion process by disrupting the closely associated structures of the cellulose-hemicellulose-lignin molecules. Various pretreatment steps alter the chemical/physical structure of lignocellulosic materials by solubilizing hemicellulose and/or lignin, decreasing the particle size of the substrate and the crystalline portions of cellulose, and increasing the surface area of biomass. These modifications enhance the hydrolysis of cellulose by increasing the accessibility of acids or enzymes to the surface of cellulose. However, lignocellulose-derived byproducts, which can inhibit and/or deactivate enzyme and microbial biocatalysts, are formed, including furan derivatives, lignin-derived phenolics, and carboxylic acids. The generation of these inhibitory compounds during pretreatment can negatively affect subsequent steps in sugar-platform processes. A number of physico-chemical pretreatment methods such as steam explosion, ammonia fiber explosion (AFEX), and liquid hot water (LHW) have been suggested and developed for minimizing the formation of inhibitory compounds and alleviating their effects on ethanol production processes. This work reviews the physico-chemical pretreatment methods used for various biomass sources, the formation of lignocellulose-derived inhibitors, and their contributions to enzymatic hydrolysis and microbial activities. Furthermore, we provide an overview of current strategies to alleviate inhibitory compounds present in the hydrolysates or slurries.
Andersen, Natalia D.; Srinivas, Shruthi; Piñero, Gonzalo; Monje, Paula V.
2016-01-01
We herein developed a protocol for the rapid procurement of adult nerve-derived Schwann cells (SCs) that was optimized to implement an immediate enzymatic dissociation of fresh nerve tissue while maintaining high cell viability, improving yields and minimizing fibroblast and myelin contamination. This protocol introduces: (1) an efficient method for enzymatic cell release immediately after removal of the epineurium and extensive teasing of the nerve fibers; (2) an adaptable drop-plating method for selective cell attachment, removal of myelin debris, and expansion of the initial SC population in chemically defined medium; (3) a magnetic-activated cell sorting purification protocol for rapid and effective fibroblast elimination; and (4) an optional step of cryopreservation for the storage of the excess of cells. Highly proliferative SC cultures devoid of myelin and fibroblast growth were obtained within three days of nerve processing. Characterization of the initial, expanded, and cryopreserved cell products confirmed maintenance of SC identity, viability and growth rates throughout the process. Most importantly, SCs retained their sensitivity to mitogens and potential for differentiation even after cryopreservation. To conclude, this easy-to-implement and clinically relevant protocol allows for the preparation of expandable homogeneous SC cultures while minimizing time, manipulation of the cells, and exposure to culture variables. PMID:27549422
A framework of knowledge creation processes in participatory simulation of hospital work systems.
Andersen, Simone Nyholm; Broberg, Ole
2017-04-01
Participatory simulation (PS) is a method to involve workers in simulating and designing their own future work system. Existing PS studies have focused on analysing the outcome, and minimal attention has been devoted to the process of creating this outcome. In order to study this process, we suggest applying a knowledge creation perspective. The aim of this study was to develop a framework describing the process of how ergonomics knowledge is created in PS. Video recordings from three projects applying PS of hospital work systems constituted the foundation of process mining analysis. The analysis resulted in a framework revealing the sources of ergonomics knowledge creation as sequential relationships between the activities of simulation participants sharing work experiences; experimenting with scenarios; and reflecting on ergonomics consequences. We argue that this framework reveals the hidden steps of PS that are essential when planning and facilitating PS that aims at designing work systems. Practitioner Summary: When facilitating participatory simulation (PS) in work system design, achieving an understanding of the PS process is essential. By applying a knowledge creation perspective and process mining, we investigated the knowledge-creating activities constituting the PS process. The analysis resulted in a framework of the knowledge-creating process in PS.
Two-step impression/injection, an alternative putty/wash impression technique: case report.
Caputi, S; Murmura, G; Sinjari, B; Varvara, G
2012-01-01
We here describe a new technique for making a definitive impression that we refer to as the two-step impression/injection technique. This technique initially follows the classical one-step putty/light-body impression technique with the polymerization of the putty and the light-body compound. This is then followed by the second step: injection of extra-light-body compound into the preparation through a hole in the metal stock tray. The aim of this additional step is to control the wash bulk and minimize the changes that can produce unfavorable impression results. This new two-step impression/injection technique allows displacement of soft tissues, such as the tongue, during the first seating of the putty and wash materials, while in the second step, the extra-light-body compound records all of the finer details without being compressed.
The effect of a novel minimally invasive strategy for infected necrotizing pancreatitis.
Tong, Zhihui; Shen, Xiao; Ke, Lu; Li, Gang; Zhou, Jing; Pan, Yiyuan; Li, Baiqiang; Yang, Dongliang; Li, Weiqin; Li, Jieshou
2017-11-01
The step-up approach consisting of multiple minimally invasive techniques has gradually become the mainstream for managing infected pancreatic necrosis (IPN). In the present study, we aimed to compare the safety and efficacy of a novel four-step approach and the conventional approach in managing IPN. According to the treatment strategy, consecutive patients fulfilling the inclusion criteria were assigned to two time intervals to conduct a before-and-after comparison: the conventional group (2010-2011) and the novel four-step group (2012-2013). The conventional approach was essentially open necrosectomy for any patient who failed percutaneous drainage of infected necrosis, while the novel approach consisted of four different steps in sequence: percutaneous drainage, negative pressure irrigation, endoscopic necrosectomy and open necrosectomy. The primary endpoint was major complications (new-onset organ failure, sepsis or local complications, etc.). Secondary endpoints included mortality during hospitalization, need for emergency surgery, duration of organ failure and sepsis, etc. Of the 229 recruited patients, 92 were treated with the conventional approach and the remaining 137 were managed with the novel four-step approach. New-onset major complications occurred in 72 patients (78.3%) in the conventional group and 75 patients (54.7%) in the four-step group (p < 0.001). For other important endpoints, although there was no statistical difference in mortality between the two groups (p = 0.403), significantly fewer patients in the four-step group required emergency surgery compared with the conventional group [14.6% (20/137) vs. 45.6% (42/92), p < 0.001]. In addition, stratified analysis revealed that the four-step approach group presented a significantly lower incidence of new-onset organ failure and other major complications in patients with the most severe type of AP. Compared with the conventional approach, the novel four-step approach significantly reduced the rate of new-onset major complications and the requirement for emergency operations in treating IPN, especially in those with the most severe type of acute pancreatitis.
NASA Astrophysics Data System (ADS)
Li, Mengmeng; Bijker, Wietske; Stein, Alfred
2015-04-01
Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows which in turn involves a two-step procedure. The first step is a preliminary image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude first that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects represented by a fuzzy landscape is important for building extraction.
Ten-Step Minimally Invasive Spine Lumbar Decompression and Dural Repair Through Tubular Retractors.
Boukebir, Mohamed Abdelatif; Berlin, Connor David; Navarro-Ramirez, Rodrigo; Heiland, Tim; Schöller, Karsten; Rawanduzy, Cameron; Kirnaz, Sertaç; Jada, Ajit; Härtl, Roger
2017-04-01
Minimally invasive spine (MIS) surgery utilizing tubular retractors has become an increasingly popular approach for decompression in the lumbar spine. However, a better understanding of appropriate indications, efficacious surgical techniques, limitations, and complication management is required to effectively teach the procedure and to facilitate the learning curve. To describe our experience and recommendations regarding tubular surgery for lumbar disc herniations, foraminal compression with unilateral radiculopathy, lumbar spinal stenosis, synovial cysts, and dural repair. We reviewed our experience between 2008 and 2014 to develop a step-by-step description of the surgical techniques and complication management, including dural repair through tubes, for the 4 lumbar pathologies of highest frequency. We provide additional supplementary videos for dural tear repair, laminotomy for bilateral decompression, and synovial cyst resection. Our overview and complementary materials document the key technical details to maximize the success of the 4 MIS surgical techniques. The review of our experience in 331 patients reveals technical feasibility as well as satisfying clinical results, with no postoperative complications associated with cerebrospinal fluid leaks, 1 infection, and 17 instances (5.1%) of delayed fusion. MIS surgery through tubular retractors is a safe and effective alternative to traditional open or microsurgical techniques for the treatment of lumbar degenerative disease. Adherence to strict microsurgical techniques will allow the surgeon to effectively address bilateral pathology while preserving stability and minimizing complications. Copyright © 2017 by the Congress of Neurological Surgeons
Li, Jinjian; Dridi, Mahjoub; El-Moudni, Abdellah
2016-01-01
The problem of simultaneously reducing traffic delays and decreasing fuel consumption in a network of intersections without traffic lights is solved by a cooperative traffic control algorithm, where the cooperation is based on Vehicle-to-Infrastructure (V2I) communication. The solution contains two main steps. The first step concerns the itinerary: which intersections each vehicle passes through on the way from its starting point to its destination. Based on the principle of minimal travel distance, each vehicle chooses its itinerary dynamically according to the traffic loads at the adjacent intersections. The second step concerns the following proposed cooperative procedures, which allow vehicles to pass through each intersection rapidly and economically: on one hand, according to the real-time information sent by vehicles via V2I at the edge of the communication zone, each intersection applies Dynamic Programming (DP) to cooperatively optimize the vehicle passing sequence with minimal traffic delays so that the vehicles may rapidly pass the intersection under the relevant safety constraints; on the other hand, after receiving this sequence, each vehicle finds the optimal speed profile with minimal fuel consumption by an exhaustive search. The simulation results reveal that the proposed algorithm can significantly reduce both travel delays and fuel consumption compared with other studies under different traffic volumes. PMID:27999333
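The sequencing idea in the second step can be illustrated with a toy search for the passing order that minimizes total delay under a fixed safety headway. The brute-force enumeration below stands in for the paper's dynamic programming formulation, and the arrival times and headway are hypothetical.

```python
import itertools

def total_delay(order, arrival_times, headway=2.0):
    """Total delay when vehicles cross in the given order with a fixed safety headway."""
    delay, last_cross = 0.0, float("-inf")
    for v in order:
        cross = max(arrival_times[v], last_cross + headway)
        delay += cross - arrival_times[v]
        last_cross = cross
    return delay

def best_sequence(arrival_times):
    """Exhaustive search over passing sequences; a dynamic programming formulation,
    as used in the paper, scales much better than this illustration."""
    vehicles = range(len(arrival_times))
    return min(itertools.permutations(vehicles),
               key=lambda order: total_delay(order, arrival_times))

# Hypothetical arrival times (seconds) at the edge of the communication zone.
arrivals = [0.0, 0.5, 3.0, 3.2, 4.0]
order = best_sequence(arrivals)
print(order, round(total_delay(order, arrivals), 2))
```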
Heo, Yun Seok; Lee, Ho-Joon; Hassell, Bryan A; Irimia, Daniel; Toth, Thomas L; Elmoazzen, Heidi; Toner, Mehmet
2011-10-21
Oocyte cryopreservation has become an essential tool in the treatment of infertility by preserving oocytes for women undergoing chemotherapy. However, despite recent advances, pregnancy rates from all cryopreserved oocytes remain low. The inevitable use of the cryoprotectants (CPAs) during preservation affects the viability of the preserved oocytes and pregnancy rates either through CPA toxicity or osmotic injury. Current protocols attempt to reduce CPA toxicity by minimizing CPA concentrations, or by minimizing the volume changes via the step-wise addition of CPAs to the cells. Although the step-wise addition decreases osmotic shock to oocytes, it unfortunately increases toxic injuries due to the long exposure times to CPAs. To address limitations of current protocols and to rationally design protocols that minimize the exposure to CPAs, we developed a microfluidic device for the quantitative measurements of oocyte volume during various CPA loading protocols. We spatially secured a single oocyte on the microfluidic device, created precisely controlled continuous CPA profiles (step-wise, linear and complex) for the addition of CPAs to the oocyte and measured the oocyte volumetric response to each profile. With both linear and complex profiles, we were able to load 1.5 M propanediol to oocytes in less than 15 min and with a volumetric change of less than 10%. Thus, we believe this single oocyte analysis technology will eventually help future advances in assisted reproductive technologies and fertility preservation.
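As a rough illustration of why the loading profile matters, the toy model below compares the normalized volume excursion of a cell under step-wise versus linear external CPA profiles reaching 1.5 M in 15 min. The shrink-swell dynamics, rate constants, and concentrations are simplified assumptions for illustration only; they are not the membrane-transport model or device parameters used in the study.

```python
import numpy as np

def volume_response(c_cpa_ext, dt=0.5, k_water=0.002, k_cpa=0.005, iso=0.3):
    """Toy shrink-swell model (normalized volume): water follows the osmolality
    difference, CPA permeates toward the external concentration. Illustration only."""
    v, n_cpa = 1.0, 0.0                               # normalized volume, internal CPA amount
    volumes = []
    for c_ext in c_cpa_ext:
        osm_in = (iso + n_cpa) / v                    # internal osmolality (impermeant + CPA)
        osm_out = iso + c_ext                         # external osmolality
        v += -k_water * (osm_out - osm_in) * dt       # osmotic water flux
        n_cpa += k_cpa * (c_ext - n_cpa / v) * dt     # CPA permeation
        volumes.append(v)
    return np.array(volumes)

t = np.arange(0, 900, 0.5)                            # 15 min of loading, 0.5 s steps
linear = 1.5 * t / t[-1]                              # linear ramp to 1.5 M
stepwise = 0.5 * np.minimum(t // 300 + 1, 3)          # three 5-min steps of 0.5 M each
for name, profile in (("linear", linear), ("step-wise", stepwise)):
    v = volume_response(profile)
    print(f"{name}: maximum shrinkage {100 * (1 - v.min()):.1f}% of initial volume")
```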
Footwear characteristics are related to running mechanics in runners with patellofemoral pain.
Esculier, Jean-Francois; Dubois, Blaise; Bouyer, Laurent J; McFadyen, Bradford J; Roy, Jean-Sébastien
2017-05-01
Running footwear is known to influence step rate, foot inclination at foot strike, average vertical loading rate (VLR) and peak patellofemoral joint (PFJ) force. However, the association between the level of minimalism of running shoes and running mechanics, especially with regard to these relevant variables for runners with patellofemoral pain (PFP), has yet to be investigated. The objective of this study was to explore the relationship between the level of minimalism of running shoes and habitual running kinematics and kinetics in runners with PFP. Running shoes of 69 runners with PFP (46 females, 23 males, 30.7±6.4 years) were evaluated using the Minimalist Index (MI). Kinematic and kinetic data were collected during running on an instrumented treadmill. Principal component and correlation analyses were performed between the MI and its subscales and step rate, foot inclination at foot strike, average VLR, peak PFJ force and peak Achilles tendon force. Higher MI scores were moderately correlated with lower foot inclination (r=-0.410, P<0.001) and lower peak PFJ force (r=-0.412, P<0.001). Moderate correlations also showed that lower shoe mass is indicative of greater step rate (ρ=0.531, P<0.001) and lower peak PFJ force (ρ=-0.481, P<0.001). Greater shoe flexibility was moderately associated with lower foot inclination (ρ=-0.447, P<0.001). Results suggest that greater levels of minimalism are associated with lower inclination angle and lower peak PFJ force in runners with PFP. Thus, this population may potentially benefit from changes in running mechanics associated with the use of shoes with a higher level of minimalism. Copyright © 2017 Elsevier B.V. All rights reserved.
Pain: A content review of undergraduate pre-registration nurse education in the United Kingdom.
Mackintosh-Franklin, Carolyn
2017-01-01
Pain is a global health issue, with poor assessment and management of pain associated with serious disability and detrimental socio-economic consequences. Pain is also a closely associated symptom of the three major causes of death in the developed world: Coronary Heart Disease, Stroke and Cancer. There is a significant body of work which indicates that current nursing practice has failed to address pain as a priority, resulting in poor practice and unnecessary patient suffering. Additionally, nurse education appears to lack focus or emphasis on the importance of pain assessment and its management. A three-step online search process was carried out across 71 Higher Education Institutes (HEIs) in the United Kingdom (UK) which deliver approved undergraduate nurse education programmes: step one to find detailed programme documentation, step two to find references to pain in the detailed documents, and step three to find references to pain in nursing curricula across all UK HEI websites, using Google and each HEI's site-specific search tool. The word pain featured minimally in programme documents, with 9 (13%) documents making reference to it; this includes 3 occurrences which were not relevant to the programme content. The word pain also featured minimally in the content of programmes/modules in the website search, with no references at all to pain in undergraduate pre-registration nursing programmes. Those references found during the website search were for continuing professional development (CPD) or Masters-level programmes. In spite of the global importance of pain as a major health issue, both in its own right and as a significant symptom of leading causes of death and illness, pain appears to be a neglected area within the undergraduate nursing curriculum. Evidence suggests that improving nurse education in this area can have positive impacts on clinical practice; however, without educational input the current levels of poor practice are unlikely to improve and unnecessary patient suffering will continue. Undergraduate nurse education in the UK needs to review its current approach to content and ensure that pain is appropriately and prominently featured within pre-registration nurse education. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wong, Jeremy D; O'Connor, Shawn M; Selinger, Jessica C; Donelan, J Maxwell
2017-08-01
People can adapt their gait to minimize energetic cost, indicating that walking's neural control has access to ongoing measurements of the body's energy use. In this study we tested the hypothesis that an important source of energetic cost measurements arises from blood gas receptors that are sensitive to O2 and CO2 concentrations. These receptors are known to play a role in regulating other physiological processes related to energy consumption, such as ventilation rate. Given the role of O2 and CO2 in oxidative metabolism, sensing their levels can provide an accurate estimate of the body's total energy use. To test our hypothesis, we simulated an added energetic cost for blood gas receptors that depended on a subject's step frequency and determined if subjects changed their behavior in response to this simulated cost. These energetic costs were simulated by controlling inspired gas concentrations to decrease the circulating levels of O2 and increase CO2. We found this blood gas control to be effective at shifting the step frequency that minimized the ventilation rate and perceived exertion away from the normally preferred frequency, indicating that these receptors provide the nervous system with strong physiological and psychological signals. However, rather than adapt their preferred step frequency toward these lower simulated costs, subjects persevered at their normally preferred frequency even after extensive experience with the new simulated costs. These results suggest that blood gas receptors play a negligible role in sensing energetic cost for the purpose of optimizing gait. NEW & NOTEWORTHY Human gait adaptation implies that the nervous system senses energetic cost, yet this signal is unknown. We tested the hypothesis that the blood gas receptors sense cost for gait optimization by controlling blood O2 and CO2 with step frequency as people walked. At the simulated energetic minimum, ventilation and perceived exertion were lowest, yet subjects preferred walking at their original frequency. This suggests that blood gas receptors are not critical for sensing cost during gait. Copyright © 2017 the American Physiological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milner, Phillip J.; Martell, Jeffrey D.; Siegelman, Rebecca L.
Alkyldiamine-functionalized variants of the metal–organic framework Mg2(dobpdc) (dobpdc4- = 4,4'-dioxidobiphenyl-3,3'-dicarboxylate) are promising for CO2 capture applications owing to their unique step-shaped CO2 adsorption profiles resulting from the cooperative formation of ammonium carbamate chains. Primary, secondary (1°,2°) alkylethylenediamine-appended variants are of particular interest because of their low CO2 step pressures (≤1 mbar at 40 °C), minimal adsorption/desorption hysteresis, and high thermal stability. Herein, we demonstrate that further increasing the size of the alkyl group on the secondary amine affords enhanced stability against diamine volatilization, but also leads to surprising two-step CO2 adsorption/desorption profiles. This two-step behavior likely results from steric interactions between ammonium carbamate chains induced by the asymmetrical hexagonal pores of Mg2(dobpdc) and leads to decreased CO2 working capacities and increased water co-adsorption under humid conditions. To minimize these unfavorable steric interactions, we targeted diamine-appended variants of the isoreticularly expanded framework Mg2(dotpdc) (dotpdc4- = 4,4''-dioxido-[1,1':4',1''-terphenyl]-3,3''-dicarboxylate), reported here for the first time, and the previously reported isomeric framework Mg-IRMOF-74-II or Mg2(pc-dobpdc) (pc-dobpdc4- = 3,3'-dioxidobiphenyl-4,4'-dicarboxylate, pc = para-carboxylate), which, in contrast to Mg2(dobpdc), possesses uniformly hexagonal pores. By minimizing the steric interactions between ammonium carbamate chains, these frameworks enable a single CO2 adsorption/desorption step in all cases, as well as decreased water co-adsorption and increased stability to diamine loss. Functionalization of Mg2(pc-dobpdc) with large diamines such as N-(n-heptyl)ethylenediamine results in optimal adsorption behavior, highlighting the advantage of tuning both the pore shape and the diamine size for the development of new adsorbents for carbon capture applications.
Mambulu-Chikankheni, Faith Nankasa; Eyles, John; Eboreime, Ejemai Amaize; Ditlopo, Prudence
2017-10-18
Focusing on healthcare referral processes for children with severe acute malnutrition (SAM) in South Africa, this paper discusses the comprehensiveness of documents (global and national) that guide the country's SAM healthcare. This research is relevant because South African studies on SAM mostly examine the implementation of WHO guidelines in hospitals, making their technical relevance to the country's lower level and referral healthcare system under-explored. To add to both literature and methods for studying SAM healthcare, we critically appraised four child healthcare guidelines (global and national) and conducted complementary expert interviews (n = 5). Combining both methods enabled us to examine the comprehensiveness of the documents as related to guiding SAM healthcare within the country's referral system as well as the credibility (rigour and stakeholder representation) of the guideline documents' development process. None of the guidelines appraised covered all steps of SAM referrals; however, each addressed certain steps thoroughly, apart from transit care. Our study also revealed that national documents were mostly modelled after WHO guidelines but were not explicitly adapted to local context. Furthermore, we found most guidelines' formulation processes to be unclear and stakeholder involvement in the process to be minimal. In adapting guidelines for management of SAM in South Africa, it is important that local context applicability is taken into consideration. In doing this, wider stakeholder involvement is essential; this is important because factors that affect SAM management go beyond in-hospital care. Community, civil society, medical and administrative involvement during guideline formulation processes will enhance acceptability and adherence to the guidelines.
Design of a novel automated methanol feed system for pilot-scale fermentation of Pichia pastoris.
Hamaker, Kent H; Johnson, Daniel C; Bellucci, Joseph J; Apgar, Kristie R; Soslow, Sherry; Gercke, John C; Menzo, Darrin J; Ton, Christopher
2011-01-01
Large-scale fermentation of Pichia pastoris requires a large volume of methanol feed during the induction phase. However, a large volume of methanol feed is difficult to use in the processing suite because of the inconvenience of constant monitoring, manual manipulation steps, and fire and explosion hazards. To optimize and improve safety of the methanol feed process, a novel automated methanol feed system has been designed and implemented for industrial fermentation of P. pastoris. Details of the design of the methanol feed system are described. The main goals of the design were to automate the methanol feed process and to minimize the hazardous risks associated with storing and handling large quantities of methanol in the processing area. The methanol feed system is composed of two main components: a bulk feed (BF) system and up to three portable process feed (PF) systems. The BF system automatically delivers methanol from a central location to the portable PF system. The PF system provides precise flow control of linear, step, or exponential feed of methanol to the fermenter. Pilot-scale fermentations with linear and exponential methanol feeds were conducted using two Mut(+) (methanol utilization plus) strains, one expressing a recombinant therapeutic protein and the other a monoclonal antibody. Results show that the methanol feed system is accurate, safe, and efficient. The feed rates for both linear and exponential feed methods were within ± 5% of the set points, and the total amount of methanol fed was within 1% of the targeted volume. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
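To make the linear and exponential feed modes described above concrete, the following Python sketch computes candidate feed-rate profiles from standard fed-batch relations. The function names, the yield and maintenance parameters, and the form F(t) = F0*exp(mu_set*t) are illustrative assumptions, not the control law implemented in the BF/PF system.

```python
import numpy as np

def exponential_feed_profile(mu_set, x0_v0, y_xs, m_s, s_feed, t_hours, dt=0.25):
    """Exponential methanol feed F(t) = F0 * exp(mu_set * t).

    mu_set : target specific growth rate (1/h)           -- assumed value
    x0_v0  : biomass at induction start, g DCW (X0 * V0) -- assumed value
    y_xs   : biomass yield on methanol (g DCW / g MeOH)
    m_s    : maintenance coefficient (g MeOH / g DCW / h)
    s_feed : methanol concentration in the feed (g/L)
    """
    f0 = (mu_set / y_xs + m_s) * x0_v0 / s_feed      # initial feed rate, L/h
    t = np.arange(0.0, t_hours + dt, dt)
    return t, f0 * np.exp(mu_set * t)

def linear_feed_profile(f_start, f_end, t_hours, dt=0.25):
    """Linear ramp from f_start to f_end (L/h) over t_hours."""
    t = np.arange(0.0, t_hours + dt, dt)
    return t, np.linspace(f_start, f_end, t.size)
```

A profile generated this way could be loaded into the portable PF controller as a table of set points, with the reported ±5% accuracy describing how closely the pump tracks each set point.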
Modelling interactions between mitigation, adaptation and sustainable development
NASA Astrophysics Data System (ADS)
Reusser, D. E.; Siabatto, F. A. P.; Garcia Cantu Ros, A.; Pape, C.; Lissner, T.; Kropp, J. P.
2012-04-01
Managing the interdependence of climate mitigation, adaptation and sustainable development requires a good understanding of the dominant socio-ecological processes that have determined the pathways in the past. Key variables include water and food availability, which depend on climate and overall ecosystem services, as well as energy supply and social, political and economic conditions. We present our initial steps to build a system dynamics model of nations that represents a minimal set of relevant variables of socio-ecological development. The ultimate goal of the modelling exercise is to derive possible future scenarios and test them for their compatibility with sustainability boundaries. Where the dynamics go beyond sustainability boundaries, intervention points in the dynamics can be sought.
USHPRR FUEL FABRICATION PILLAR: FABRICATION STATUS, PROCESS OPTIMIZATIONS, AND FUTURE PLANS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wight, Jared M.; Joshi, Vineet V.; Lavender, Curt A.
The Fuel Fabrication (FF) Pillar, a project within the U.S. High Performance Research Reactor Conversion program of the National Nuclear Security Administration’s Office of Material Management and Minimization, is tasked with the scale-up and commercialization of high-density monolithic U-Mo fuel for the conversion of appropriate research reactors to the use of low-enriched fuel. The FF Pillar has made significant steps to demonstrate and optimize the baseline co-rolling process using commercial-scale equipment at both the Y-12 National Security Complex (Y-12) and BWX Technologies (BWXT). These demonstrations include the fabrication of the next irradiation experiment, Mini-Plate 1 (MP-1), and casting optimizations at Y-12. The FF Pillar uses a detailed process flow diagram to identify potential gaps in processing knowledge or demonstration, which helps direct the strategic research agenda of the FF Pillar. This paper describes the significant progress made toward understanding the fuel characteristics, and models developed to make informed decisions, increase process yield, and decrease lifecycle waste and costs.
Semismooth Newton method for gradient constrained minimization problem
NASA Astrophysics Data System (ADS)
Anyyeva, Serbiniyaz; Kunisch, Karl
2012-08-01
In this paper we treat a gradient-constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. To obtain a numerical approximation of the solution, we developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. A regularization was applied in order to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.
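The key property exploited here is that the pointwise maximum function, while not classically differentiable, is Newton differentiable, so a generalized Jacobian can be used in the Newton iteration. The toy Python sketch below applies this idea componentwise to F(x) = x + max(0, x) - b = 0; it only illustrates the generalized derivative and the resulting Newton step, not the paper's function-space algorithm or its finite element discretization.

```python
import numpy as np

def semismooth_newton(b, tol=1e-10, max_iter=50):
    """Solve F(x) = x + max(0, x) - b = 0 componentwise with a semismooth
    Newton method, using the generalized derivative G = 1 + [x > 0]."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        F = x + np.maximum(0.0, x) - b
        if np.linalg.norm(F, np.inf) < tol:
            break
        G = 1.0 + (x > 0.0).astype(float)   # generalized Jacobian (diagonal)
        x = x - F / G                        # semismooth Newton step
    return x

print(semismooth_newton(np.array([-2.0, 0.5, 3.0])))  # -> [-2.    0.25  1.5 ]
```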
Optimal design of neural stimulation current waveforms.
Halpern, Mark
2009-01-01
This paper contains results on the design of electrical signals for delivering charge through electrodes to achieve neural stimulation. A generalization of the usual constant current stimulation phase to a stepped current waveform is presented. The electrode current design is then formulated as the calculation of the current step sizes to minimize the peak electrode voltage while delivering a specified charge in a given number of time steps. This design problem can be formulated as a finite linear program, or alternatively by using techniques for discrete-time linear system design.
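A minimal sketch of the finite linear program is given below, assuming a simple series resistance-capacitance electrode model (access resistance R, double-layer capacitance C), which may differ from the electrode model used in the paper. The decision variables are the current step sizes plus the peak voltage to be minimized.

```python
import numpy as np
from scipy.optimize import linprog

def design_stepped_current(Q, n_steps, dt, R, C):
    """Choose current steps i_1..i_n >= 0 delivering total charge Q while
    minimizing the peak electrode voltage, for the assumed series R-C model
        v_k = R*i_k + (1/C) * sum_{j<=k} i_j*dt.
    Decision vector is [i_1, ..., i_n, t] with t the peak voltage."""
    n = n_steps
    c = np.zeros(n + 1); c[-1] = 1.0                 # objective: minimize t
    A_ub = np.zeros((n, n + 1))
    for k in range(n):
        A_ub[k, :k + 1] = dt / C                     # accumulated-charge term
        A_ub[k, k] += R                              # resistive drop
        A_ub[k, -1] = -1.0                           # v_k - t <= 0
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, n + 1)); A_eq[0, :n] = dt    # total delivered charge
    b_eq = np.array([Q])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]                      # step currents, peak voltage

currents, v_peak = design_stepped_current(Q=100e-9, n_steps=8, dt=25e-6,
                                          R=10e3, C=100e-9)
```

For a model of this kind the solver typically returns a decreasing staircase of currents, so that the resistive and accumulated capacitive contributions leave every step at the same peak voltage.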
Integrated system for the destruction of organics by hydrolysis and oxidation with peroxydisulfate
Cooper, John F.; Balazs, G. Bryan; Hsu, Peter; Lewis, Patricia R.; Adamson, Martyn G.
2000-01-01
An integrated system for destruction of organic waste comprises a hydrolysis step at moderate temperature and pressure, followed by direct chemical oxidation using peroxydisulfate. This system can be used to quantitatively destroy volatile or water-insoluble halogenated organic solvents, contaminated soils and sludges, and the organic component of mixed waste. The hydrolysis step results in a substantially single phase of less volatile, more water soluble hydrolysis products, thus enabling the oxidation step to proceed rapidly and with minimal loss of organic substrate in the off-gas.
Advances in primary recovery: centrifugation and membrane technology.
Roush, David J; Lu, Yuefeng
2008-01-01
Significant and continual improvements in upstream processing for biologics have resulted in challenges for downstream processing, both primary recovery and purification. Given the high cell densities achievable in both microbial and mammalian cell culture processes, primary recovery can be a significant bottleneck in both clinical and commercial manufacturing. The combination of increased product titer and low viability leads to significant relative increases in the levels of process impurities such as lipids, intracellular proteins and nucleic acid versus the product. In addition, cell culture media components such as soy and yeast hydrolysates have been widely applied to achieve the cell culture densities needed for higher titers. Many of the process impurities can be negatively charged at harvest pH and can form colloids during the cell culture and harvest processes. The wide size distribution of these particles and the potential for additional particles to be generated by shear forces within a centrifuge may result in insufficient clarification to prevent fouling of subsequent filters. The other residual process impurities can lead to precipitation and increased turbidity during processing and even interference with the performance of the capturing chromatographic step. Primary recovery also poses significant challenges owing to the necessity to execute in an expedient manner to minimize both product degradation and bioburden concerns. Both microfiltration and centrifugation coupled with depth filtration have been employed successfully as primary recovery processing steps. Advances in the design and application of membrane technology for microfiltration and dead-end filtration have contributed to significant improvements in process performance and integration, in some cases allowing for a combination of multiple unit operations in a given step. Although these advances have increased productivity and reliability, the net result is that optimization of primary recovery processes has become substantially more complicated. Ironically, the application of classical chemical engineering approaches to overcome issues in primary recovery and purification (e.g., turbidity and trace impurity removal) are just recently gaining attention. Some of these techniques (e.g., membrane cascades, pretreatment, precipitation, and the use of affinity tags) are now seen almost as disruptive technologies. This paper will review the current and potential future state of research on primary recovery, including relevant papers presented at the 234th American Chemical Society (ACS) National Meeting in Boston.
Documet, Jorge; Le, Anh; Liu, Brent; Chiu, John; Huang, H K
2010-05-01
This paper presents the concept of bridging the gap between diagnostic images and image-assisted surgical treatment through the development of a one-stop multimedia electronic patient record (ePR) system that manages and distributes the real-time multimodality imaging and informatics data that assists the surgeon during all clinical phases of the operation, from planning through Intra-Op to post-care follow-up. We present the concept of this multimedia ePR for surgery by first focusing on image-assisted minimally invasive spinal surgery as a clinical application. Three clinical phases of the minimally invasive spinal surgery workflow, Pre-Op, Intra-Op, and Post-Op, are discussed. The ePR architecture was developed based on the three-phase workflow, which includes the Pre-Op, Intra-Op, and Post-Op modules and four components comprising the input integration unit, the fault-tolerant gateway server, the fault-tolerant ePR server, and the visualization and display. A prototype was built and deployed to a minimally invasive spinal surgery clinical site with user training and support for daily use. A step-by-step approach was introduced to develop a multimedia ePR system for imaging-assisted minimally invasive spinal surgery that includes images, clinical forms, waveforms, and textual data for planning the surgery; two real-time imaging techniques, digital fluoroscopy (DF) and endoscopic video (Endo); and more than half a dozen live vital signs of the patient during surgery. Clinical implementation experiences and challenges are also discussed.
Scanning tunneling microscope with a rotary piezoelectric stepping motor
NASA Astrophysics Data System (ADS)
Yakimov, V. N.
1996-02-01
A compact scanning tunneling microscope (STM) with a novel rotary piezoelectric stepping motor for coarse positioning has been developed. An inertial method of rotating the rotor with a pair of piezoplates is used in the piezomotor. The minimal angular step size was a few arcseconds, with a spindle working torque of up to 1 N·cm. The design of the STM was noticeably simplified by using a piezomotor with such a small step size. A shaft eccentrically attached to the piezomotor spindle made it possible to push and pull back the cylindrical bush carrying the tubular piezoscanner. The linear step of coarse positioning was about 50 nm. The STM resolution in the vertical direction was better than 0.1 nm without external vibration isolation.
Uchendu, Esther E; Leonard, Scott W; Traber, Maret G; Reed, Barbara M
2010-01-01
Oxidative processes involved in cryopreservation protocols may be responsible for the reduced viability of tissues after liquid nitrogen exposure. Antioxidants that counteract these reactions should improve recovery. This study focused on oxidative lipid injury and the effects of exogenous vitamin E (tocopherol, Vit E) and vitamin C (ascorbic acid, Vit C) treatments on regrowth at four critical steps of the plant vitrification solution number 2 (PVS2) vitrification cryopreservation technique: pretreatment, loading, rinsing, and regrowth. Initial experiments showed that Vit E at 11-15 mM significantly increased regrowth (P < 0.001) when added at any of the four steps. There was significantly more malondialdehyde (MDA), a lipid peroxidation product, at each of the steps than in fresh untreated shoot tips. Vit E uptake was assayed at each step and showed significantly more alpha- and gamma-tocopherols in treated shoots than in those without Vit E. Vit E added at each step significantly reduced MDA formation and improved shoot regrowth. Vit C (0.14-0.58 mM) also significantly improved regrowth of shoot tips at each step compared to the controls. Regrowth medium with high iron concentrations and Vit C decreased recovery. However, in iron-free medium, Vit C significantly improved recovery. Treatments with Vit E (11 mM) and Vit C (0.14 mM) combined were not significantly better than Vit C alone. We recommend adding Vit C (0.28 mM) to the pretreatment medium, the loading solution or the rinse solution in the PVS2 vitrification protocol. This is the first report of the application of vitamins for improving cryopreservation of plant tissues by minimizing oxidative damage.
Woo, Nain; Kim, Su-Kang; Sun, Yucheng; Kang, Seong Ho
2018-01-01
Human apolipoprotein E (ApoE) is associated with high cholesterol levels, coronary artery disease, and especially Alzheimer's disease. In this study, we developed an ApoE genotyping and one-step multiplex polymerase chain reaction (PCR)-based capillary electrophoresis (CE) method for the enhanced diagnosis of Alzheimer's. The primer mixture of ApoE genes enabled the performance of direct one-step multiplex PCR from whole blood without DNA purification. The combination of direct ApoE genotyping and one-step multiplex PCR minimized the risk of DNA loss or contamination due to the process of DNA purification. All amplified PCR products with different DNA lengths (112-, 253-, 308-, 444-, and 514-bp DNA) of the ApoE genes were analyzed within 2 min by an extended voltage programming (VP)-based CE under the optimal conditions. The extended VP-based CE method was at least 120-180 times faster than conventional slab gel electrophoresis methods. In particular, all amplified DNA fragments were detected in less than 10 PCR cycles using a laser-induced fluorescence detector. The detection limits of the ApoE genes were 6.4-62.0 pM, which were approximately 100-100,000 times more sensitive than previous Alzheimer's diagnosis methods. In addition, the combined one-step multiplex PCR and extended VP-based CE method was also successfully applied to the analysis of ApoE genotypes in Alzheimer's patients and normal samples and confirmed the distribution probability of allele frequencies. This combination of direct one-step multiplex PCR and an extended VP-based CE method should increase the diagnostic reliability of Alzheimer's with high sensitivity and short analysis time even with direct use of whole blood. Copyright © 2017 Elsevier B.V. All rights reserved.
Code of Federal Regulations, 2011 CFR
2011-01-01
... to exceed three pages, that contains: (1) A brief description of the proposed action, including a... applicable floodplain protection standards; and (5) A brief description of steps to be taken to minimize...
Code of Federal Regulations, 2012 CFR
2012-01-01
... to exceed three pages, that contains: (1) A brief description of the proposed action, including a... applicable floodplain protection standards; and (5) A brief description of steps to be taken to minimize...
Code of Federal Regulations, 2013 CFR
2013-01-01
... to exceed three pages, that contains: (1) A brief description of the proposed action, including a... applicable floodplain protection standards; and (5) A brief description of steps to be taken to minimize...
Boutagy, Nabil E; Rogers, George W; Pyne, Emily S; Ali, Mostafa M; Hulver, Matthew W; Frisard, Madlyn I
2015-10-30
Skeletal muscle mitochondria play a specific role in many disease pathologies. As such, the measurement of oxygen consumption as an indicator of mitochondrial function in this tissue has become more prevalent. Although many technologies and assays exist that measure mitochondrial respiratory pathways in a variety of cells, tissues and species, there is currently a void in the literature with regard to the compilation of these assays using isolated mitochondria from mouse skeletal muscle for use in microplate-based technologies. Importantly, the use of microplate-based respirometric assays is growing among mitochondrial biologists as it allows for high-throughput measurements using minimal quantities of isolated mitochondria. Therefore, a collection of microplate-based respirometric assays was developed that is able to assess mechanistic changes/adaptations in oxygen consumption in a commonly used animal model. The methods presented herein provide step-by-step instructions to perform these assays with an optimal amount of mitochondrial protein and reagents, and high precision as evidenced by the minimal variance across the dynamic range of each assay.
Fisher, E R; Sass, R; Fisher, B
1985-09-01
Investigation of the biologic significance of delay between biopsy and mastectomy was performed upon women with invasive carcinoma of the breast in protocol four of the NSABP. Since the period of delay was two weeks or less in approximately 75 per cent, no comment concerning the possible effects of longer periods can be made. Life table analyses failed to reveal any difference in ten year survival rates between patients undergoing radical mastectomy management by the one and two step procedures. Similarly, no difference in adjusted ten year survival rate was observed between women managed by the two step procedure who did or did not have residual tumor identified in the mastectomy specimen after the first step or biopsy. Importantly, the clinical or pathologic stages, sizes of tumor or histologic grades were similar in women managed by the one and two step procedures minimizing selection bias. The material used also allowed for study of the possible causative role of biopsy of the breast on the development of sinus histiocytosis in regional axillary lymph nodes. No difference in degree or types of this nodal reaction could be discerned in the lymph nodes of the mastectomy specimens obtained from patients who had undergone the one and two step procedures. This finding indicates that nodal sinus histiocytosis is indeed related to the neoplastic process, albeit in an undefined manner, rather than the trauma of biopsy per se as has been suggested. These results do not invalidate the use of the one step procedure in the management of patients with carcinoma of the breast. Indeed, it is highly likely that it will be commonly used now that breast-conserving operations appear to represent a viable alternative modality for the primary surgical treatment of carcinoma of the breast. Yet, it is apparent that the one step procedure will be performed for technical and practical rather than biologic reasons.
Cost-effective masks for deep x-ray lithography
NASA Astrophysics Data System (ADS)
Scheunemann, Heinz-Ulrich; Loechel, Bernd; Jian, Linke; Schondelmaier, Daniel; Desta, Yohannes M.; Goettert, Jost
2003-04-01
The production of X-ray masks is one of the key techniques for X-ray lithography and the LIGA process. Different ways of fabricating X-ray masks have been established. Very sophisticated, difficult and expensive procedures are required to produce high-precision and high-quality X-ray masks. In order to minimize the cost of an X-ray mask, the mask blank must be inexpensive and readily available. The steps involved in the fabrication process must also be minimal. In the past, thin membranes made of titanium, silicon carbide, silicon nitride (2-5 μm) or thick beryllium substrates (500 μm) have been used as mask blanks. Thin titanium and silicon compounds have very high transparency for X-rays; therefore, these materials are predestined for use as mask membrane material. However, the handling and fabrication of thin membranes are very difficult and thus expensive. Beryllium is highly transparent to X-rays, but the processing and use of beryllium is risky due to potential toxicity. During the past few years graphite-based X-ray masks have been in use at various research centers, but the sidewall quality of the generated resist patterns is in the range of 200-300 nm Ra. We used polished graphite to improve the sidewall roughness, but polished graphite causes other problems in the fabrication of X-ray masks. This paper describes the advantages associated with the use of polished graphite as a mask blank as well as the fabrication process for this low-cost X-ray mask. Alternative membrane materials will also be discussed.
Minimizing data transfer with sustained performance in wireless brain-machine interfaces
NASA Astrophysics Data System (ADS)
Thor Thorbergsson, Palmi; Garwicz, Martin; Schouenborg, Jens; Johansson, Anders J.
2012-06-01
Brain-machine interfaces (BMIs) may be used to investigate neural mechanisms or to treat the symptoms of neurological disease and are hence powerful tools in research and clinical practice. Wireless BMIs add flexibility to both types of applications by reducing movement restrictions and risks associated with transcutaneous leads. However, since wireless implementations are typically limited in terms of transmission capacity and energy resources, the major challenge faced by their designers is to combine high performance with adaptations to limited resources. Here, we have identified three key steps in dealing with this challenge: (1) the purpose of the BMI should be clearly specified with regard to the type of information to be processed; (2) the amount of raw input data needed to fulfill the purpose should be determined, in order to avoid over- or under-dimensioning of the design; and (3) processing tasks should be allocated among the system parts such that all of them are utilized optimally with respect to computational power, wireless link capacity and raw input data requirements. We have focused on step (2) under the assumption that the purpose of the BMI (step 1) is to assess single- or multi-unit neuronal activity in the central nervous system with single-channel extracellular recordings. The reliability of this assessment depends on performance in detection and sorting of spikes. We have therefore performed absolute threshold spike detection and spike sorting with the principal component analysis and fuzzy c-means on a set of synthetic extracellular recordings, while varying the sampling rate and resolution, noise level and number of target units, and used the known ground truth to quantitatively estimate the performance. From the calculated performance curves, we have identified the sampling rate and resolution breakpoints, beyond which performance is not expected to increase by more than 1-5%. We have then estimated the performance of alternative algorithms for spike detection and spike sorting in order to examine the generalizability of our results to other algorithms. Our findings indicate that the minimization of recording noise is the primary factor to consider in the design process. In most cases, there are breakpoints for sampling rates and resolution that provide guidelines for BMI designers in terms of the minimum amount of raw input data that guarantees sustained performance. Such guidelines are essential during system dimensioning. Based on these findings, we conclude by presenting a quantitative task-allocation scheme that can be followed to achieve optimal utilization of available resources.
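A minimal sketch of the kind of sweep described above is shown below: absolute-threshold spike detection on a single channel, combined with crude decimation to emulate lower sampling rates. The threshold rule (k times a robust noise estimate) and the decimation without anti-alias filtering are simplifying assumptions, not the exact procedures of the study.

```python
import numpy as np

def detect_spikes(signal, fs, k=4.0, refractory_ms=1.0):
    """Absolute-threshold spike detection on one extracellular channel.
    Threshold is k times the robust noise estimate sigma = median(|x|)/0.6745
    (a common choice; the exact rule used in the paper may differ)."""
    sigma = np.median(np.abs(signal)) / 0.6745
    thr = k * sigma
    crossings = np.flatnonzero((np.abs(signal[1:]) >= thr) &
                               (np.abs(signal[:-1]) < thr)) + 1
    refractory = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -refractory
    for idx in crossings:                      # enforce a refractory period
        if idx - last >= refractory:
            spikes.append(idx)
            last = idx
    return np.array(spikes), thr

def downsample(signal, fs, fs_new):
    """Crude decimation used to emulate lower sampling rates (no anti-alias
    filter; adequate only for a qualitative sweep over rate breakpoints)."""
    step = int(round(fs / fs_new))
    return signal[::step], fs / step
```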
Raedeke, Thomas D; Dlugonski, Deirdre
2017-12-01
This study was designed to compare a low versus high theoretical fidelity pedometer intervention applying social-cognitive theory on step counts and self-efficacy. Fifty-six public university employees participated in a 10-week randomized controlled trial with 2 conditions that varied in theoretical fidelity. Participants in the high theoretical fidelity condition wore a pedometer and participated in a weekly group walk followed by a meeting to discuss cognitive-behavioral strategies targeting self-efficacy. Participants in the low theoretical fidelity condition met for a group walk and also used a pedometer as a motivational tool and to monitor steps. Step counts were assessed throughout the 10-week intervention and after a no-treatment follow-up (20 weeks and 30 weeks). Self-efficacy was measured preintervention and postintervention. Participants in the high theoretical fidelity condition increased daily steps by 2,283 from preintervention to postintervention, whereas participants in the low fidelity condition demonstrated minimal change during the same time period (p = .002). Individuals attending at least 80% of the sessions in the high theoretical fidelity condition showed an increase of 3,217 daily steps (d = 1.03), whereas low attenders increased by 925 (d = 0.40). Attendance had minimal impact in the low theoretical fidelity condition. Follow-up data revealed that step counts were at least somewhat maintained. For self-efficacy, participants in the high, compared with those in the low, theoretical fidelity condition showed greater improvements. Findings highlight the importance of basing activity promotion efforts on theory. The high theoretical fidelity intervention that included cognitive-behavioral strategies targeting self-efficacy was more effective than the low theoretical fidelity intervention, especially for those with high attendance.
Numerical modeling of solar irradiance on earth's surface
NASA Astrophysics Data System (ADS)
Mera, E.; Gutierez, L.; Da Silva, L.; Miranda, E.
2016-05-01
Ground-based modeling and estimation of solar radiation involves the equation of time, the Earth-Sun distance, the solar declination and the calculation of surface irradiance. Many studies have reported that these theoretical equations alone do not yield accurate radiation estimates, so authors have applied corrections through calibration against field pyranometers (solarimeters) or satellite data, the latter being a poorer technique because it does not differentiate between radiation and radiant kinetic effects. Taking advantage of a properly calibrated ground weather station in the Susques Salar, Jujuy Province, Republic of Argentina, we modeled the variable in question through the following process: (1) theoretical modeling; (2) graphical comparison of the theoretical and measured data; (3) adjustment of the primary calibration through hourly data segmentation, applying a horizontal shift and adding an asymptotic constant; and (4) analysis of scatter plots and contrast of the series. Following these steps, the model data were obtained as follows. Step one: theoretical data were generated. Step two: the theoretical data were shifted by 5 hours. Step three: an asymptote was applied to all negative emissivity values, and the Excel Solver algorithm was used for least-squares minimization between measured and modeled values, obtaining new asymptote values with the corresponding reformulation of the theoretical data; a constant value was added by month over a set time range (4:00 pm to 6:00 pm). Step four: the coefficients of the modeling equation gave monthly correlations between measured and theoretical data ranging from 0.7 to 0.9.
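The calibration of steps two and three can also be reproduced outside a spreadsheet. The Python sketch below fits the asymptotic floor and the late-afternoon constant by least squares against the station data, with the 5-hour shift held fixed; variable names and the exact parameterization are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_model(theoretical, measured, hours, shift_hours=5):
    """Least-squares calibration of the theoretical irradiance series:
    the model is shifted by a fixed number of hours, negative values are
    replaced by a fitted asymptotic floor, and a fitted constant is added
    over the 16:00-18:00 window."""
    model0 = np.roll(theoretical, shift_hours)       # fixed 5-hour shift

    def residuals(p):
        floor, late_const = p
        model = np.where(model0 < 0.0, floor, model0)          # asymptote
        model = model + late_const * ((hours >= 16) & (hours <= 18))
        return model - measured

    fit = least_squares(residuals, x0=[0.0, 0.0])
    model = measured + fit.fun                       # modeled series at the fit
    return fit.x, np.corrcoef(measured, model)[0, 1]
```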
Brown, Guy C
2010-10-01
Control analysis can be used to try to understand why (quantitatively) systems are the way that they are, from rate constants within proteins to the relative amount of different tissues in organisms. Many biological parameters appear to be optimized to maximize rates under the constraint of minimizing space utilization. For any biological process with multiple steps that compete for control in series, evolution by natural selection will tend to even out the control exerted by each step. This is for two reasons: (i) shared control maximizes the flux for minimum protein concentration, and (ii) the selection pressure on any step is proportional to its control, and selection will, by increasing the rate of a step (relative to other steps), decrease its control over a pathway. The control coefficient of a parameter P over fitness can be defined as (∂N/N)/(∂P/P), where N is the number of individuals in the population, and ∂N is the change in that number as a result of the change in P. This control coefficient is equal to the selection pressure on P. I argue that biological systems optimized by natural selection will conform to a principle of sufficiency, such that the control coefficient of all parameters over fitness is 0. Thus in an optimized system small changes in parameters will have a negligible effect on fitness. This principle naturally leads to (and is supported by) the dominance of wild-type alleles over null mutants.
Mazaheri, Masood; Negahban, Hossein; Soltani, Maryam; Mehravar, Mohammad; Tajali, Shirin; Hessam, Masumeh; Salavati, Mahyar; Kingma, Idsart
2017-08-01
The present experiment was conducted to examine the hypothesis that challenging control through narrow-base walking and/or dual tasking affects ACL-injured adults more than healthy control adults. Twenty male ACL-injured adults and twenty healthy male adults walked on a treadmill at a comfortable speed under two base-of-support conditions, normal-base versus narrow-base, with and without a cognitive task. Gait patterns were assessed using mean and variability of step length and mean and variability of step velocity. Cognitive performance was assessed using the number of correct counts in a backward counting task. Narrow-base walking resulted in a larger decrease in step length and a more pronounced increase in variability of step length and of step velocity in ACL-injured adults than in healthy adults. For most of the gait parameters and for backward counting performance, the dual-tasking effect was similar between the two groups. ACL-injured adults adopt a more conservative and more unstable gait pattern during narrow-base walking. This can be largely explained by deficits of postural control in ACL-injured adults, which impairs gait under more balance-demanding conditions. The observation that the dual-tasking effect did not differ between the groups may be explained by the fact that walking is an automatic process that involves minimal use of attentional resources, even after ACL injury. Clinicians should consider the need to include aspects of terrain complexity, such as walking on a narrow walkway, in gait assessment and training of patients with ACL injury. III.
Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Graves, Yan Jiang; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve
2013-12-21
Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tuning planning parameters is time-consuming and is usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal interventions. In ART, prior information in the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using our in-house optimization engine.
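A much-simplified sketch of the two-loop structure is given below: an inner weighted quadratic fluence-map optimization and an outer update that raises the weights of voxels deviating from the reference plan. The dose-deposition matrix, the projected-gradient inner solver and the weight-update rule are illustrative stand-ins; the paper's algorithm adjusts weights from DVH-curve deviations and runs on GPU.

```python
import numpy as np

def replan(D, d_ref, w0, n_outer=30, n_inner=200, lr=1e-3):
    """Toy two-loop re-planning: D is a (voxels x beamlets) dose-deposition
    matrix, d_ref the per-voxel reference dose from the original plan, and
    w0 the initial per-voxel weights; all names are illustrative."""
    n_blt = D.shape[1]
    x = np.zeros(n_blt)                      # beamlet intensities
    w = w0.copy()
    for _ in range(n_outer):
        # inner loop: minimize sum_i w_i * (D x - d_ref)_i^2 over x >= 0
        for _ in range(n_inner):
            grad = D.T @ (w * (D @ x - d_ref))
            x = np.maximum(0.0, x - lr * grad)
        # outer loop: raise weights where the current dose deviates most
        dev = np.abs(D @ x - d_ref)
        w = w * (1.0 + dev / (dev.max() + 1e-12))
    return x, D @ x
```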
NASA Technical Reports Server (NTRS)
Locci, Ivan E.; MacKay, Rebecca A.; Garg, Anita; Ritzert, Frank J.
2004-01-01
An optimized carburization treatment has been developed to mitigate instabilities that form in the microstructures of advanced turbine airfoil materials. Current turbine airfoils consist of a single crystal superalloy base that provides the mechanical performance of the airfoil, a thermal barrier coating (TBC) that reduces the temperature of the base superalloy, and a bondcoat between the superalloy and the TBC, that improves the oxidation and corrosion resistance of the base superalloy and the spallation resistance of the TBC. Advanced nickel-base superalloys containing high levels of refractory metals have been observed to develop an instability called secondary reaction zone (SRZ), which can form beneath diffusion aluminide bondcoats. This instability between the superalloy and the bondcoat has the potential of reducing the mechanical properties of thin-wall turbine airfoils. Controlled gas carburization treatments combined with a prior stress relief heat treatment and adequate surface preparation have been utilized effectively to minimize the formation of SRZ. These additional processing steps are employed before the aluminide bondcoat is deposited and are believed to change the local chemistry and local stresses of the surface of the superalloy. This paper presents the detailed processing steps used to reduce SRZ between platinum aluminide bondcoats and advanced single crystal superalloys.
A Versatile Microfluidic Device for Automating Synthetic Biology.
Shih, Steve C C; Goyal, Garima; Kim, Peter W; Koutsoubelis, Nicolas; Keasling, Jay D; Adams, Paul D; Hillson, Nathan J; Singh, Anup K
2015-10-16
New microbes are being engineered that contain the genetic circuitry, metabolic pathways, and other cellular functions required for a wide range of applications such as producing biofuels, biobased chemicals, and pharmaceuticals. Although currently available tools are useful in improving the synthetic biology process, further improvements in physical automation would help to lower the barrier of entry into this field. We present an innovative microfluidic platform for assembling DNA fragments with 10× lower volumes (compared to that of current microfluidic platforms) and with integrated region-specific temperature control and on-chip transformation. Integration of these steps minimizes the loss of reagents and products compared to that with conventional methods, which require multiple pipetting steps. For assembling DNA fragments, we implemented three commonly used DNA assembly protocols on our microfluidic device: Golden Gate assembly, Gibson assembly, and yeast assembly (i.e., TAR cloning, DNA Assembler). We demonstrate the utility of these methods by assembling two combinatorial libraries of 16 plasmids each. Each DNA plasmid is transformed into Escherichia coli or Saccharomyces cerevisiae using on-chip electroporation and further sequenced to verify the assembly. We anticipate that this platform will enable new research that can integrate this automated microfluidic platform to generate large combinatorial libraries of plasmids and will help to expedite the overall synthetic biology process.
Carvalho, Luis Felipe C. S.; Nogueira, Marcelo Saito; Neto, Lázaro P. M.; Bhattacharjee, Tanmoy T.; Martin, Airton A.
2017-01-01
Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real time and minimally invasive analytical tool with potential for the diagnosis of diseases. The potential for diagnostics can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissues and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing in the z-scored data set, and the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers at WEKA software (Waikato environment for knowledge analysis), after area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings. PMID:29188115
Alternative Approach to Vehicle Element Processing
NASA Technical Reports Server (NTRS)
Huether, Jacob E.; Otto, Albert E.
1995-01-01
The National Space Transportation Policy (NSTP) describes the challenge facing today's aerospace industry: 'Assuring reliable and affordable access to space through U.S. space transportation capabilities is a fundamental goal of the U.S. space program'. Experience from the Space Shuttle Program (SSP) tells us that launch and mission operations are responsible for approximately 45% of the cost of each shuttle mission. Reducing these costs is critical to NSTP goals for the next-generation launch vehicle. Based on this, an innovative alternative approach to vehicle element processing was developed with an emphasis on reduced launch costs. State-of-the-art upgrades to the launch processing system (LPS) will enhance vehicle ground operations. To carry this one step further, these upgrades could be implemented at various vehicle element manufacturing sites to ensure system compatibility between the manufacturing facility and the launch site. Design-center vehicle stand-alone testing will ensure system integrity, resulting in minimized checkout and testing at the launch site. This paper addresses vehicle test requirements, timelines, and ground checkout procedures that enable concept implementation.
Haemocompatibility assessment of synthesised platinum nanoparticles and its implication in biology.
Shiny, P J; Mukherjee, Amitava; Chandrasekaran, N
2014-06-01
The growing need for advanced treatment of evolving diseases has become a motivation for this study. Among the noble metals, platinum nanoparticles are of importance because of their catalytic property, antioxidant potential, minimal toxicity and diverse applications. Biological process of synthesis has retained its significance, because it is a simple one-step process yielding stable nanoparticles. Herein, we have synthesised platinum nanoparticles through a green process using the unexplored seaweed Padina gymnospora, a brown alga. The course of synthesis was monitored and the nanoparticles were characterised using UV-visible spectroscopy. The synthesised nanoparticles were studied using TEM, XRD and FTIR. The TEM and XRD studies reveal the size of the nanoparticle to be <35 nm. The catalytic nanoparticles were capable of oxidising NADH to NAD(+). The biocompatibility was tested by haemolytic assay for the furtherance of the application of platinum nanoparticles in medicine. This is the first report on the biogenic synthesis of platinum nanoparticles using seaweed.
Donovan, P D; Corvari, V; Burton, M D; Rajagopalan, N
2007-01-01
The purpose of this study was to evaluate the effect of processing and storage on the moisture content of two commercially available, 13-mm lyophilization stoppers designated as low moisture (LM) and high moisture (HM) uptake stoppers. The stopper moisture studies included the effect of steam sterilization time, drying time and temperature, equilibrium moisture content, lyophilization and moisture transfer from the stopper to a model-lactose lyophilized cake. Results indicated that both stoppers absorbed significant amounts of moisture during sterilization and that the HM stopper absorbed significantly more water than the LM stopper. LM and HM stoppers required approximately 2 and 8 h drying at 105 degrees C, respectively, to achieve a final moisture content of not more than 0.5 mg/stopper. Following drying, stopper moisture levels equilibrated rapidly to ambient storage conditions. The apparent equilibrium moisture level was approximately 7 times higher in the HM versus LM stopper. Freeze-drying had minimal effect on the moisture content of dried stoppers. Finally, moisture transfer from the stopper to the lyophilized product is dependent on the initial stopper water content and storage temperature. To better quantify the ramifications of stopper moisture, projections of moisture uptake over the shelf life of a drug product were calculated based on the product-contact surface area of stoppers. Attention to stopper storage conditions prior to use, in addition to processing steps, is necessary to minimize stability issues, especially in low-fill, mass-lyophilized products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wynne, Adam S.
2011-05-05
In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, interesting events or times so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation with other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing requires suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF) that is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
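The queuing and multi-instance behaviour described above can be illustrated in a few lines of Python; the sketch below wires three text-processing elements together with thread-backed queues. It mimics the described behaviour only in outline and is not the MIF API.

```python
import queue, threading

def stage(name, func, q_in, q_out, workers=2):
    """Run `workers` parallel instances of one pipeline element, reading
    units of data from q_in and writing results to q_out; the queues absorb
    bursts of requests between elements."""
    def loop():
        while True:
            item = q_in.get()
            if item is None:          # poison pill: re-queue for siblings, stop
                q_in.put(None)
                break
            q_out.put(func(item))
    return [threading.Thread(target=loop, name=f"{name}-{i}", daemon=True)
            for i in range(workers)]

# three-element pipeline: strip tags -> tokenize -> find capitalized names
q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
threads = (stage("strip", lambda d: d.replace("<p>", " "), q0, q1) +
           stage("tokenize", lambda d: d.split(), q1, q2) +
           stage("names", lambda toks: [t for t in toks if t.istitle()], q2, q3))
for t in threads:
    t.start()
q0.put("<p>Alice met Bob in Paris")
q0.put(None)                           # shut down the first stage after one item
print(q3.get())                        # daemon threads are reaped at exit
```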
Typing DNA profiles from previously enhanced fingerprints using direct PCR.
Templeton, Jennifer E L; Taylor, Duncan; Handt, Oliva; Linacre, Adrian
2017-07-01
Fingermarks are a source of human identification both through the ridge patterns and through DNA profiling. Typing nuclear STR DNA markers from previously enhanced fingermarks provides an alternative method of utilising the limited fingermark deposit that can be left behind during a criminal act. Dusting with fingerprint powders is a standard method used in classical fingermark enhancement and can affect DNA data. The ability to generate informative DNA profiles from powdered fingerprints using direct PCR swabs was investigated. Direct PCR was used because the opportunity to generate usable DNA profiles after performing any of the standard DNA extraction processes is minimal. Omitting the extraction step will, for many samples, be the key to success if there is limited sample DNA. DNA profiles were generated by direct PCR from 160 fingermarks after treatment with one of the following dactyloscopic fingerprint powders: white hadonite; silver aluminium; HiFi Volcano silk black; or black magnetic fingerprint powder. This was achieved by a combination of an optimised double-swabbing technique and swab media, omission of the extraction step to minimise loss of critical low-template DNA, and additional AmpliTaq Gold® DNA polymerase to boost the PCR. Ninety-eight out of 160 samples (61%) were considered 'up-loadable' to the Australian National Criminal Investigation DNA Database (NCIDD). The method described required a minimum of working steps, equipment and reagents, and was completed within 4 h. Direct PCR allows the generation of DNA profiles from enhanced prints without the need to increase PCR cycle numbers beyond the manufacturer's recommendations. Particular emphasis was placed on preventing contamination by applying strict protocols and avoiding the use of previously used fingerprint brushes. Based on this extensive survey, the data provided indicate minimal effects of any of these four powders on the chance of obtaining DNA profiles from enhanced fingermarks. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Baucom, Robert M.; Hou, Tan-Hung; Kidder, Paul W.; Reddy, Rakasi M.
1991-01-01
AS-4/polyimidesulfone (PISO2) composite prepreg was utilized for the improved compression molding technology investigation. This improved technique employed molding stops, which advantageously facilitate the escape of volatile by-products during the B-stage curing step and effectively minimize the neutralization of the consolidating pressure by intimate interply fiber-fiber contact within the laminate in the subsequent molding cycle. Without modifying the resin matrix properties, composite panels with both unidirectional and angled plies, with outstanding C-scans and mechanical properties, were successfully molded with this technique under moderate molding conditions, i.e., 660 °F and 500 psi. The size of the panels molded was up to 6.00 x 6.00 x 0.07 in. A consolidation theory was proposed for the understanding and advancement of the processing science. Processing parameters such as vacuum, pressure cycle design, prepreg quality, etc. were explored.
NGSANE: a lightweight production informatics framework for high-throughput data analysis.
Buske, Fabian A; French, Hugh J; Smith, Martin A; Clark, Susan J; Bauer, Denis C
2014-05-15
The initial steps in the analysis of next-generation sequencing data can be automated by way of software 'pipelines'. However, individual components depreciate rapidly because of the evolving technology and analysis methods, often rendering entire versions of production informatics pipelines obsolete. Constructing pipelines from Linux bash commands enables the use of hot swappable modular components as opposed to the more rigid program call wrapping by higher level languages, as implemented in comparable published pipelining systems. Here we present Next Generation Sequencing ANalysis for Enterprises (NGSANE), a Linux-based, high-performance-computing-enabled framework that minimizes overhead for set up and processing of new projects, yet maintains full flexibility of custom scripting when processing raw sequence data. Ngsane is implemented in bash and publicly available under BSD (3-Clause) licence via GitHub at https://github.com/BauerLab/ngsane. Denis.Bauer@csiro.au Supplementary data are available at Bioinformatics online.
Choi, Subin; Park, Kyeonghwan; Lee, Seungwook; Lim, Yeongjin; Oh, Byungjoo; Chae, Hee Young; Park, Chan Sam; Shin, Heugjoo; Kim, Jae Joon
2018-03-02
This paper presents a resolution-reconfigurable wide-range resistive sensor readout interface for wireless multi-gas monitoring applications that displays results on a smartphone. Three types of sensing resolutions were selected to minimize processing power consumption, and a dual-mode front-end structure was proposed to support the detection of a variety of hazardous gases with wide range of characteristic resistance. The readout integrated circuit (ROIC) was fabricated in a 0.18 μm CMOS process to provide three reconfigurable data conversions that correspond to a low-power resistance-to-digital converter (RDC), a 12-bit successive approximation register (SAR) analog-to-digital converter (ADC), and a 16-bit delta-sigma modulator. For functional feasibility, a wireless sensor system prototype that included in-house microelectromechanical (MEMS) sensing devices and commercial device products was manufactured and experimentally verified to detect a variety of hazardous gases.
Flow chemistry vs. flow analysis.
Trojanowicz, Marek
2016-01-01
The flow mode of conducting chemical syntheses facilitates chemical processes through the use of on-line analytical monitoring of ongoing reactions, the application of solid-supported reagents to minimize downstream processing, and computerized control systems to perform multi-step sequences. These are exactly the same attributes as those of flow analysis, which has held a solid place in modern analytical chemistry for the last several decades. The following review paper, based on 131 references to original papers as well as pre-selected reviews, presents basic aspects, selected instrumental achievements and developmental directions of the rapidly growing field of continuous-flow chemical synthesis. Interestingly, many of them might potentially be employed in the development of new methods in flow analysis too. In this paper, examples of the application of flow analytical measurements for on-line monitoring of flow syntheses have been indicated and perspectives for a wider application of real-time analytical measurements have been discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Diffusion with stochastic resetting at power-law times.
Nagar, Apoorva; Gupta, Shamik
2016-06-01
What happens when a continuously evolving stochastic process is interrupted with large changes at random intervals τ distributed as a power law ∼τ^{-(1+α)};α>0? Modeling the stochastic process by diffusion and the large changes as abrupt resets to the initial condition, we obtain exact closed-form expressions for both static and dynamic quantities, while accounting for strong correlations implied by a power law. Our results show that the resulting dynamics exhibits a spectrum of rich long-time behavior, from an ever-spreading spatial distribution for α<1, to one that is time independent for α>1. The dynamics has strong consequences on the time to reach a distant target for the first time; we specifically show that there exists an optimal α that minimizes the mean time to reach the target, thereby offering a step towards a viable strategy to locate targets in a crowded environment.
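The first-passage behaviour summarized above is easy to probe numerically. The Python sketch below simulates diffusion with resets drawn from a Pareto (power-law) waiting-time distribution and estimates the mean time to reach a fixed target; parameter values are arbitrary illustrations, and the exact setup of the paper (units, form of the reset-time density) may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_law_interval(alpha, tau0=1.0):
    """Waiting time with tail ~ tau^-(1+alpha) (Pareto, minimum tau0)."""
    return tau0 * (1.0 - rng.random()) ** (-1.0 / alpha)

def first_passage_time(alpha, target=1.5, D=0.5, dt=1e-2, t_max=200.0):
    """Diffuse from the origin; at power-law-distributed epochs reset to the
    origin; return the time at which the target is first reached (NaN if
    not reached within t_max). A plain Monte Carlo sketch of the process."""
    x, t = 0.0, 0.0
    next_reset = power_law_interval(alpha)
    while t < t_max:
        if t >= next_reset:
            x = 0.0
            next_reset = t + power_law_interval(alpha)
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal()
        t += dt
        if x >= target:
            return t
    return np.nan

# estimate the mean first-passage time for a few tail exponents
for alpha in (0.5, 1.5, 2.5):
    times = [first_passage_time(alpha) for _ in range(100)]
    print(alpha, np.nanmean(times))
```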
Accidental Turbulent Discharge Rate Estimation from Videos
NASA Astrophysics Data System (ADS)
Ibarra, Eric; Shaffer, Franklin; Savaş, Ömer
2015-11-01
A technique to estimate the volumetric discharge rate in accidental oil releases using high-speed video streams is described. The essence of the method is similar to PIV processing; however, the cross-correlation is carried out on the visible features of the efflux, which are usually turbulent, opaque and immiscible. The key step in the process is to perform a pixelwise time filtering on the video stream, in which the filter parameters are commensurate with the scales of the large eddies. The velocity field extracted from the shell of visible features is then used to construct an approximate velocity profile within the discharge. The technique has been tested in laboratory experiments using both water and oil jets at Re ~ 10^5. The technique is accurate to within 20%, which is sufficient for initial responders to deploy adequate resources for containment. The software package requires minimal user input and is intended for deployment on an ROV in the field. Supported by DOI via NETL.
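The two core operations, pixelwise time filtering and PIV-style cross-correlation of visible features, can be outlined as in the sketch below. This is an illustrative outline rather than the authors' software; the filter length, window size and synthetic test frames are assumptions.

    # Schematic sketch of the two steps described above (parameters are illustrative):
    # (1) pixelwise temporal filtering of the video stack so only large-eddy-scale
    # features remain, (2) cross-correlation of an interrogation window between
    # consecutive filtered frames to estimate the feature displacement.
    import numpy as np

    def temporal_highpass(frames, window=15):
        """Subtract a per-pixel moving average in time; frames has shape (T, H, W)."""
        kernel = np.ones(window) / window
        mean = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 0, frames)
        return frames - mean

    def window_displacement(f0, f1):
        """Displacement (dy, dx) of the correlation peak between two interrogation windows."""
        F0, F1 = np.fft.fft2(f0 - f0.mean()), np.fft.fft2(f1 - f1.mean())
        corr = np.fft.fftshift(np.real(np.fft.ifft2(F0.conj() * F1)))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        return dy - f0.shape[0] // 2, dx - f0.shape[1] // 2

    # Synthetic check: a feature pattern shifted by 3 pixels between frames.
    rng = np.random.default_rng(1)
    frame0 = rng.random((64, 64))
    frame1 = np.roll(frame0, shift=3, axis=1)
    print(window_displacement(frame0, frame1))   # expected (0, 3)

In practice the displacement would be computed for many interrogation windows along the visible shell of the jet and converted to velocities using the frame rate and image scale.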
Warth, Arne; Muley, Thomas; Meister, Michael; Weichert, Wilko
2015-01-01
Preanalytic sampling techniques and preparation of tissue specimens strongly influence analytical results in lung tissue diagnostics, on both the morphological and the molecular level. However, in contrast to analytics, where tremendous achievements in the last decade have led to a whole new portfolio of test methods, developments in preanalytics have been minimal. This is specifically unfortunate in lung cancer, where usually only small amounts of tissue are at hand and optimization of all processing steps is mandatory in order to increase the diagnostic yield. In the following, we provide a comprehensive overview of some aspects of preanalytics in lung cancer, from the method of sampling through tissue processing to its impact on analytical test results. We specifically discuss the role of preanalytics in novel technologies such as next-generation sequencing and in state-of-the-art cytology preparations. In addition, we point out specific problems in preanalytics which hamper further developments in the field of lung tissue diagnostics.
Sustainable Life Cycles of Natural-Precursor-Derived Nanocarbons.
Bazaka, Kateryna; Jacob, Mohan V; Ostrikov, Kostya Ken
2016-01-13
Sustainable societal and economic development relies on novel nanotechnologies that offer maximum efficiency at minimal environmental cost. Yet, it is very challenging to apply green chemistry approaches across the entire life cycle of nanotech products, from design and nanomaterial synthesis to utilization and disposal. Recently, novel, efficient methods based on nonequilibrium reactive plasma chemistries that minimize the process steps and dramatically reduce the use of expensive and hazardous reagents have been applied to low-cost natural and waste sources to produce value-added nanomaterials with a wide range of applications. This review discusses the distinctive effects of nonequilibrium reactive chemistries and how these effects can aid and advance the integration of sustainable chemistry into each stage of nanotech product life. Examples of the use of enabling plasma-based technologies in sustainable production and degradation of nanotech products are discussed, from selection of precursors derived from natural resources and their conversion into functional building units, to methods for green synthesis of useful naturally degradable carbon-based nanomaterials, to device operation and eventual disintegration into naturally degradable yet potentially reusable byproducts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-02-01
Pipe Crawler® is a pipe surveying system for performing radiological characterization and/or free release surveys of piping systems. The technology employs a family of manually advanced, wheeled platforms, or crawlers, fitted with one or more arrays of thin Geiger-Mueller (GM) detectors operated from an external power supply and data processing unit. Survey readings are taken in a step-wise fashion. A video camera and tape recording system are used for video surveys of pipe interiors prior to and during radiological surveys. Pipe Crawler® has potential advantages over the baseline and other technologies in the areas of cost, durability, waste minimization, and intrusiveness. Advantages include potentially reduced cost, potential reuse of the pipe system, reduced waste volume, and the ability to manage pipes in place with minimal disturbance to facility operations. Advantages over competing technologies include potentially reduced costs and the ability to perform beta-gamma surveys that are capable of passing regulatory scrutiny for free release of piping systems.
An integrated and pragmatic approach: Global plant safety management
NASA Astrophysics Data System (ADS)
McNutt, Jack; Gross, Andrew
1989-05-01
The Bhopal disaster in India in 1984 has compelled manufacturing companies to review their operations in order to minimize their risk exposure. Much study has been done on the subject of risk assessment and in refining safety reviews of plant operations. However, little work has been done to address the broader needs of decision makers in the multinational environment. The corporate headquarters of multinational organizations are concerned with identifying vulnerable areas to assure that appropriate risk-minimization measures are in force or will be taken. But the task of screening global business units for safety prowess is complicated and time consuming. This article takes a step towards simplifying this process by presenting the decisional model developed by the authors. Beginning with an overview of key issues affecting global safety management, the focus shifts to the multinational vulnerability model developed by the authors, which reflects an integration of approaches. The article concludes with a discussion of areas for further research. While the global chemical industry and major incidents therein are used for illustration, the procedures and solutions suggested here are applicable to all manufacturing operations.
Elastic Free Energy Drives the Shape of Prevascular Solid Tumors
Mills, K. L.; Kemkemer, Ralf; Rudraraju, Shiva; Garikipati, Krishna
2014-01-01
It is well established that the mechanical environment influences cell functions in health and disease. Here, we address how the mechanical environment influences tumor growth, in particular, the shape of solid tumors. In an in vitro tumor model, which isolates mechanical interactions between cancer tumor cells and a hydrogel, we find that tumors grow as ellipsoids, resembling the same, oft-reported observation of in vivo tumors. Specifically, an oblate ellipsoidal tumor shape robustly occurs when the tumors grow in hydrogels that are stiffer than the tumors, but when they grow in more compliant hydrogels they remain closer to spherical in shape. Using large scale, nonlinear elasticity computations we show that the oblate ellipsoidal shape minimizes the elastic free energy of the tumor-hydrogel system. Having eliminated a number of other candidate explanations, we hypothesize that minimization of the elastic free energy is the reason for predominance of the experimentally observed ellipsoidal shape. This result may hold significance for explaining the shape progression of early solid tumors in vivo and is an important step in understanding the processes underlying solid tumor growth. PMID:25072702
Translating Big Data into Smart Data for Veterinary Epidemiology
VanderWaal, Kimberly; Morrison, Robert B.; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M.
2017-01-01
The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing “big” data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high-velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step in progressing from simply having “big data” to creating “smart data,” with the objective of improving understanding of health risks, the effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues. PMID:28770216
Smart Push, Smart Pull, Sensor to Shooter in a Multi-Level Secure/Safe (MLS) Infrastructure
2006-05-04
...policy violation with respect to: Security, Safety, Financial Posture, Infrastructure. The IATF identifies five levels: V1: Negligible effect; V2: Minimal... (MLS) Infrastructure. Step 2: Determine Threat Levels (best practices also in the IATF). Threats are ranked by assessing: Capability, Resources, Motivation, ... Risk Willingness. The IATF identifies seven levels: T1: Inadvertent or accidental events (e.g., tripping over a power cord); T2: Minimal resources – willing to...
An Approach for the Distance Delivery of AFIT/LS Resident Degree Curricula
1991-12-01
...minimal (least complex) distance education technologies appropriate for each learning topic or task. This may be the most time-consuming step in the... represents the least complex distance education technology that could be used to deliver the educational material for a particular learning objective. Careful... minimal technology needed to accomplish the learning objective. Look at question Q2.1 (Figure 5.15). Since the lecture offers an essential educational...
X-Ray Computed Tomography: The First Step in Mars Sample Return Processing
NASA Technical Reports Server (NTRS)
Welzenbach, L. C.; Fries, M. D.; Grady, M. M.; Greenwood, R. C.; McCubbin, F. M.; Zeigler, R. A.; Smith, C. L.; Steele, A.
2017-01-01
The Mars 2020 rover mission will collect and cache samples from the martian surface for possible retrieval and subsequent return to Earth. If the samples are returned, that mission would likely present an opportunity to analyze returned Mars samples within a geologic context on Mars. In addition, it may provide definitive information about the existence of past or present life on Mars. Mars sample return presents unique challenges for the collection, containment, transport, curation and processing of samples [1]. Foremost in the processing of returned samples are the closely paired considerations of life detection and Planetary Protection. In order to achieve Mars Sample Return (MSR) science goals, reliable analyses will depend on overcoming some challenging signal/noise-related issues where sparse martian organic compounds must be reliably analyzed against the contamination background. While reliable analyses will depend on initial clean acquisition and robust documentation of all aspects of developing and managing the cache [2], there needs to be a reliable sample handling and analysis procedure that accounts for a variety of materials which may or may not contain evidence of past or present martian life. A recent report [3] suggests that a defined set of measurements should be made to effectively inform both science and Planetary Protection, when applied in the context of the two competing null hypotheses: 1) that there is no detectable life in the samples; or 2) that there is martian life in the samples. The defined measurements would include a phased approach that would be accepted by the community to preserve the bulk of the material, but provide unambiguous science data that can be used and interpreted by various disciplines. Foremost is the concern that the initial steps would ensure the pristine nature of the samples. Preliminary, non-invasive techniques such as X-ray computed tomography (XCT) have been suggested as the first method to interrogate and characterize the cached samples without altering the materials [1,2]. A recent report [4] indicates that XCT may minimally alter samples for some techniques, and work is needed to quantify these effects, maximizing science return from initial XCT analysis while minimizing alteration.
To repair or not to repair: with FAVOR there is no question
NASA Astrophysics Data System (ADS)
Garetto, Anthony; Schulz, Kristian; Tabbone, Gilles; Himmelhaus, Michael; Scheruebl, Thomas
2016-10-01
In the mask shop, the challenges associated with today's advanced technology nodes, both technical and economic, are becoming increasingly difficult. The constant drive to continue shrinking features means more masks per device, smaller manufacturing tolerances and more complexity along the manufacturing line with respect to the number of manufacturing steps required. Furthermore, the extremely competitive nature of the industry makes it critical for mask shops to optimize asset utilization and processes in order to maximize their competitive advantage and, in the end, profitability. Full maximization of profitability in such a complex and technologically sophisticated environment simply cannot be achieved without the use of smart automation. Smart automation allows productivity to be maximized through better asset utilization and process optimization. Reliability is improved through the minimization of manual interactions, leading to fewer human-error contributions and a more efficient manufacturing line. In addition to these improvements in productivity and reliability, extra value can be added through the collection and cross-verification of data from multiple sources, which provides more information about our products and processes. When it comes to handling mask defects, for instance, the process consists largely of time-consuming manual interactions that are error-prone and often require quick decisions from operators and engineers who are under pressure. The handling of defects itself is a multi-step process consisting of several iterations of inspection, disposition, repair, review and cleaning steps. Smaller manufacturing tolerances and more complex features lead to a higher number of defects that must be handled, as well as a higher level of complexity in handling them. In this paper the recent efforts undertaken by ZEISS to provide solutions which address these challenges, particularly those associated with defectivity, will be presented. From automation of aerial image analysis to the use of data-driven decision making to predict and propose the optimized back-end-of-line process flow, productivity and reliability improvements are targeted by smart automation. Additionally, the generation of the ideal aerial image from the design and several repair enhancement features offer additional capabilities to improve the efficiency and yield associated with defect handling.
Analysis of Water Recovery Rate from the Heat Melt Compactor
NASA Technical Reports Server (NTRS)
Balasubramaniam, R.; Hegde, U.; Gokoglu, S.
2013-01-01
Human space missions generate trash with a substantial amount of plastic (20% or greater by mass). The trash also contains water trapped in food residue, paper products and other trash items. The Heat Melt Compactor (HMC) under development by NASA Ames Research Center (ARC) compresses the waste, dries it to recover water and melts the plastic to encapsulate the compressed trash. The resulting waste disk, or puck, represents an approximately ten-fold reduction in the volume of the initial trash loaded into the HMC. In the current design concept being pursued, the trash is compressed by a piston after it is loaded into the trash chamber. The piston face, the side walls of the waste processing chamber and the end surface in contact with the waste can be heated to evaporate the water and to melt the plastic. Water is recovered by the HMC in two phases. The first is a pre-process compaction without heat, or with the heaters initially turned on but before the waste heats up. Tests have shown that during this step some liquid water may be expelled from the chamber. This water is believed to be free water (i.e., not bound with or absorbed in other waste constituents) that is present in the trash. This phase is herein termed Phase A of the water recovery process. During HMC operations, it is desired that liquid water recovery in Phase A be eliminated or minimized so that water-vapor processing equipment (e.g., condensers) downstream of the HMC is not fouled by liquid water and its constituents (i.e., suspended or dissolved matter) exiting the HMC. The primary water recovery process takes place next, where the trash is further compacted while the heated surfaces reach their set temperatures for this step. This step will be referred to herein as Phase B of the water recovery process. During this step the waste chamber may be exposed to different selected pressures such as ambient, low pressure (e.g., 0.2 atm), or vacuum. The objective of this step is to remove both bound water and any remaining free water in the trash by evaporation. The temperature settings of the heated surfaces are usually kept above the saturation temperature of water but below the melting temperature of the plastic in the waste during this step, to avoid any encapsulation of wet trash which would reduce the amount of recovered water by blocking vapor escape. In this paper, we analyze the water recovery rate during Phase B, where the trash is heated and water leaves the waste chamber as vapor, for operation of the HMC in reduced gravity. We pursue a quasi-one-dimensional model with and without sidewall heating to determine the water recovery rate and the trash drying time. The influences of the trash thermal properties, the amount of water loading, and the distribution of the water in the trash on the water recovery rates are determined.
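For orientation only, a lumped-parameter estimate of Phase B drying can be written down as below. This is not the quasi-one-dimensional model analyzed in the paper, and every value (effective conductivity, geometry, set-point temperature, water load) is an assumed placeholder.

    # Rough lumped-parameter sketch of Phase B (not the paper's quasi-1-D model):
    # heat conducted through the trash layer from the heated surface evaporates water
    # at the saturation temperature. All property values below are illustrative.
    k_trash   = 0.2      # W/(m K), effective trash conductivity (assumed)
    area      = 0.06     # m^2, heated piston face area (assumed)
    thickness = 0.02     # m, compacted trash thickness (assumed)
    T_heater  = 130.0    # deg C, heated-surface set point (below plastic melting)
    T_sat     = 100.0    # deg C, water saturation temperature at ~1 atm
    h_fg      = 2.26e6   # J/kg, latent heat of vaporization of water
    m_water   = 0.5      # kg of bound plus remaining free water (assumed)

    q = k_trash * area * (T_heater - T_sat) / thickness   # conductive heat flow, W
    recovery_rate = q / h_fg                              # kg/s of vapor leaving the chamber
    drying_time_h = m_water / recovery_rate / 3600.0
    print(f"vapor generation rate ~ {recovery_rate * 1e3:.3f} g/s, "
          f"drying time ~ {drying_time_h:.1f} h")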
The robotic Whipple: operative strategy and technical considerations.
MacKenzie, Shawn; Kosari, Kambiz; Sielaff, Timothy; Johnson, Eric
2011-03-01
Advances in robotic surgery have allowed the frontiers of minimally invasive pancreatic surgery to expand. We present a step-by-step approach to the robotic Whipple procedure. The discussion includes port setting and robotic docking, kocherization and superior mesenteric vein identification, portal dissection, releasing the ligament of Treitz, uncinate dissection, and reconstruction. A brief report of our initial 2-year experience with the robotic Whipple procedure is also presented.
A simplified method to recover urinary vesicles for clinical applications, and sample banking.
Musante, Luca; Tataruch, Dorota; Gu, Dongfeng; Benito-Martin, Alberto; Calzaferri, Giulio; Aherne, Sinead; Holthofer, Harry
2014-12-23
Urinary extracellular vesicles provide a novel source of valuable biomarkers for kidney and urogenital diseases. Current isolation protocols include laborious, sequential centrifugation steps, which hampers their widespread research and clinical use. Furthermore, when large individual urine sample volumes or sizable target cohorts are to be processed (e.g., for biobanking), storage capacity is an additional problem. Thus, alternative methods are necessary to overcome such limitations. We have developed a practical vesicle isolation technique that yields easily manageable sample volumes in an exceptionally cost-efficient way, to facilitate their full utilization in less privileged environments and maximize the benefit of biobanking. Urinary vesicles were isolated by hydrostatic dialysis with minimal interference from soluble proteins or vesicle loss. Large volumes of urine were concentrated to up to 1/100 of the original volume, and the dialysis step allowed equalization of urine physico-chemical characteristics. Vesicle fractions were found suitable for a range of applications, including RNA analysis. In yield, our hydrostatic filtration dialysis system outperforms the conventional ultracentrifugation-based methods, and the labour-intensive and potentially hazardous ultracentrifugation steps are eliminated. Likewise, the need for trained laboratory personnel and heavy initial investment is avoided. Thus, our method qualifies as a method for laboratories working with urinary vesicles and for biobanking.
Silver nano fabrication using leaf disc of Passiflora foetida Linn
NASA Astrophysics Data System (ADS)
Lade, Bipin D.; Patil, Anita S.
2017-06-01
The main purpose of this experiment is to develop a greener, low-cost SNP (silver nanoparticle) fabrication procedure using the secondary metabolites of Passiflora leaves. Here, the leaf extraction process is omitted; instead, a leaf disc was used for stable SNP fabrication, optimizing parameters such as the number of 2 cm circular leaf discs (1, 2, 3, 4, 5) in place of leaf extract and the pH (7, 8, 9, 11). The SNP synthesis reaction was tried under room-temperature, sunlight, UV and dark conditions. The leaf disc preparation steps are also discussed in detail. The SNPs obtained using 1 mM, 100 ml AgNO3 with a single leaf disc at pH 9 and 11 were prepared under the featured room-temperature and sunlight conditions. UV spectroscopic analysis confirms that sunlight-synthesized SNPs yield stable nanoparticles. FTIR analysis confirms that a large number of functional groups, such as alkanes, alkynes, amines, aliphatic amines, carboxylic acids, nitro compounds, alcohols, saturated aldehydes and phenols, are involved in the reduction of the silver salt to zero-valent silver. The leaf-disc-mediated synthesis of silver nanoparticles eliminates the leaf extract preparation step and is suitable for stable SNP synthesis. The sunlight- and room-temperature-based nanoparticles, synthesized within 10 min, could certainly be used for antimicrobial activity.
Automatic detection of snow avalanches in continuous seismic data using hidden Markov models
NASA Astrophysics Data System (ADS)
Heck, Matthias; Hammer, Conny; van Herwijnen, Alec; Schweizer, Jürg; Fäh, Donat
2018-01-01
Snow avalanches generate seismic signals as many other mass movements do. Detection of avalanches by seismic monitoring is highly relevant to assess avalanche danger. In contrast to other seismic events, signals generated by avalanches do not have a characteristic first arrival, nor is it possible to detect different wave phases. In addition, the moving source character of avalanches increases the intricacy of the signals. Although it is possible to visually detect seismic signals produced by avalanches, reliable automatic detection methods for all types of avalanches do not exist yet. We therefore evaluate whether hidden Markov models (HMMs) are suitable for the automatic detection of avalanches in continuous seismic data. We analyzed data recorded during the winter season 2010 by a seismic array deployed in an avalanche starting zone above Davos, Switzerland. We re-evaluated a reference catalogue containing 385 events by grouping the events into seven probability classes. Since most of the data consist of noise, we first applied a simple amplitude threshold to reduce the amount of data. As the first classification results were unsatisfactory, we analyzed the temporal behavior of the seismic signals for the whole data set and found a high variability in the seismic signals. We therefore applied further post-processing steps to reduce the number of false alarms by defining a minimal duration for the detected event, implementing a voting-based approach and analyzing the coherence of the detected events. We obtained the best classification results for events detected by at least five sensors and with a minimal duration of 12 s. These processing steps allowed us to identify two periods of high avalanche activity, suggesting that HMMs are suitable for the automatic detection of avalanches in seismic data. However, our results also showed that more sensitive sensors and more appropriate sensor locations are needed to improve the signal-to-noise ratio of the signals and therefore the classification.
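The two post-processing rules quoted above (a minimal duration of 12 s and detection by at least five sensors) can be sketched as follows; the per-sensor HMM detections, sampling rate and array size in this toy example are assumptions.

    # Sketch of the post-processing stage: a voting rule across sensors followed by a
    # minimal-duration rule. The per-sensor HMM classifications are assumed as input.
    import numpy as np

    def post_process(detections, min_duration_s=12, min_sensors=5, dt=1.0):
        """detections: boolean array (n_sensors, n_samples) of per-sensor event flags.
        Returns a boolean array of samples kept as avalanche detections."""
        votes = detections.sum(axis=0) >= min_sensors      # voting across the array
        kept = np.zeros_like(votes)
        start = None
        for i, v in enumerate(np.append(votes, False)):    # scan contiguous vote runs
            if v and start is None:
                start = i
            elif not v and start is not None:
                if (i - start) * dt >= min_duration_s:     # enforce the minimal duration
                    kept[start:i] = True
                start = None
        return kept

    # Toy example: 7 sensors, 60 samples at 1 Hz; a 15-s event seen by 6 sensors.
    det = np.zeros((7, 60), dtype=bool)
    det[:6, 20:35] = True
    print(post_process(det).nonzero()[0])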
Rapid fabrication of microneedles using magnetorheological drawing lithography.
Chen, Zhipeng; Ren, Lei; Li, Jiyu; Yao, Lebin; Chen, Yan; Liu, Bin; Jiang, Lelun
2018-01-01
Microneedles are micron-sized needles that are widely applied in biomedical fields owing to their painless, minimally invasive, and convenient operation. However, most microneedle fabrication approaches are costly, time consuming, involve multiple steps, and require expensive equipment. In this study, we present a novel magnetorheological drawing lithography (MRDL) method to efficiently fabricate microneedles, bio-inspired microneedles, and molding-free microneedle arrays. With the assistance of an external magnetic field, the 3D structure of a microneedle can be directly drawn from a droplet of curable magnetorheological fluid. The formation process of a microneedle consists of two key stages, elasto-capillary self-thinning and magneto-capillary self-shrinking, which greatly affect the microneedle height and tip radius. Penetration and fracture tests demonstrated that the microneedle had sufficient strength and toughness for skin penetration. Microneedle arrays and a bio-inspired microneedle were also fabricated, which further demonstrated the versatility and flexibility of the MRDL method. Statement of Significance: Microneedles have been widely applied in biomedical fields owing to their painless, minimally invasive, and convenient operation. However, most microneedle fabrication approaches are costly, time consuming, involve multiple steps, and require expensive equipment. Furthermore, most researchers have focused on the biomedical applications of microneedles but have given little attention to the optimization of the fabrication process. This research presents a novel magnetorheological drawing lithography (MRDL) method to fabricate microneedles, bio-inspired microneedles, and molding-free microneedle arrays. In this proposed technique, a droplet of curable magnetorheological fluid (CMRF) is drawn directly from almost any substrate to produce a 3D microneedle under an external magnetic field. This method not only inherits the advantages of the thermal drawing approach without the need for a mask and light irradiation but also eliminates the requirement for drawing temperature adjustment. The MRDL method is extremely simple and can even produce the complex and multiscale structure of a bio-inspired microneedle. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumuluru, Jaya Shankar; McCulloch, Richard Chet James
In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite of the previous direction with the most benefit, then the step size is reduced by a factor of 2; thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature), pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
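A minimal sketch of the adaptive steepest-ascent component is shown below; the evolutionary half of the hybrid, the MATLAB interface and the pellet-process objective are omitted, and the test function, step size and perturbation are illustrative assumptions.

    # Sketch of adaptive steepest-ascent hill climbing: perturb each variable slightly,
    # move the most beneficial variable by a small step, and halve the step whenever the
    # best direction reverses. The objective below is an arbitrary test function.
    import numpy as np

    def adaptive_steepest_ascent(f, x0, step=0.5, delta=1e-3, iters=200):
        x, prev_dir = np.array(x0, dtype=float), None
        for _ in range(iters):
            gains = []
            for i in range(len(x)):                      # perturb each variable slightly
                for sign in (+1, -1):
                    trial = x.copy()
                    trial[i] += sign * delta
                    gains.append((f(trial) - f(x), i, sign))
            best_gain, i, sign = max(gains)
            if best_gain <= 0:                           # no improving direction left
                break
            if prev_dir is not None and prev_dir == (i, -sign):
                step /= 2.0                              # direction reversed: halve step
            x[i] += sign * step
            prev_dir = (i, sign)
        return x

    # Maximize a simple concave test function with optimum at (1, -2).
    print(adaptive_steepest_ascent(lambda v: -(v[0] - 1) ** 2 - (v[1] + 2) ** 2, [0.0, 0.0]))

In the hybrid scheme this deterministic refinement would be interleaved with the evolutionary search rather than run on its own.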
Line roughness improvements on self-aligned quadruple patterning by wafer stress engineering
NASA Astrophysics Data System (ADS)
Liu, Eric; Ko, Akiteru; Biolsi, Peter; Chae, Soo Doo; Hsieh, Chia-Yun; Kagaya, Munehito; Lee, Choongman; Moriya, Tsuyoshi; Tsujikawa, Shimpei; Suzuki, Yusuke; Okubo, Kazuya; Imai, Kiyotaka
2018-04-01
In integrated circuit and memory devices, size shrinkage has been the most effective method to reduce production cost and enable the steady increase in the number of transistors per unit area over the past few decades. In order to reduce the die size and feature size, it is necessary to shrink pattern dimensions in advanced-node development. At the sub-10nm node, extreme ultraviolet lithography (EUV) and multi-patterning solutions based on 193nm immersion lithography are the two most common options to achieve the size requirement. In such small line-and-space patterns, line width roughness (LWR) and line edge roughness (LER) contribute a significant amount of process variation that impacts both physical and electrical performance. In this paper, we focus on optimizing the line roughness performance by using wafer stress engineering on a 30nm pitch line-and-space pattern. This pattern is generated by a self-aligned quadruple patterning (SAQP) technique for the potential application of fin formation. Our investigation starts by comparing film materials and stress levels in various processing steps and material selections in the SAQP integration scheme. From the cross-matrix comparison, we are able to determine the best film stack and stress combination to achieve the lowest line roughness while maintaining pattern validity after fin etch. This stack is also used to study the step-by-step line roughness performance from SAQP to fin etch. Finally, we show a successful patterning of a 30nm pitch line-and-space SAQP scheme with 1nm line roughness performance.
Synergetic effect of double-step blocking layer for the perovskite solar cell
NASA Astrophysics Data System (ADS)
Kim, Jinhyun; Hwang, Taehyun; Lee, Sangheon; Lee, Byungho; Kim, Jaewon; Kim, Jaewook; Gil, Bumjin; Park, Byungwoo
2017-10-01
In an organometallic CH3NH3PbI3 (MAPbI3) perovskite solar cell, we have demonstrated a highly compact TiO2 layer synthesized by double-step deposition, through a combination of sputter and solution deposition, to minimize electron-hole recombination and boost the power conversion efficiency. As a result, the double-step strategy yielded outstanding transmittance of the blocking layer. Additionally, the crystallinity and morphology of the perovskite film were significantly modified, provoking enhanced photon absorption and solar cell performance with a reduced recombination rate. Thereby, this straightforward double-step strategy for the blocking layer exhibited 12.31% conversion efficiency through morphological improvements of each layer.
Model-based review of Doppler global velocimetry techniques with laser frequency modulation
NASA Astrophysics Data System (ADS)
Fischer, Andreas
2017-06-01
Optical measurements of flow velocity fields are of crucial importance to understand the behavior of complex flows. One flow field measurement technique is Doppler global velocimetry (DGV). A large variety of different DGV approaches exist, e.g., applying different kinds of laser frequency modulation. In order to investigate the measurement capabilities, especially of the newer DGV approaches with laser frequency modulation, a model-based review of all DGV measurement principles is performed. The DGV principles can be categorized by the respective number of required time steps. The systematic review of all DGV principles reveals drawbacks and benefits of the different measurement approaches with respect to the temporal resolution, the spatial resolution and the measurement range. Furthermore, the Cramér-Rao bound for photon shot noise is calculated and discussed, which represents a fundamental limit of the achievable measurement uncertainty. As a result, all DGV techniques provide similar minimal uncertainty limits. With N_photons as the number of scattered photons, the minimal standard deviation of the flow velocity reads about 10^6 m/s / √N_photons, which was calculated for a perpendicular arrangement of the illumination and observation directions and a laser wavelength of 895 nm. As a further result, the signal processing efficiencies are determined with a Monte Carlo simulation. Except for the newest correlation-based DGV method, the signal processing algorithms are already optimal or near the optimum. Finally, the different DGV approaches are compared regarding errors due to temporal variations of the scattered light intensity and the flow velocity. The influence of a linear variation of the scattered light intensity can be reduced by maximizing the number of time steps, because this means acquiring more information for the correction of this systematic effect. However, more time steps can result in a flow velocity measurement with a lower temporal resolution when operating at the maximal frame rate of the camera. DGV without laser frequency modulation then provides the highest temporal resolutions and is not sensitive to temporal variations of the scattered light intensity, but is sensitive to spatial variations. In contrast to this, all DGV variants suffer from velocity variations during the measurement. In summary, the experimental conditions and the measurement task finally decide about the ideal choice from the reviewed DGV methods.
Absolute Paleointensity Techniques: Developments in the Last 10 Years (Invited)
NASA Astrophysics Data System (ADS)
Bowles, J. A.; Brown, M. C.
2009-12-01
The ability to determine variations in absolute intensity of the Earth’s paleomagnetic field has greatly enhanced our understanding of geodynamo processes, including secular variation and field reversals. Igneous rocks and baked clay artifacts that carry a thermal remanence (TRM) have allowed us to study field variations over timescales ranging from decades to billions of years. All absolute paleointensity techniques are fundamentally based on repeating the natural process by which the sample acquired its magnetization, i.e. a laboratory TRM is acquired in a controlled field, and the ratio of the natural TRM to that acquired in the laboratory is directly proportional to the ancient field. Techniques for recovering paleointensity have evolved since the 1930s from relatively unsophisticated (but revolutionary for their time) single step remagnetizations to the various complicated, multi-step procedures in use today. These procedures can be broadly grouped into two categories: 1) “Thellier-type” experiments that step-wise heat samples at a series of temperatures up to the maximum unblocking temperature of the sample, progressively removing the natural remanence (NRM) and acquiring a laboratory-induced TRM; and 2) “Shaw-type” experiments that combine alternating field demagnetization of the NRM and laboratory TRM with a single heating to a temperature above the sample’s Curie temperature, acquiring a total TRM in one step. Many modifications to these techniques have been developed over the years with the goal of identifying and/or accommodating non-ideal behavior, such as alteration and multi-domain (MD) remanence, which may lead to inaccurate paleofield estimates. From a technological standpoint, perhaps the most significant development in the last decade is the use of microwave (de)magnetization in both Thellier-type and Shaw-type experiments. By using microwaves to directly generate spin waves within the magnetic grains (rather than using phonons generated by heating, which then exchange energy with the magnetic system), a TRM can be acquired with minimal heating of the bulk sample, thus potentially minimizing sample alteration. The theory of TRM acquisition is best developed for single-domain (SD) grains, and most paleointensity techniques are predicated on the assumption that the remanence is carried predominantly by SD material. Because the vast majority of geological materials are characterized by a larger magnetic grain size, efforts to expand paleointensity studies over the past decade have focused on developing TRM theories and paleointensity methods for pseudo-single-domain (PSD) and MD samples. Other workers have been exploring the potential of SD materials that were not traditionally used in paleointensity studies, such as ash flow tuffs, submarine basaltic glass, and single silicate crystals with magnetite inclusions. The latter has the potential to shed light on early Earth processes, given that the fine-grained inclusions may be resistant to alteration over long time scales. We will review the major paleointensity techniques in use today, with special attention paid to the advantages and disadvantages of each. Techniques will be illustrated with examples highlighting new paleointensity applications to geologic processes at a variety of timescales.
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
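As an illustration of the metric-calculation step, the sketch below compiles per-step parameters and sums them into aggregate process metrics. The parameter names and formulas are hypothetical stand-ins chosen for illustration, not the patented evaluation model.

    # Hypothetical sketch: per-step characterization parameters are compiled and summed
    # into simple process metrics that could feed a lean-vs-batch comparison.
    def evaluate_process(steps):
        """steps: list of dicts with 'cycle_time', 'queue_time', 'defect_rate' per step."""
        total_cycle = sum(s["cycle_time"] for s in steps)
        total_lead = sum(s["cycle_time"] + s["queue_time"] for s in steps)
        yield_rate = 1.0
        for s in steps:
            yield_rate *= 1.0 - s["defect_rate"]
        return {
            "lead_time": total_lead,
            "value_added_ratio": total_cycle / total_lead,   # proxy for "leanness"
            "rolled_throughput_yield": yield_rate,
        }

    batch = [
        {"cycle_time": 5, "queue_time": 40, "defect_rate": 0.02},
        {"cycle_time": 8, "queue_time": 60, "defect_rate": 0.05},
    ]
    print(evaluate_process(batch))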
Previsani, Nicoletta; Tangermann, Rudolph H; Tallis, Graham; Jafari, Hamid S
2015-08-28
In 1988, the World Health Assembly of the World Health Organization (WHO) resolved to eradicate polio worldwide. Among the three wild poliovirus (WPV) types (type 1, type 2, and type 3), WPV type 2 (WPV2) has been eliminated in the wild since 1999, and WPV type 3 (WPV3) has not been reported since 2012. In 2015, only Afghanistan and Pakistan have reported WPV transmission. On May 25, 2015, all WHO Member States endorsed World Health Assembly resolution 68.3 on full implementation of the Polio Eradication and Endgame Strategic Plan 2013-2018 (the Endgame Plan), and with it, the third Global Action Plan to minimize poliovirus facility-associated risk (GAPIII). All WHO Member States have committed to implementing appropriate containment of WPV2 in essential laboratory and vaccine production facilities* by the end of 2015 and of type 2 oral poliovirus vaccine (OPV2) within 3 months of global withdrawal of OPV2, which is planned for April 2016. This report summarizes critical steps for essential laboratory and vaccine production facilities that intend to retain materials confirmed to contain or potentially containing type-specific WPV, vaccine-derived poliovirus (VDPV), or OPV/Sabin viruses, and steps for nonessential facilities† that process specimens that contain or might contain polioviruses. National authorities will need to certify that the essential facilities they host meet the containment requirements described in GAPIII. After certification of WPV eradication, the use of all OPV will cease; final containment of all polioviruses after polio eradication and OPV cessation will minimize the risk for reintroduction of poliovirus into a polio-free world.
Naser, Mohamed A.; Patterson, Michael S.
2011-01-01
Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and fluorescence tomography (FT) problems. In the first step, a continuous-wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties, assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1-norm objective functions, where normalized values for the light fluence rates and the corresponding Green's functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed, by selecting points with a higher probability of contributing to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT, and the total number of fluorophore molecules for the FT. For non-uniformly distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
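A highly simplified numerical sketch of the shrinking permissible region is given below. Ordinary least squares on synthetic data stands in for the paper's normalized L1 formulation, and the matrix sizes, source locations and shrink schedule are arbitrary assumptions.

    # Toy sketch of the permissible-region idea: a linear forward model A maps source
    # values to boundary measurements b, and the set of allowed source locations is
    # repeatedly shrunk to the points that contribute most to the current fit.
    import numpy as np

    rng = np.random.default_rng(0)
    n_meas, n_vox = 40, 60
    A = rng.standard_normal((n_meas, n_vox))     # stand-in for a Green's-function matrix
    x_true = np.zeros(n_vox)
    x_true[[20, 45]] = 2.0                       # two point sources
    b = A @ x_true

    permissible = np.arange(n_vox)
    for _ in range(5):                           # shrink the permissible region iteratively
        sub = A[:, permissible]
        x_sub, *_ = np.linalg.lstsq(sub, b, rcond=None)
        x_sub = np.clip(x_sub, 0.0, None)        # source values must be non-negative
        keep = max(2, len(permissible) // 2)     # keep the strongest half of the points
        permissible = permissible[np.argsort(x_sub)[-keep:]]

    print(sorted(permissible))                   # recovers the two true source voxels here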
Seismic data interpolation and denoising by learning a tensor tight frame
NASA Astrophysics Data System (ADS)
Liu, Lina; Plonka, Gerlind; Ma, Jianwei
2017-10-01
Seismic data interpolation and denoising play a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of data in a tensor-product way. It takes advantage of the structure contained in all different modes (dimensions) simultaneously. In order to overcome the limitations of a usual tensor-product approach, we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem. It includes two main steps. In the first step, a hard-thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each different mode. The dictionary that is learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection-onto-convex-sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it essentially more efficient.
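The hard-thresholding step of the first stage can be illustrated as below, with a fixed orthonormal matrix standing in for the learned Kronecker tight frame and the dictionary-update stage omitted; the signal size, sparsity and threshold are assumptions.

    # Toy sketch of the hard-thresholding step: analyze the noisy signal in a dictionary,
    # zero the small coefficients, and synthesize the result.
    import numpy as np

    rng = np.random.default_rng(0)
    D, _ = np.linalg.qr(rng.standard_normal((64, 64)))   # stand-in orthonormal dictionary

    clean = D[:, :4] @ rng.standard_normal(4)            # signal that is 4-sparse in D
    noisy = clean + 0.1 * rng.standard_normal(64)

    def hard_threshold_denoise(y, dictionary, lam):
        coeffs = dictionary.T @ y                        # analysis coefficients
        coeffs[np.abs(coeffs) < lam] = 0.0               # hard thresholding
        return dictionary @ coeffs                       # synthesis

    denoised = hard_threshold_denoise(noisy, D, lam=0.3)
    print("error before:", np.linalg.norm(noisy - clean))
    print("error after: ", np.linalg.norm(denoised - clean))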
Liu, Benmei; Yu, Mandi; Graubard, Barry I; Troiano, Richard P; Schenker, Nathaniel
2016-01-01
The Physical Activity Monitor (PAM) component was introduced into the 2003-2004 National Health and Nutrition Examination Survey (NHANES) to collect objective information on physical activity, including both movement intensity counts and ambulatory steps. Due to an error in the accelerometer device initialization process, the steps data were missing for all participants in several primary sampling units (PSUs), typically a single county or group of contiguous counties, who had intensity count data from their accelerometers. To avoid potential bias and loss of efficiency in estimation and inference involving the steps data, we considered methods to accurately impute the missing values for steps collected in the 2003-2004 NHANES. The objective was to come up with an efficient imputation method that minimized model-based assumptions. We adopted a multiple imputation approach based on Additive Regression, Bootstrapping and Predictive mean matching (ARBP) methods. This method fits alternative conditional expectation (ace) models, which use an automated procedure to estimate optimal transformations for both the predictor and response variables. This paper describes the approaches used in this imputation and evaluates the methods by comparing the distributions of the original and the imputed data. A simulation study using the observed data is also conducted as part of the model diagnostics. Finally, some real data analyses are performed to compare results before and after imputation. PMID:27488606
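A single predictive-mean-matching draw can be sketched as follows; a plain linear predictor replaces the additive-regression (ace) models, survey weights and bootstrapping are omitted, and the toy counts/steps data are invented for illustration.

    # Simplified illustration of one predictive-mean-matching draw.
    import numpy as np

    rng = np.random.default_rng(0)

    def pmm_impute(x_obs, y_obs, x_mis, k=5):
        """Impute y for missing cases by matching predicted means to observed donors."""
        X = np.column_stack([np.ones_like(x_obs), x_obs])
        beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)    # simple linear predictor
        pred_obs = X @ beta
        pred_mis = np.column_stack([np.ones_like(x_mis), x_mis]) @ beta
        imputed = []
        for p in pred_mis:
            donors = np.argsort(np.abs(pred_obs - p))[:k]   # k closest observed cases
            imputed.append(y_obs[rng.choice(donors)])       # draw an observed donor value
        return np.array(imputed)

    # Toy data: steps predicted from intensity counts; three cases missing steps.
    counts = rng.uniform(1e5, 6e5, size=50)
    steps = 0.02 * counts + rng.normal(0, 2000, size=50)
    print(pmm_impute(counts, steps, x_mis=np.array([2e5, 3e5, 5e5])))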
Low activation steels welding with PWHT and coating for ITER test blanket modules and DEMO
NASA Astrophysics Data System (ADS)
Aubert, P.; Tavassoli, F.; Rieth, M.; Diegele, E.; Poitevin, Y.
2011-02-01
EUROFER weldability is investigated in support of the European material properties database and TBM manufacturing. Electron beam, hybrid, laser and narrow-gap TIG processes have been carried out on EUROFER-97 steel (thickness up to 40 mm), a reduced-activation ferritic-martensitic steel developed in Europe. These welding processes produce similar results, with high joint coefficients, and are well adapted for minimizing residual distortions. The fusion zones are typically composed of martensite laths with small grain sizes. In the heat-affected zones, martensite grains contain carbide precipitates. High hardness values are measured in all these zones, which, if not tempered, would degrade toughness and creep resistance. PWHT development has led to a one-step PWHT (750 °C/3 h) that has been successfully applied to joints, restoring good material performance. It produces lower distortion levels than a full austenitization PWHT process, which is not really applicable to a complex welded structure such as the TBM. Different tungsten coatings have been successfully processed on EUROFER material, with no real effect on the EUROFER base material microstructure.
DATA QUALITY OBJECTIVES FOR SELECTING WASTE SAMPLES FOR BENCH-SCALE REFORMER TREATABILITY STUDIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
BANNING DL
2011-02-11
This document describes the data quality objectives used to select archived samples located at the 222-S Laboratory for Bench-Scale Reforming testing. The type, quantity, and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluidized bed steam reformer. A determination of the adequacy of the fluidized bed steam reformer process to treat Hanford tank waste is required. The initial step in determining the adequacy of the fluidized bed steam reformer process is to select archived waste samples from the 222-S Laboratory that will be used in bench-scale tests. Analyses of the selected samples will be required to confirm the samples meet the shipping requirements and for comparison to the bench-scale reformer (BSR) test sample selection requirements.
Guidelines for performing systematic reviews in the development of toxicity factors.
Schaefer, Heather R; Myers, Jessica L
2017-12-01
The Texas Commission on Environmental Quality (TCEQ) developed guidance on conducting systematic reviews during the development of chemical-specific toxicity factors. Using elements from publicly available frameworks, the TCEQ systematic review process was developed in order to supplement the existing TCEQ Guidelines for developing toxicity factors (TCEQ Regulatory Guidance 442). The TCEQ systematic review process includes six steps: 1) Problem Formulation; 2) Systematic Literature Review and Study Selection; 3) Data Extraction; 4) Study Quality and Risk of Bias Assessment; 5) Evidence Integration and Endpoint Determination; and 6) Confidence Rating. This document provides guidance on conducting a systematic literature review and integrating evidence from different data streams when developing chemical-specific reference values (ReVs) and unit risk factors (URFs). However, this process can also be modified or expanded to address other questions that would benefit from systematic review practices. The systematic review and evidence integration framework can improve regulatory decision-making processes, increase transparency, minimize bias, improve consistency between different risk assessments, and further improve confidence in toxicity factor development. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Evaluation of low-dose irradiation on microbiological quality of white carrots and string beans
NASA Astrophysics Data System (ADS)
Koike, Amanda C. R.; Santillo, Amanda G.; Rodrigues, Flávio T.; Duarte, Renato C.; Villavicencio, Anna Lucia C. H.
2012-08-01
Minimally processed foods provide the consumer with quality, safety and practicality. However, minimal processing alone does not reduce pathogenic microorganism populations to safe levels. Ionizing radiation used at low doses is effective in maintaining the quality of food, reducing the microbiological load without compromising the nutritional value and sensory properties. The association of minimal processing with irradiation could improve the quality and safety of the product. The purpose of this study was to evaluate the effectiveness of low doses of ionizing radiation in reducing microorganisms in minimally processed foods. The results show that ionizing radiation of minimally processed vegetables can decontaminate them without severe changes in their properties.
Efficient data communication protocols for wireless networks
NASA Astrophysics Data System (ADS)
Zeydan, Engin
In this dissertation, efficient decentralized algorithms are investigated for cost minimization problems in wireless networks. For wireless sensor networks, we investigate the problems of reducing energy consumption and of maximizing throughput separately, using multi-hop aggregation of correlated data. The proposed algorithms exploit data redundancy using a game-theoretic framework. For energy minimization, routes are chosen to minimize the total energy expended by the network using best response dynamics to local data. The cost function used in routing takes into account distance, interference and in-network data aggregation. The proposed energy-efficient correlation-aware routing algorithm significantly reduces the energy consumption in the network and converges iteratively in a finite number of steps. For throughput maximization, we consider both the interference distribution across the network and the correlation between forwarded data when establishing routes. Nodes along each route are chosen to minimize the interference impact in their neighborhood and to maximize in-network data aggregation. The resulting network topology maximizes the global network throughput, and the algorithm is guaranteed to converge in a finite number of steps using best response dynamics. For multiple-antenna wireless ad hoc networks, we present distributed cooperative and regret-matching-based learning schemes for the joint transmit beamformer and power level selection problem for nodes operating in a multi-user interference environment. Total network transmit power is minimized while ensuring a constant received signal-to-interference-and-noise ratio at each receiver. In the cooperative and regret-matching-based power minimization algorithms, transmit beamformers are selected from a predefined codebook to minimize the total power. By selecting transmit beamformers judiciously and performing power adaptation, the cooperative algorithm is shown to converge to a pure-strategy Nash equilibrium with high probability throughout the iterations in the interference-impaired network. On the other hand, the regret-matching learning algorithm is noncooperative and requires a minimum amount of overhead. The proposed cooperative and regret-matching-based distributed algorithms are also compared with centralized solutions through simulation results.
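An abstract sketch of best-response next-hop selection is given below; the toy topology, cost weights and correlation matrix are illustrative assumptions and are far simpler than the game-theoretic cost functions developed in the dissertation.

    # Sketch of best-response dynamics for next-hop routing: each node repeatedly picks
    # the neighbor (closer to the sink) that minimizes a local cost combining transmit
    # energy, congestion at the next hop, and a reward for aggregating correlated data.
    import numpy as np

    rng = np.random.default_rng(0)
    n, sink = 8, 7
    pos = rng.random((n, 2)) * 100            # node positions; node `sink` is the data sink
    corr = rng.random((n, n))                 # pairwise data correlation (aggregation gain)

    def dist(i, j):
        return np.linalg.norm(pos[i] - pos[j])

    def cost(i, j, load):
        # transmit energy + congestion at j - reward for aggregating correlated data
        return dist(i, j) ** 2 + 5.0 * load[j] - 50.0 * corr[i, j]

    next_hop = np.full(n - 1, sink)           # initially every node transmits directly to the sink
    for _ in range(20):                       # best-response iterations
        changed = False
        load = np.bincount(next_hop, minlength=n)
        for i in range(n - 1):
            options = [j for j in range(n)
                       if j != i and (j == sink or dist(j, sink) < dist(i, sink))]
            best = min(options, key=lambda j: cost(i, j, load))
            if best != next_hop[i]:
                next_hop[i], changed = best, True
        if not changed:                       # no node wants to deviate: equilibrium reached
            break
    print(next_hop)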
Xue, Runmiao; Donovan, Ariel; Zhang, Haiting; Ma, Yinfa; Adams, Craig; Yang, John; Hua, Bin; Inniss, Enos; Eichholz, Todd; Shi, Honglan
2018-02-01
When sufficient chlorine is added to achieve breakpoint chlorination in source water containing a high concentration of ammonia during drinking water treatment, high concentrations of disinfection by-products (DBPs) may form. If N-nitrosamine precursors are present, highly toxic N-nitrosamines, primarily N-nitrosodimethylamine (NDMA), may also form. Removing these precursors before disinfection should be a more effective way to minimize the formation of these DBPs. In this study, zeolites and activated carbon were examined for ammonia and N-nitrosamine precursor removal when incorporated into drinking water treatment processes. The test results indicate that Mordenite zeolite can efficiently remove ammonia and five of seven N-nitrosamine precursors in a single-step adsorption test. The practical applicability was evaluated by simulating typical drinking water treatment processes using a six-gang stirring system. The Mordenite zeolite was applied at the lime softening, alum coagulation, and alum coagulation with powdered activated carbon (PAC) sorption steps. While the lime softening process resulted in poor zeolite performance, alum coagulation did not impact ammonia and N-nitrosamine precursor removal. During alum coagulation, more than 67% of ammonia and 70%-100% of N-nitrosamine precursors were removed by Mordenite zeolite (except 3-(dimethylaminomethyl)indole (DMAI) and 4-dimethylaminoantipyrine (DMAP)). PAC effectively removed DMAI and DMAP when added during alum coagulation. A combination of the selected zeolite and PAC efficiently removed ammonia and all seven tested N-nitrosamine precursors (dimethylamine (DMA), ethylmethylamine (EMA), diethylamine (DEA), dipropylamine (DPA), trimethylamine (TMA), DMAP, and DMAI) during the alum coagulation process. Copyright © 2017. Published by Elsevier B.V.
Documet, Jorge; Le, Anh; Liu, Brent; Chiu, John; Huang, HK
2009-01-01
Purpose This paper presents the concept of bridging the gap between diagnostic images and image-assisted surgical treatment through the development of a one-stop multimedia electronic patient record (ePR) system that manages and distributes the real-time multimodality imaging and informatics data that assist the surgeon during all clinical phases of the operation, from planning through the intra-operative procedure to post-operative follow-up. We present the concept of this multimedia ePR for surgery by first focusing on image-assisted minimally invasive spinal surgery as a clinical application. Methods The three clinical phases of the minimally invasive spinal surgery workflow (pre-op, intra-op, and post-op) are discussed. The ePR architecture was developed based on this three-phase workflow and includes the pre-op, intra-op, and post-op modules and four components comprising the input integration unit, the fault-tolerant gateway server, the fault-tolerant ePR server, and the visualization and display. A prototype was built and deployed to a minimally invasive spinal surgery clinical site with user training and support for daily use. Summary A step-by-step approach was introduced to develop a multimedia ePR system for image-assisted minimally invasive spinal surgery that includes images, clinical forms, waveforms, and textual data for planning the surgery, two real-time imaging techniques (digital fluoroscopy, DF, and endoscopic video, Endo), and more than half a dozen live vital signs of the patient during surgery. Clinical implementation experiences and challenges were also discussed. PMID:20033507
Multiwavelength digital holography for polishing tool shape measurement
NASA Astrophysics Data System (ADS)
Lédl, Vít.; Psota, Pavel; Václavík, Jan; Doleček, Roman; Vojtíšek, Petr
2013-09-01
Classical mechano-chemical polishing is still a valuable technique that gives unbeatable results for some types of optical surfaces. For example, optics for high power lasers require minimal subsurface damage, very high cosmetic quality, and low mid-spatial-frequency error. One can hardly achieve this with subaperture polishing. The shape of the polishing tool plays a crucial role in achieving the required form of the optical surface. Often the shape of the polishing tool or pad is not known precisely enough during the manufacturing process. The tool shape is usually premachined and then changes during the polishing procedure. An experienced worker can estimate the shape of the tool indirectly from the shape of the polished element, and can therefore achieve the required shape in a few reasonably long iterative steps; the lack of exact knowledge of the tool shape is thus tolerated. Sometimes this indirect method is not feasible, even when small parts are considered. Moreover, for processes on machines such as planetary (continuous) polishers, an incorrect shape of the polishing pad can extend polishing times dramatically. Every iteration step takes hours. Even worse, the polished piece can be wasted if the pad has a poor shape. The ability to determine the tool shape would be very valuable in these lengthy processes. This was our primary motivation to develop a contactless measurement method for large diffusive surfaces and to demonstrate its usability. The proposed method is based on multiwavelength digital holographic interferometry with phase shifting.
Proposal of a sequential treatment methodology for the safe reuse of oil sludge-contaminated soil.
Mater, L; Sperb, R M; Madureira, L A S; Rosin, A P; Correa, A X R; Radetski, C M
2006-08-25
In this study, sequential steps were used to treat and immobilize oil constituents of an oil sludge-contaminated soil. Initially, the contaminated soil was oxidized by a Fenton-type reaction (13 wt% H2O2; 10 mM Fe2+). The oxidative treatment period of 80 h was carried out under three different pH conditions: 20 h at pH 6.5, 20 h at pH 4.5, and 40 h at pH 3.0. The oxidized contaminated sample (3 kg) was stabilized and solidified for 2 h with clay (1 kg) and lime (2 kg). Finally, this mixture was solidified with sand (2 kg) and Portland cement (4 kg). In order to evaluate the efficiency of the different processes in treating and immobilizing oil contaminants of the oil sludge-contaminated soil, leachability and solubility tests were performed and extracts were analyzed according to the current Brazilian waste regulations. Results showed that the Fenton oxidative process was partially efficient in degrading the oil contaminants in the soil, since residual concentrations of PAH and BTEX compounds were found. Leachability tests showed that clay-lime stabilization/solidification followed by Portland cement stabilization/solidification was efficient in immobilizing the recalcitrant and hazardous constituents of the contaminated soil. These two stabilization/solidification steps are necessary to enhance environmental protection (minimal leachability) and to render the final product economically profitable. The treated waste is safe enough to be used in environmental applications, such as roadbed blocks.
Baronsky-Probst, J; Möltgen, C-V; Kessler, W; Kessler, R W
2016-05-25
Hot melt extrusion (HME) is a well-known process within the plastics and food industries that has been utilized for the past several decades and is increasingly accepted by the pharmaceutical industry for continuous manufacturing. For tamper-resistant formulations of, e.g., opioids, HME is the most efficient production technique. The focus of this study is thus to evaluate the manufacturability of the HME process for tamper-resistant formulations. Parameters such as the specific mechanical energy (SME), as well as the melt pressure and its standard deviation, are important and will be discussed in this study. In the first step, the existing process data are analyzed by means of multivariate data analysis. Key critical process parameters such as feed rate, screw speed, and the concentration of the API in the polymers are identified, and critical quality parameters of the tablet are defined. In the second step, relationships between the critical material, product, and process quality attributes are established by means of Design of Experiments (DoE). The resulting SME and the temperature at the die are essential data points needed to indirectly qualify the degradation of the API, which should be minimal. NIR spectroscopy is used to monitor the material during the extrusion process. In contrast to most applications, in which the probe is directly integrated into the die, the optical sensor is integrated into the cooling line of the strands. This saves costs in probe design and maintenance and increases the robustness of the chemometric models. Finally, a process measurement system is installed to monitor and control all of the critical attributes in real time by means of first principles, DoE models, soft sensor models, and spectroscopic information. Overall, the process is very robust as long as the screw speed is kept low. Copyright © 2015 Elsevier B.V. All rights reserved.
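Specific mechanical energy is the key scalar this record relies on. As a quick orientation, the snippet below computes SME from torque, screw speed, and feed rate using the common definition SME = 2π·N·τ/ṁ; the operating point is a placeholder, not a value from the study.

```python
# Hedged sketch of a specific mechanical energy (SME) calculation for a twin-screw
# extruder. Numbers are illustrative placeholders, not from the paper.
import math

def specific_mechanical_energy(screw_speed_rpm, torque_nm, feed_rate_kg_h):
    """SME in kJ/kg: screw drive power per unit mass throughput."""
    n_rev_s = screw_speed_rpm / 60.0                  # rev/s
    power_w = 2.0 * math.pi * n_rev_s * torque_nm     # W = J/s
    mass_flow_kg_s = feed_rate_kg_h / 3600.0
    return power_w / mass_flow_kg_s / 1000.0          # kJ/kg

# Placeholder operating point: 150 rpm, 8 N·m, 1.5 kg/h.
print(f"SME ≈ {specific_mechanical_energy(150, 8.0, 1.5):.0f} kJ/kg")
```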
Yu, Iris K M; Tsang, Daniel C W; Yip, Alex C K; Chen, Season S; Ok, Yong Sik; Poon, Chi Sun
2016-11-01
This study aimed to transform food waste into a value-added chemical, hydroxymethylfurfural (HMF), and to unravel the tangled effects induced by metal catalysts on each single step of the successive conversion pathway. The results showed that, using cooked rice and bread crust as surrogates for starch-rich food waste, yields of 8.1-9.5% HMF and 44.2-64.8% glucose were achieved over a SnCl4 catalyst. Protons released from metal hydrolysis and acidic by-products provided Brønsted acidity that catalyzes fructose dehydration and hydrolysis of glycosidic bonds. Lewis acid sites of the metals could facilitate both fructose dehydration and glucose isomerization by promoting the rate-limiting internal hydride shift, with the catalytic activity determined by the metal's electronegativity, electron configuration, and charge density. Lewis acid sites of higher valence also enhanced hydrolysis of polysaccharides. However, the metals also catalyzed undesirable polymerization, possibly by polarizing the carbonyl groups of sugars and their derivatives, which should be minimized by process optimization. Copyright © 2016 Elsevier Ltd. All rights reserved.
Novel Passive Clearing Methods for the Rapid Production of Optical Transparency in Whole CNS Tissue.
Woo, Jiwon; Lee, Eunice Yoojin; Park, Hyo-Suk; Park, Jeong Yoon; Cho, Yong Eun
2018-05-08
Since the development of CLARITY, a bioelectrochemical clearing technique that allows for three-dimensional phenotype mapping within transparent tissues, a multitude of novel clearing methodologies, including CUBIC (clear, unobstructed brain imaging cocktails and computational analysis), SWITCH (system-wide control of interaction time and kinetics of chemicals), MAP (magnified analysis of the proteome), and PACT (passive clarity technique), have been established to further expand the existing toolkit for the microscopic analysis of biological tissues. The present study aims to improve upon and optimize the original PACT procedure for an array of intact rodent tissues, including the whole central nervous system (CNS), kidneys, spleen, and whole mouse embryos. Termed psPACT (process-separate PACT) and mPACT (modified PACT), these novel techniques provide highly efficacious means of mapping cell circuitry and visualizing subcellular structures in intact normal and pathological tissues. In the following protocol, we provide a detailed, step-by-step outline of how to achieve maximal tissue clearance with minimal compromise of structural integrity via psPACT and mPACT.
NASA Astrophysics Data System (ADS)
Abdi, Abdi M.; Szu, Harold H.
2003-04-01
With the growing rate of interconnection among computer systems, network security is becoming a real challenge. An Intrusion Detection System (IDS) is designed to protect the availability, confidentiality and integrity of critical network information systems. Today's approach to network intrusion detection involves the use of rule-based expert systems to identify indications of known attacks or anomalies. However, these techniques are less successful in identifying today's attacks. Hackers are perpetually inventing new and previously unanticipated techniques to compromise information infrastructure. This paper proposes a dynamic way of detecting network intruders using time series data. The proposed approach consists of a two-step process. First, an efficient multi-user detection method is obtained, employing the recently introduced complexity minimization approach as a generalization of standard ICA. Second, an unsupervised learning neural network architecture based on Kohonen's Self-Organizing Map is identified for potential functional clustering. These two steps, working together adaptively, provide a pseudo-real-time novelty detection capability to supplement current statistical intrusion detection methodology.
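As a rough illustration of the second step (unsupervised clustering with a Kohonen map), the sketch below trains a tiny self-organizing map on synthetic feature vectors. The grid size, learning schedule, and data are invented for the example and are not the paper's configuration.

```python
# Minimal Kohonen self-organizing map in NumPy, standing in for the clustering
# stage described above. All parameters and data are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # stand-in for traffic feature vectors
grid_h, grid_w, dim = 6, 6, X.shape[1]
weights = rng.normal(size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1).astype(float)

n_epochs = 10
for epoch in range(n_epochs):
    lr = 0.5 * (1.0 - epoch / n_epochs)            # decaying learning rate
    sigma = 2.0 * (1.0 - epoch / n_epochs) + 0.5   # decaying neighborhood radius
    for x in X:
        # Best matching unit (BMU): node whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Gaussian neighborhood around the BMU on the 2-D grid.
        grid_dist = np.linalg.norm(coords - np.array(bmu, dtype=float), axis=-1)
        h = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)          # pull neighborhood toward x

# Map each sample to its BMU; sparsely occupied cells can flag novel traffic.
bmus = [np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                         (grid_h, grid_w)) for x in X]
print(len(set(bmus)), "occupied map cells")
```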
Improved Goldstein Interferogram Filter Based on Local Fringe Frequency Estimation.
Feng, Qingqing; Xu, Huaping; Wu, Zhefeng; You, Yanan; Liu, Wei; Ge, Shiqi
2016-11-23
The quality of an interferogram, which is degraded by various sources of phase noise, greatly affects subsequent InSAR processing steps such as phase unwrapping. For Interferometric SAR (InSAR) geophysical measurements, such as height or displacement, phase filtering is therefore an essential step. In this work, an improved Goldstein interferogram filter is proposed to suppress the phase noise while preserving the fringe edges. First, the proposed adaptive filtering step, performed before frequency estimation, is employed to improve the estimation accuracy. Subsequently, to preserve the fringe characteristics, the estimated fringe frequency in each fixed filtering patch is removed from the original noisy phase. Then, the residual phase is smoothed based on the modified Goldstein filter, with its parameter alpha dependent on both the coherence map and the residual phase frequency. Finally, the filtered residual phase and the removed fringe frequency are combined to generate the filtered interferogram, minimizing the loss of signal while reducing the noise level. The effectiveness of the proposed method is verified by experimental results based on both simulated and real data.
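To make the alpha-controlled spectral weighting concrete, here is a small sketch of Goldstein-style patch filtering of a complex interferogram. The patch size, smoothing kernel, alpha value, and toy interferogram are illustrative defaults, not the paper's parameters or its fringe-frequency-removal refinement.

```python
# Sketch of Goldstein-style patch filtering of a complex interferogram.
# Parameters and test data are illustrative only.
import numpy as np

def goldstein_patch_filter(ifg, alpha=0.6, patch=32, smooth=3):
    """Filter a complex interferogram patch-by-patch (no overlap, for brevity)."""
    out = np.zeros_like(ifg, dtype=complex)
    k = np.ones((smooth, smooth)) / smooth**2          # box kernel for |spectrum|
    for i in range(0, ifg.shape[0] - patch + 1, patch):
        for j in range(0, ifg.shape[1] - patch + 1, patch):
            block = ifg[i:i+patch, j:j+patch]
            spec = np.fft.fft2(block)
            mag = np.abs(spec)
            # Smooth the magnitude spectrum (circular convolution via FFT).
            pad = np.zeros_like(mag); pad[:smooth, :smooth] = k
            mag_s = np.real(np.fft.ifft2(np.fft.fft2(mag) * np.fft.fft2(pad)))
            mag_s = np.maximum(mag_s, 1e-12)
            # Goldstein weighting: emphasize dominant fringe frequencies.
            spec_f = spec * (mag_s / mag_s.max()) ** alpha
            out[i:i+patch, j:j+patch] = np.fft.ifft2(spec_f)
    return out

# Toy interferogram: a linear fringe plus circular-Gaussian noise.
y, x = np.mgrid[0:128, 0:128]
ifg = np.exp(1j * 0.3 * x) + 0.5 * (np.random.randn(128, 128)
                                    + 1j * np.random.randn(128, 128))
filtered = goldstein_patch_filter(ifg)
print(filtered.shape, np.angle(filtered).std() < np.angle(ifg).std())
```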
Adaptive color demosaicing and false color removal
NASA Astrophysics Data System (ADS)
Guarnera, Mirko; Messina, Giuseppe; Tomaselli, Valeria
2010-04-01
Color interpolation solutions drastically influence the quality of the whole image generation pipeline, so they must guarantee the rendering of high quality pictures by avoiding typical artifacts such as blurring, zipper effects, and false colors. Moreover, demosaicing should avoid emphasizing typical artifacts of real sensor data, such as noise and the green imbalance effect, which would be further accentuated by the subsequent steps of the processing pipeline. We propose a new adaptive algorithm that decides which interpolation technique to apply to each pixel, according to an analysis of its neighborhood. Edges are effectively interpolated through a directional filtering approach that interpolates the missing colors, selecting the suitable filter depending on edge orientation. Regions close to edges are interpolated through a simpler demosaicing approach. Flat regions are identified and low-pass filtered to eliminate residual noise and to minimize the annoying green imbalance effect. Finally, an effective false color removal algorithm is used as a postprocessing step to eliminate residual color errors. The experimental results show how sharp edges are preserved, whereas undesired zipper effects are reduced, improving the edge resolution and obtaining superior image quality.
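To show the flavor of edge-oriented selection described above, here is a toy directional green interpolation at a single non-green CFA site. The gradient test and weights are generic textbook choices (Hamilton-Adams style), not the authors' actual filters, and the test mosaic is a synthetic stand-in rather than a real Bayer capture.

```python
# Toy directional green interpolation at a non-green CFA site.
# Filters and the synthetic "mosaic" are illustrative assumptions.
import numpy as np

def interpolate_green_at(cfa, r, c):
    """Estimate G at a non-green CFA site by choosing the smoother direction."""
    # Horizontal / vertical gradients from neighboring samples plus the
    # second difference of the same-color channel.
    dh = abs(cfa[r, c-1] - cfa[r, c+1]) + abs(2*cfa[r, c] - cfa[r, c-2] - cfa[r, c+2])
    dv = abs(cfa[r-1, c] - cfa[r+1, c]) + abs(2*cfa[r, c] - cfa[r-2, c] - cfa[r+2, c])
    gh = (cfa[r, c-1] + cfa[r, c+1]) / 2 + (2*cfa[r, c] - cfa[r, c-2] - cfa[r, c+2]) / 4
    gv = (cfa[r-1, c] + cfa[r+1, c]) / 2 + (2*cfa[r, c] - cfa[r-2, c] - cfa[r+2, c]) / 4
    if dh < dv:        # horizontal direction is smoother: interpolate along it
        return gh
    if dv < dh:        # vertical direction is smoother
        return gv
    return (gh + gv) / 2   # flat or ambiguous region: average both estimates

# Synthetic scene with a vertical edge; interpolating along the edge keeps it sharp.
img = np.zeros((8, 8)); img[:, 4:] = 100.0
print(interpolate_green_at(img, 4, 4))   # expected: 100.0 (vertical direction chosen)
```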
Isothermal Amplification Methods for the Detection of Nucleic Acids in Microfluidic Devices
Zanoli, Laura Maria; Spoto, Giuseppe
2012-01-01
Diagnostic tools for biomolecular detection need to fulfill specific requirements in terms of sensitivity, selectivity and high throughput in order to widen their applicability and to minimize the cost of the assay. Nucleic acid amplification is a key step in DNA detection assays. It contributes to improving the assay sensitivity by enabling the detection of a limited number of target molecules. The use of microfluidic devices to miniaturize amplification protocols reduces the required sample volume and the analysis times, and offers new possibilities for process automation and integration in a single device. The vast majority of miniaturized systems for nucleic acid analysis exploit the polymerase chain reaction (PCR) amplification method, which requires repeated cycles of three or two temperature-dependent steps during the amplification of the nucleic acid target sequence. In contrast, low-temperature isothermal amplification methods have no need for thermal cycling, thus requiring simplified microfluidic device features. Here, the use of miniaturized analysis systems employing isothermal amplification reactions for nucleic acid amplification is discussed. PMID:25587397
The Numerical Simulation of the Shock Wave of Coal Gas Explosions in Gas Pipe*
NASA Astrophysics Data System (ADS)
Chen, Zhenxing; Hou, Kepeng; Chen, Longwei
2018-03-01
For problems involving large deformations and vortices, Eulerian and Lagrangian methods each have advantages and disadvantages. In this paper we adopt a fuzzy interface method (volume of fluid). The gas satisfies the conservation equations of mass, momentum, and energy. Based on explosion theory and three-dimensional fluid dynamics, and using the unsteady, compressible, inviscid hydrodynamic equations and equations of state, this paper accounts for the effect of the pressure gradient on velocity, mass and energy in the Lagrange step by the finite difference method. To minimize transport errors of material, energy and volume on the finite difference mesh, material transport is also treated in the Euler step. Programmed with Fortran PowerStation 4.0 and visualized with independently developed software, we carry out a numerical simulation of a gas explosion for a specific pipeline structure, monitor the pressure change at key points in the flow field, and reproduce the shock wave propagation of the gas explosion in the pipeline, from the initial development of the flame through the acceleration of the shock wave. This offers a useful reference for coal gas explosion accident investigation and safety precautions.
Heintz, Søren; Börner, Tim; Ringborg, Rolf H; Rehn, Gustav; Grey, Carl; Nordblad, Mathias; Krühne, Ulrich; Gernaey, Krist V; Adlercreutz, Patrick; Woodley, John M
2017-03-01
An experimental platform based on scaled-down unit operations combined in a plug-and-play manner enables easy and highly flexible testing of advanced biocatalytic process options such as in situ product removal (ISPR) process strategies. In such a platform, it is possible to compartmentalize different process steps while operating it as a combined system, giving the possibility to test and characterize the performance of novel process concepts and biocatalysts with minimal influence of inhibitory products. Here the capabilities of performing process development by applying scaled-down unit operations are highlighted through a case study investigating the asymmetric synthesis of 1-methyl-3-phenylpropylamine (MPPA) using ω-transaminase, an enzyme in the sub-family of amino transferases (ATAs). An on-line HPLC system was applied to avoid manual sample handling and to semi-automatically characterize ω-transaminases in a scaled-down packed-bed reactor (PBR) module, showing MPPA as a strong inhibitor. To overcome the inhibition, a two-step liquid-liquid extraction (LLE) ISPR concept was tested using scaled-down unit operations combined in a plug-and-play manner. Through the tested ISPR concept, it was possible to continuously feed the main substrate benzylacetone (BA) and extract the main product MPPA throughout the reaction, thereby overcoming the challenges of low substrate solubility and product inhibition. The tested ISPR concept achieved a product concentration of 26.5 g MPPA/L, a purity of up to 70% (g MPPA/g total), and a recovery in the range of 80% (mol/mol) of MPPA in 20 h, with the possibility to increase the concentration, purity, and recovery further. Biotechnol. Bioeng. 2017;114: 600-609. © 2016 Wiley Periodicals, Inc.
Synthetic spider silk production on a laboratory scale.
Hsia, Yang; Gnesa, Eric; Pacheco, Ryan; Kohler, Kristin; Jeffery, Felicia; Vierra, Craig
2012-07-18
As society progresses and resources become scarcer, it is becoming increasingly important to cultivate new technologies that engineer next generation biomaterials with high performance properties. The development of these new structural materials must be rapid, cost-efficient and involve processing methodologies and products that are environmentally friendly and sustainable. Spiders spin a multitude of different fiber types with diverse mechanical properties, offering a rich source of next generation engineering materials for biomimicry that rival the best manmade and natural materials. Since the collection of large quantities of natural spider silk is impractical, synthetic silk production has the ability to provide scientists with access to an unlimited supply of threads. Therefore, if the spinning process can be streamlined and perfected, artificial spider fibers have potential for use in a broad range of applications, including body armor, surgical sutures, ropes and cables, tires, strings for musical instruments, and composites for aviation and aerospace technology. In order to advance the synthetic silk production process and to yield fibers that display low variance in their material properties from spin to spin, we developed a wet-spinning protocol that integrates expression of recombinant spider silk proteins in bacteria, purification and concentration of the proteins, followed by fiber extrusion and a mechanical post-spin treatment. This is the first visual representation that reveals a step-by-step process to spin and analyze artificial silk fibers on a laboratory scale. It also provides details to minimize the introduction of variability among fibers spun from the same spinning dope. Collectively, these methods will propel the process of artificial silk production, leading to higher quality fibers that surpass natural spider silks.
Risk assessment and prioritization
DOT National Transportation Integrated Search
2003-01-01
The first step to take in order to prevent and minimize the dangers of disasters or attacks is risk assessment, followed closely by prioritization. This article discusses key vulnerability and risk assessments that the Volpe Center has conducted in suppo...
Ionization-Enhanced Decomposition of 2,4,6-Trinitrotoluene (TNT) Molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Bin; Wright, David; Cliffel, David
2011-01-01
The unimolecular decomposition reaction of TNT can in principle be used to design ways to either detect or remove TNT from the environment. Here, we report the results of a density functional theory study of possible ways to lower the reaction barrier for this decomposition process by ionization, so that decomposition and/or detection can occur at room temperature. We find that ionizing TNT lowers the reaction barrier for the initial step of this decomposition. We further show that a similar effect can occur if a positive moiety is bound to the TNT molecule. The positive charge produces a pronounced electron redistribution and dipole formation in TNT with minimal charge transfer from TNT to the positive moiety.
Phase retrieval algorithm for JWST Flight and Testbed Telescope
NASA Astrophysics Data System (ADS)
Dean, Bruce H.; Aronstein, David L.; Smith, J. Scott; Shiri, Ron; Acton, D. Scott
2006-06-01
An image-based wavefront sensing and control algorithm for the James Webb Space Telescope (JWST) is presented. The algorithm heritage is discussed in addition to implications for algorithm performance dictated by NASA's Technology Readiness Level (TRL) 6. The algorithm uses feedback through an adaptive diversity function to avoid the need for phase-unwrapping post-processing steps. Algorithm results are demonstrated using JWST Testbed Telescope (TBT) commissioning data and the accuracy is assessed by comparison with interferometer results on a multi-wave phase aberration. Strategies for minimizing aliasing artifacts in the recovered phase are presented and orthogonal basis functions are implemented for representing wavefronts in irregular hexagonal apertures. Algorithm implementation on a parallel cluster of high-speed digital signal processors (DSPs) is also discussed.
Recovery of Background Structures in Nanoscale Helium Ion Microscope Imaging.
Carasso, Alfred S; Vladár, András E
2014-01-01
This paper discusses a two-step enhancement technique applicable to noisy Helium Ion Microscope images in which background structures are not easily discernible due to a weak signal. The method is based on a preliminary adaptive histogram equalization, followed by 'slow motion' low-exponent Lévy fractional diffusion smoothing. This combined approach is unexpectedly effective, resulting in a companion enhanced image in which background structures are rendered much more visible and noise is significantly reduced, all with minimal loss of image sharpness. The method also provides useful enhancements of scanning charged-particle microscopy images obtained by composing multiple drift-corrected 'fast scan' frames. The paper includes software routines, written in Interactive Data Language (IDL), that can perform the above image processing tasks.
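As a rough Python analogue of the two-step scheme (the paper's own routines are written in IDL and are not reproduced here), the sketch below applies adaptive histogram equalization from scikit-image and then a spectral implementation of low-exponent fractional (Lévy-type) diffusion. The exponent, diffusion time, and test image are placeholders.

```python
# Rough analogue of: (1) adaptive histogram equalization, (2) low-exponent
# fractional diffusion smoothing solved spectrally. Parameters are illustrative.
import numpy as np
from skimage import data, exposure

def fractional_diffusion(img, alpha=1.2, t=0.5):
    """Solve u_t = -(-Laplacian)^(alpha/2) u for time t via the FFT."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    k_mag = 2 * np.pi * np.sqrt(kx**2 + ky**2)
    decay = np.exp(-t * k_mag**alpha)            # Lévy-type spectral attenuation
    return np.real(np.fft.ifft2(np.fft.fft2(img) * decay))

# Stand-in noisy image (the camera test image plus Gaussian noise).
noisy = data.camera() / 255.0 + 0.05 * np.random.randn(512, 512)
noisy = np.clip(noisy, 0.0, 1.0)

equalized = exposure.equalize_adapthist(noisy, clip_limit=0.02)   # step 1
smoothed = fractional_diffusion(equalized, alpha=1.2, t=0.5)      # step 2
print(smoothed.shape, smoothed.std() < equalized.std())
```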
International Launch Vehicle Selection for Interplanetary Travel
NASA Technical Reports Server (NTRS)
Ferrone, Kristine; Nguyen, Lori T.
2010-01-01
In developing a mission strategy for interplanetary travel, the first step is to consider launch capabilities which provide the basis for fundamental parameters of the mission. This investigation focuses on the numerous launch vehicles of various characteristics available and in development internationally with respect to upmass, launch site, payload shroud size, fuel type, cost, and launch frequency. This presentation will describe launch vehicles available and in development worldwide, then carefully detail a selection process for choosing appropriate vehicles for interplanetary missions focusing on international collaboration, risk management, and minimization of cost. The vehicles that fit the established criteria will be discussed in detail with emphasis on the specifications and limitations related to interplanetary travel. The final menu of options will include recommendations for overall mission design and strategy.
Radiation exposure of patient and surgeon in minimally invasive kidney stone surgery.
Demirci, A; Raif Karabacak, O; Yalçınkaya, F; Yiğitbaşı, O; Aktaş, C
2016-05-01
Percutaneous nephrolithotomy (PNL) and retrograde intrarenal surgery (RIRS) are the standard treatments used in the endoscopic treatment of kidney stones, depending on the location and the size of the stone. The purpose of the study was to show the radiation exposure difference between the minimally invasive techniques by synchronously measuring the amount of radiation the patients and the surgeon received in each session, which makes our study unique. This is a prospective study which included 20 patients who underwent PNL and 45 patients who underwent RIRS in our clinic between June 2014 and October 2014. The surgeries were assessed by dividing them into three steps: step 1: the access sheath or ureter catheter placement; step 2: lithotripsy and collection of fragments; and step 3: DJ catheter or re-entry tube insertion. For the PNL and RIRS groups, mean stone sizes were 30 mm (range 16-60) and 12 mm (range 7-35); mean fluoroscopy times were 337 s (range 200-679) and 37 s (range 7-351); and total radiation exposures were 142 mBq (44.7 to 221) and 4.4 mBq (0.2 to 30), respectively. Fluoroscopy times and radiation exposures at each step were found to be higher in the PNL group compared to the RIRS group. When assessed within each technique, the fluoroscopy time and radiation exposure were stable in RIRS, while in PNL the radiation exposure was highest in step 1 and lowest in step 3. When assessed for the 19 PNL patients and the 12 RIRS patients who had stone sizes ≥2 cm, the fluoroscopy time in step 1 and the radiation exposure in steps 1 and 2 were found to be higher in the PNL group than in the RIRS group (P<0.001). Although there is a need for more prospective randomized studies, RIRS appears to be a viable alternative to PNL because it has short fluoroscopy time and the radiation exposure is low in every step. 4. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Waldner, M H; Halter, R; Sigg, A; Brosch, B; Gehrmann, H J; Keunecke, M
2013-02-01
Traditionally, EfW (Energy from Waste) plants apply a reciprocating grate to combust waste fuel. An integrated steam generator recovers the heat of combustion and converts it to steam for use in a steam turbine/generator set. This is followed by an array of flue gas cleaning technologies to meet regulatory limitations. Modern combustion applies a two-step method using primary air to fuel the combustion process on the grate. This generates a complex mixture of pyrolysis gases, combustion gases and unused combustion air. The post-combustion step in the first pass of the boiler above the grate is intended to "clean up" this mixture by oxidizing unburned gases with secondary air. This paper describes modifications to the combustion process that minimize exhaust gas volumes and the generation of noxious gases, thereby improving the overall thermal efficiency of the EfW plant. The resulting process can be coupled with an innovative SNCR (Selective Non-Catalytic Reduction) technology to form a clean and efficient solid waste combustion system. Measurements immediately above the grate show that gas compositions along the grate vary from 10% CO, 5% H2 and 0% O2 to essentially unused "pure" air, in good agreement with results from a mathematical model. Introducing these diverse gas compositions to the post-combustion process would overwhelm its ability to process all of the gas fractions in an optimal manner. Inserting an intermediate step aimed at homogenizing the mixture above the grate has been shown to significantly improve the quality of combustion, allowing for optimized process parameters. These measures also resulted in reduced formation of NOx (nitrogen oxides) due to the lower oxygen level at which the combustion process was run (2.6 vol% O2,wet instead of 6.0 vol% O2,wet). This reduction establishes optimal conditions for the DyNOR™ (Dynamic NOx Reduction) NOx reduction process. This innovative SNCR technology is adapted to situations typically encountered in solid fuel combustion. DyNOR™ measures temperature in small furnace segments and delivers the reducing reagent to the exact location where it is most effective. The DyNOR™ distributor reacts precisely and dynamically to rapid changes in combustion conditions, resulting in very low NOx emissions from the stack. Copyright © 2012 Elsevier Ltd. All rights reserved.
Nasreddine, Lara; Tamim, Hani; Itani, Leila; Nasrallah, Mona P; Isma'eel, Hussain; Nakhoul, Nancy F; Abou-Rizk, Joana; Naja, Farah
2018-01-01
To (i) estimate the consumption of minimally processed, processed and ultra-processed foods in a sample of Lebanese adults; (ii) explore patterns of intakes of these food groups; and (iii) investigate the association of the derived patterns with cardiometabolic risk. Cross-sectional survey. Data collection included dietary assessment using an FFQ and biochemical, anthropometric and blood pressure measurements. Food items were categorized into twenty-five groups based on the NOVA food classification. The contribution of each food group to total energy intake (TEI) was estimated. Patterns of intakes of these food groups were examined using exploratory factor analysis. Multivariate logistic regression analysis was used to evaluate the associations of derived patterns with cardiometabolic risk factors. Greater Beirut area, Lebanon. Adults ≥18 years (n 302) with no prior history of chronic diseases. Of TEI, 36·53 and 27·10 % were contributed by ultra-processed and minimally processed foods, respectively. Two dietary patterns were identified: the 'ultra-processed' and the 'minimally processed/processed'. The 'ultra-processed' consisted mainly of fast foods, snacks, meat, nuts, sweets and liquor, while the 'minimally processed/processed' consisted mostly of fruits, vegetables, legumes, breads, cheeses, sugar and fats. Participants in the highest quartile of the 'minimally processed/processed' pattern had significantly lower odds for metabolic syndrome (OR=0·18, 95 % CI 0·04, 0·77), hyperglycaemia (OR=0·25, 95 % CI 0·07, 0·98) and low HDL cholesterol (OR=0·17, 95 % CI 0·05, 0·60). The study findings may be used for the development of evidence-based interventions aimed at encouraging the consumption of minimally processed foods.
Evaluation of Bosch-Based Systems Using Non-Traditional Catalysts at Reduced Temperatures
NASA Technical Reports Server (NTRS)
Abney, Morgan B.; Mansell, J. Matthew
2011-01-01
Oxygen and water resupply make open loop atmosphere revitalization (AR) systems unfavorable for long-term missions beyond low Earth orbit. Crucial to closing the AR loop are carbon dioxide reduction systems with low mass and volume, minimal power requirements, and minimal consumables. For this purpose, NASA is exploring using Bosch-based systems. The Bosch process is favorable over state-of-the-art Sabatier-based processes due to complete loop closure. However, traditional operation of the Bosch required high reaction temperatures, high recycle rates, and significant consumables in the form of catalyst resupply due to carbon fouling. A number of configurations have been proposed for next-generation Bosch systems. First, alternative catalysts (catalysts other than steel wool) can be used in a traditional single-stage Bosch reactor to improve reaction kinetics and increase carbon packing density. Second, the Bosch reactor may be split into separate stages wherein the first reactor stage is dedicated to carbon monoxide and water formation via the reverse water-gas shift reaction and the second reactor stage is dedicated to carbon formation. A series system will enable maximum efficiency of both steps of the Bosch reaction, resulting in optimized operation and maximum carbon formation rate. This paper details the results of testing of both single-stage and two-stage Bosch systems with alternative catalysts at reduced temperatures. These results are compared to a traditional Bosch system operated with a steel wool catalyst.
Trash-to-Gas: Converting Space Trash into Useful Products
NASA Technical Reports Server (NTRS)
Caraccio, Anne J.; Hintze, Paul E.
2013-01-01
NASA's Logistical Reduction and Repurposing (LRR) project is a collaborative effort in which NASA is determined to reduce total logistical mass through reduction, reuse and recycling of various wastes and components of long duration space missions and habitats. LRR is focusing on four distinct advanced areas of study: Advanced Clothing System, Logistics-to-Living, Heat Melt Compactor and Trash to Supply Gas (TtSG). The objective of TtSG is to develop technologies that convert material waste, human waste and food waste into high-value products. High-value products include life support oxygen and water, rocket fuels, raw material production feedstocks, and other energy sources. There are multiple pathways for converting waste to products involving single or multi-step processes. This paper discusses thermal oxidation methods of converting waste to methane. Different wastes, including food, food packaging, Maximum Absorbent Garments (MAGs), human waste simulants, and cotton washcloths have been evaluated in a thermal degradation reactor under conditions promoting pyrolysis, gasification or incineration. The goal was to evaluate the degradation processes at varying temperatures and ramp cycles and to maximize production of desirable products and minimize high molecular weight hydrocarbon (tar) production. Catalytic cracking was also evaluated to minimize tar production. The quantities of CO2, CO, CH4, and H2O were measured under the different thermal degradation conditions. The conversion efficiencies of these products were used to determine the best methods for producing desired products.
Trash to Gas: Converting Space Trash into Useful Products
NASA Technical Reports Server (NTRS)
Nur, Mononita
2013-01-01
NASA's Logistical Reduction and Repurposing (LRR) project is a collaborative effort in which NASA is determined to reduce total logistical mass through reduction, reuse and recycling of various wastes and components of long duration space missions and habitats. LRR is focusing on four distinct advanced areas of study: Advanced Clothing System, Logistics-to-Living, Heat Melt Compactor and Trash to Supply Gas (TtSG). The objective of TtSG is to develop technologies that convert material waste, human waste and food waste into high-value products. High-value products include life support oxygen and water, rocket fuels, raw material production feedstocks, and other energy sources. There are multiple pathways for converting waste to products involving single or multi-step processes. This paper discusses thermal oxidation methods of converting waste to methane. Different wastes, including food, food packaging, Maximum Absorbent Garments (MAGs), human waste simulants, and cotton washcloths have been evaluated in a thermal degradation reactor under conditions promoting pyrolysis, gasification or incineration. The goal was to evaluate the degradation processes at varying temperatures and ramp cycles and to maximize production of desirable products and minimize high molecular weight hydrocarbon (tar) production. Catalytic cracking was also evaluated to minimize tar production. The quantities of CO2, CO, CH4, and H2O were measured under the different thermal degradation conditions. The conversion efficiencies of these products were used to determine the best methods for producing desired products.
Rienzi, L; Bariani, F; Dalla Zorza, M; Albani, E; Benini, F; Chamayou, S; Minasi, M G; Parmegiani, L; Restelli, L; Vizziello, G; Costa, A Nanni
2017-08-01
Can traceability of gametes and embryos be ensured during IVF? The use of a simple and comprehensive traceability system that includes the most susceptible phases during the IVF process minimizes the risk of mismatches. Mismatches in IVF are very rare but unfortunately possible, with dramatic consequences for both patients and health care professionals. Traceability is thus a fundamental aspect of the treatment. A clear process of patient and cell identification involving witnessing protocols has to be in place in every unit. To identify potential failures in the traceability process and to develop strategies to mitigate the risk of mismatches, failure mode and effects analysis (FMEA) has previously been used effectively. The FMEA approach is, however, a subjective analysis, strictly related to specific protocols, and thus the results are not always widely applicable. To reduce subjectivity and to obtain a widespread comprehensive protocol of traceability, a multicentre, centrally coordinated FMEA was performed. Seven representative Italian centres (three public and four private) were selected. The study had a duration of 21 months (from April 2015 to December 2016) and was centrally coordinated by a team of experts: a risk analysis specialist, an expert embryologist and a specialist in human factors. The principal investigators of each centre were first instructed about proactive risk assessment and FMEA methodology. A multidisciplinary team to perform the FMEA analysis was then formed in each centre. After mapping the traceability process, each team identified the possible causes of mistakes in their protocol. A risk priority number (RPN) for each identified potential failure mode was calculated. The results of the FMEA analyses were centrally investigated and consistent corrective measures suggested. The teams performed new FMEA analyses after the recommended implementations. In each centre, this study involved: the laboratory director, the Quality Control & Quality Assurance officer, embryologist(s), gynaecologist(s), nurse(s) and administration. The FMEA analyses were performed according to the Joint Commission International. The FMEA teams identified seven main process phases: oocyte collection, sperm collection, gamete processing, insemination, embryo culture, embryo transfer and gamete/embryo cryopreservation. A mean of 19.3 (SD ± 5.8) associated process steps and 41.9 (SD ± 12.4) possible failure modes were recognized per centre. An RPN ≥15 was calculated in a mean of 6.4 steps (range 2-12, SD ± 3.60). A total of 293 failure modes were centrally analysed, 45 of which were considered at medium/high risk. After implementation of the consistent corrective measures and re-evaluation, a significant reduction in the RPNs in all centres (RPN <15 for all steps) was observed. A simple and comprehensive traceability system was designed as the result of the seven FMEA analyses. The validity of FMEA is in general questionable due to the subjectivity of the judgments. The design of this study has, however, minimized this risk by introducing external experts for the analysis of the FMEA results. Specific situations such as sperm/oocyte donation, import/export and pre-implantation genetic testing were not taken into consideration. Finally, this study is limited to the analysis of failure modes that may lead to mismatches; other possible procedural mistakes are not accounted for.
Every single IVF centre should have a clear and reliable protocol for identification of patients and traceability of cells during manipulation. The results of this study can support IVF groups in better recognizing critical steps in their protocols, understanding the identification and witnessing process, and in turn enhancing safety by introducing validated corrective measures. This study was designed by the Italian Society of Embryology Reproduction and Research (SIERR) and funded by the Italian National Transplant Centre (CNT) of the Italian National Institute of Health (ISS). The authors have no conflicts of interest. N/A. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
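The risk scoring in this record follows the standard FMEA recipe, in which a risk priority number is the product of severity, occurrence, and detection scores; the short sketch below applies that generic recipe together with the abstract's ≥15 action threshold to made-up failure modes (the individual scores are illustrative, not the study's data).

```python
# Hypothetical FMEA scoring table; only the generic RPN formula (severity x
# occurrence x detection) and the >=15 action threshold mirror the record above.
failure_modes = [
    {"step": "oocyte collection", "mode": "tube mislabelled",   "S": 5, "O": 1, "D": 3},
    {"step": "insemination",      "mode": "wrong sperm sample", "S": 5, "O": 1, "D": 4},
    {"step": "embryo culture",    "mode": "dish swapped",       "S": 5, "O": 1, "D": 2},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]
    fm["action_needed"] = fm["RPN"] >= 15    # threshold used in the study

for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
    print(f"{fm['step']:18s} {fm['mode']:20s} RPN={fm['RPN']:3d} "
          f"{'-> corrective measure' if fm['action_needed'] else ''}")
```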
How many steps/day are enough? For adults.
Tudor-Locke, Catrine; Craig, Cora L; Brown, Wendy J; Clemes, Stacy A; De Cocker, Katrien; Giles-Corti, Billie; Hatano, Yoshiro; Inoue, Shigeru; Matsudo, Sandra M; Mutrie, Nanette; Oppert, Jean-Michel; Rowe, David A; Schmidt, Michael D; Schofield, Grant M; Spence, John C; Teixeira, Pedro J; Tully, Mark A; Blair, Steven N
2011-07-28
Physical activity guidelines from around the world are typically expressed in terms of frequency, duration, and intensity parameters. Objective monitoring using pedometers and accelerometers offers a new opportunity to measure and communicate physical activity in terms of steps/day. Various step-based versions or translations of physical activity guidelines are emerging, reflecting public interest in such guidance. However, there appears to be a wide discrepancy in the exact values that are being communicated. It makes sense that step-based recommendations should be harmonious with existing evidence-based public health guidelines that recognize that "some physical activity is better than none" while maintaining a focus on time spent in moderate-to-vigorous physical activity (MVPA). Thus, the purpose of this review was to update our existing knowledge of "How many steps/day are enough?", and to inform step-based recommendations consistent with current physical activity guidelines. Normative data indicate that healthy adults typically take between 4,000 and 18,000 steps/day, and that 10,000 steps/day is reasonable for this population, although there are notable "low active populations." Interventions demonstrate incremental increases on the order of 2,000-2,500 steps/day. The results of seven different controlled studies demonstrate that there is a strong relationship between cadence and intensity. Further, despite some inter-individual variation, 100 steps/minute represents a reasonable floor value indicative of moderate intensity walking. Multiplying this cadence by 30 minutes (i.e., typical of a daily recommendation) produces a minimum of 3,000 steps that is best used as a heuristic (i.e., guiding) value, but these steps must be taken over and above habitual activity levels to be a true expression of free-living steps/day that also includes recommendations for minimal amounts of time in MVPA. Computed steps/day translations of time in MVPA that also include estimates of habitual activity levels equate to 7,100 to 11,000 steps/day. A direct estimate of minimal amounts of MVPA accumulated in the course of objectively monitored free-living behaviour is 7,000-8,000 steps/day. A scale that spans a wide range of incremental increases in steps/day and is congruent with public health recognition that "some physical activity is better than none," yet still incorporates step-based translations of recommended amounts of time in MVPA may be useful in research and practice. The full range of users (researchers to practitioners to the general public) of objective monitoring instruments that provide step-based outputs require good reference data and evidence-based recommendations to be able to design effective health messages congruent with public health physical activity guidelines, guide behaviour change, and ultimately measure, track, and interpret steps/day.
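The cadence-based translation in this abstract is simple arithmetic; the snippet below reproduces it (30 minutes at 100 steps/minute added on top of a habitual baseline). The 5,000-step baseline is an illustrative placeholder rather than a figure from the review, while 8,000 and 11,000 fall within the abstract's 7,100-11,000 steps/day range.

```python
# Step-based translation of an MVPA guideline, following the arithmetic in the
# abstract: minutes of MVPA at a moderate cadence on top of habitual steps.
# The 5,000-step habitual baseline is a placeholder assumption.
def daily_step_target(mvpa_minutes=30, cadence_steps_per_min=100,
                      habitual_steps=5000):
    return habitual_steps + mvpa_minutes * cadence_steps_per_min

print(daily_step_target())                      # 8,000 steps/day with the defaults
print(daily_step_target(habitual_steps=8000))   # higher baseline -> 11,000 steps/day
```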
Nanotip Carpets as Antireflection Surfaces
NASA Technical Reports Server (NTRS)
Bae, Youngsam; Mobasser, Sohrab; Manohara, Harish; Lee, Choonsup
2008-01-01
Carpet-like random arrays of metal-coated silicon nanotips have been shown to be effective as antireflection surfaces. Now undergoing development for incorporation into Sun sensors that would provide guidance for robotic exploratory vehicles on Mars, nanotip carpets of this type could also have many uses on Earth as antireflection surfaces in instruments that handle or detect ultraviolet, visible, or infrared light. In the original Sun-sensor application, what is required is an array of 50-micron-diameter apertures on what is otherwise an opaque, minimally reflective surface, as needed to implement a miniature multiple-pinhole camera. The process for fabrication of an antireflection nanotip carpet for this application (see Figure 1) includes, and goes somewhat beyond, the process described in A New Process for Fabricating Random Silicon Nanotips (NPO-40123), NASA Tech Briefs, Vol. 28, No. 1 (November 2004), page 62. In the first step, which is not part of the previously reported process, photolithography is performed to deposit etch masks to define the 50-micron apertures on a silicon substrate. In the second step, which is part of the previously reported process, the non-masked silicon area between the apertures is subjected to reactive ion etching (RIE) under a special combination of conditions that results in the growth of fluorine-based compounds in randomly distributed formations, known in the art as "polymer RIE grass," that have dimensions of the order of microns. The polymer RIE grass formations serve as microscopic etch masks during the next step, in which deep reactive ion etching (DRIE) is performed. What remains after DRIE is the carpet of nanotips, which are high-aspect-ratio peaks, the tips of which have radii of the order of nanometers. Next, the nanotip array is evaporatively coated with Cr/Au to enhance the absorption of light (more specifically, infrared light in the Sun-sensor application). The photoresist etch masks protecting the apertures are then removed by dipping the substrate into acetone. Finally, for the Sun-sensor application, the back surface of the substrate is coated with a 57-nm-thick layer of Cr for attenuation of sunlight.
More realistic power estimation for new user, active comparator studies: an empirical example.
Gokhale, Mugdha; Buse, John B; Pate, Virginia; Marquis, M Alison; Stürmer, Til
2016-04-01
Pharmacoepidemiologic studies are often expected to be sufficiently powered to study rare outcomes, but there is a sequential loss of power as study design options minimizing bias are implemented. We illustrate this using a study comparing pancreatic cancer incidence after initiating dipeptidyl-peptidase-4 inhibitors (DPP-4i) versus thiazolidinediones or sulfonylureas. We identified Medicare beneficiaries with at least one claim of DPP-4i or comparators during 2007-2009 and then applied the following steps: (i) exclude prevalent users, (ii) require a second prescription of the same drug, (iii) exclude prevalent cancers, (iv) exclude patients aged <66 years and (v) censor for treatment changes during follow-up. Power to detect hazard ratios ≥2.0 (an effect measure strongly driven by the number of events), estimated after step 5, was compared with the naïve power estimated prior to step 1. There were 19,388 and 28,846 DPP-4i and thiazolidinedione initiators during 2007-2009. The number of drug initiators dropped most after requiring a second prescription, the number of outcomes dropped most after excluding patients with prevalent cancer, and person-time dropped most after requiring a second prescription and as-treated censoring. The naïve power (>99%) was considerably higher than the power obtained after the final step (~75%). In designing new-user active-comparator studies, one should be mindful of how steps minimizing bias affect sample size, the number of outcomes and person-time. While actual numbers will depend on specific settings, applying generic percentage losses will improve estimates of power compared with the naïve approach, which largely ignores the steps taken to increase validity. Copyright © 2015 John Wiley & Sons, Ltd.
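Because power here is driven by the number of events, a quick way to see the erosion the authors describe is Schoenfeld's approximation for a two-group hazard ratio. The sketch below is a generic illustration; the event counts are placeholders, not the study's actual numbers.

```python
# Event-driven power for a hazard ratio via Schoenfeld's approximation.
# Event counts below are hypothetical, chosen only to show the erosion pattern.
import math
from scipy.stats import norm

def cox_power(n_events, hazard_ratio, p_exposed=0.5, alpha=0.05):
    """Approximate power to detect `hazard_ratio` given `n_events` outcomes."""
    z_alpha = norm.ppf(1 - alpha / 2)
    effect = abs(math.log(hazard_ratio)) * math.sqrt(
        n_events * p_exposed * (1 - p_exposed))
    return norm.cdf(effect - z_alpha)

for events in (60, 40, 25, 15):   # hypothetical counts surviving each design step
    print(f"{events:3d} events -> power {cox_power(events, 2.0):.2f}")
```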
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruce J. Mincher; Guiseppe Modolo; Strephen P. Mezyk
2009-01-01
Solvent extraction is the most commonly used process scale separation technique for nuclear applications and it benefits from more than 60 years of research and development and proven experience at the industrial scale. Advanced solvent extraction processes for the separation of actinides and fission products from dissolved nuclear fuel are now being investigated worldwide by numerous groups (US, Europe, Russia, Japan, etc.) in order to decrease the radiotoxic inventories of nuclear waste. While none of the advanced processes have yet been implemented at the industrial scale, their development studies have sometimes reached demonstration tests at the laboratory scale. Most of the partitioning strategies rely on the following four separations: 1. Partitioning of uranium and/or plutonium from spent fuel dissolution liquors. 2. Separation of the heat generating fission products such as strontium and cesium. 3. Coextraction of the trivalent actinides and lanthanides. 4. Separation of the trivalent actinides from the trivalent lanthanides. Tributylphosphate (TBP) in the first separation is the basis of the PUREX, UREX and COEX processes, developed in Europe and the US, whereas monoamides as alternatives for TBP are being developed in Japan and India. For the second separation, many processes were developed worldwide, including the use of crown-ether extractants, like the FPEX process developed in the USA, and the CCD-PEG process jointly developed in the USA and Russia for the partitioning of cesium and strontium. In the third separation, phosphine oxides (CMPOs), malonamides, and diglycolamides are used in the TRUEX, DIAMEX and the ARTIST processes, respectively developed in the US, Europe and Japan. Trialkylphosphine oxide (TRPO), developed in China, or UNEX (a mixture of several extractants), jointly developed in Russia and the USA, allow all actinides to be co-extracted from acidic radioactive liquid waste. For the final separation, soft donor atom-containing ligands such as the bistriazinylbipyridines (BTBPs) or dithiophosphinic acids have been developed in Europe and China to selectively extract the trivalent actinides. However, in the TALSPEAK process developed in the USA, the separation is based on the relatively high affinity of aminopolycarboxylic acid complexants such as DTPA for trivalent actinides over lanthanides. In the DIDPA, SETFICS and the GANEX processes, developed in Japan and France, the group separation is accomplished in a reverse TALSPEAK process. A typical scenario is shown in Figure 1 for the UREX1a (Uranium Extraction version 1a) process. The initial step is the TBP extraction for the separation of recyclable uranium. The second step partitions the short-lived, highly radioactive cesium and strontium to minimize heat loading in the high-level waste repository. The third step is a group separation of the trivalent actinides and lanthanides, with the last step being partitioning of the trivalent lanthanides from the actinides.
Reliability enhancement of Navier-Stokes codes through convergence enhancement
NASA Technical Reports Server (NTRS)
Choi, K.-Y.; Dulikravich, G. S.
1993-01-01
Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. In addition, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent, depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity-Based DMR (SBMR), that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.
Liu, Li; Lin, Weikai; Jin, Mingwu
2015-01-01
In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt the alternating two-stage strategy: projection onto convex sets (POCS) for data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine iterative parameters automatically from data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
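To make the alternating structure concrete, the sketch below (Python/NumPy) shows a toy version of such a two-stage loop: an ART/POCS sweep with a non-negativity projection, followed by steepest-descent steps on a smoothed 1D TV surrogate, with the TV step size tied to the size of the preceding POCS update. It is a minimal illustration of the strategy, not the authors' algorithm; the adaptive rules and the system matrix here are assumptions.

```python
import numpy as np

def pocs_tv_sketch(A, b, x0, n_iter=50, ntv=10):
    """Toy alternating two-stage loop: ART/POCS pass for data fidelity and
    non-negativity, then steepest descent on a smoothed 1D TV surrogate.
    The TV step size is tied to the magnitude of the POCS update."""
    x = x0.copy()
    row_norms = np.sum(A * A, axis=1) + 1e-12
    for _ in range(n_iter):
        x_before = x.copy()
        # --- POCS stage: one ART sweep + non-negativity projection ---
        for i in range(A.shape[0]):
            r = b[i] - A[i] @ x
            x += (r / row_norms[i]) * A[i]
        x = np.clip(x, 0.0, None)
        # adaptive TV step size derived from the size of the POCS update
        dt = 0.2 * np.linalg.norm(x - x_before)
        # --- TV stage: steepest descent on sum |x_{i+1} - x_i| (smoothed) ---
        for _ in range(ntv):
            dx = np.diff(x)
            s = dx / np.sqrt(dx * dx + 1e-8)
            g = np.zeros_like(x)
            g[:-1] -= s
            g[1:] += s
            x -= dt * g / (np.linalg.norm(g) + 1e-12)
    return x

# Tiny synthetic demo (for illustration only)
rng = np.random.default_rng(0)
A = rng.random((30, 20))
x_true = np.zeros(20); x_true[5:12] = 1.0
b = A @ x_true
print(np.round(pocs_tv_sketch(A, b, np.zeros(20)), 2))
```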
Overview and fundamentals of urologic robot-integrated systems.
Allaf, Mohamad; Patriciu, Alexandru; Mazilu, Dumitru; Kavoussi, Louis; Stoianovici, Dan
2004-11-01
Advances in technology have revolutionized urology. Minimally invasive tools now form the core of the urologist's armamentarium. Laparoscopic surgery has become the favored approach for treating many complicated urologic ailments. Surgical robots represent the next evolutionary step in the fruitful man-machine partnership. The introduction of robotic technology in urology changes how urologists learn, teach, plan, and operate. As technology evolves, robots will not only improve performance in minimally invasive procedures but also enhance other procedures or enable new kinds of operations.
Jackson, George W; Willson, Richard
2005-11-01
A "column-format" preparative electrophoresis device which obviates the need for gel extraction or secondary electro-elution steps is described. Separated biomolecules are continuously detected and eluted directly into a minimal volume of free solution for subsequent use. An optical fiber allows the species of interest to be detected just prior to elution from the gel column, and a small collection volume is created by addition of an ion-exchange membrane near the end of the column.
NASA Technical Reports Server (NTRS)
Hepp, Aloysius F.; Kulis, Michael J.; Psarras, Peter C.; Ball, David W.; Timko, Michael T.; Wong, Hsi-Wu; Peck, Jay; Chianelli, Russell R.
2014-01-01
Transportation fuels production (including aerospace propellants) from non-traditional sources (gases, waste materials, and biomass) has been an active area of research and development for decades. Reducing terrestrial waste streams simultaneously with energy conversion, plentiful biomass, new low-cost methane sources, and/or extra-terrestrial resource harvesting and utilization present significant technological and business opportunities being realized by a new generation of visionary entrepreneurs. We examine several new approaches to catalyst fabrication and new processing technologies to enable utilization of these non-traditional raw materials. Two basic processing architectures are considered: a single-stage pyrolysis approach that seeks to essentially recycle hydrocarbons with minimal net chemistry, or a two-step paradigm that involves production of supply or synthesis gas (mainly carbon oxides and H2) followed by production of fuel(s) via Sabatier or methanation reactions and/or Fischer-Tropsch synthesis. Optimization of the fraction of the product stream relevant to targeted aerospace (and other transportation) fuels via modeling, catalyst fabrication and novel reactor design is described. Energy utilization is a concern for production of fuels for either terrestrial or space operations; renewable sources based on solar energy and/or energy-efficient processes may be mission enabling. Another important issue is minimizing impurities in the product stream(s), especially those potentially posing risks to personnel or operations through (catalyst) poisoning or (equipment) damage. Technologies being developed to remove (and/or recycle) heteroatom impurities are briefly discussed, as well as the development of chemically robust catalysts whose activities are not diminished during operation. The potential impacts on future missions of such new approaches, as well as balance-of-system issues, are addressed.
Radawski, Christine; Morrato, Elaine; Hornbuckle, Kenneth; Bahri, Priya; Smith, Meredith; Juhaeri, Juhaeri; Mol, Peter; Levitan, Bennett; Huang, Han-Yao; Coplan, Paul; Li, Hu
2015-12-01
Optimizing a therapeutic product's benefit-risk profile is an ongoing process throughout the product's life cycle. Different, yet related, benefit-risk assessment strategies and frameworks are being developed by various regulatory agencies, industry groups, and stakeholders. This paper summarizes current best practices and discusses the role of the pharmacoepidemiologist in these activities, taking a life-cycle approach to integrated Benefit-Risk Assessment, Communication, and Evaluation (BRACE). A review of the medical and regulatory literature was performed for the following steps involved in therapeutic benefit-risk optimization: benefit-risk evidence generation; data integration and analysis; decision making; regulatory and policy decision making; benefit-risk communication and risk minimization; and evaluation. Feedback from International Society for Pharmacoepidemiology members was solicited on the role of the pharmacoepidemiologist. The case example of natalizumab is provided to illustrate the cyclic nature of the benefit-risk optimization process. No single, globally adopted benefit-risk assessment process exists. The BRACE heuristic offers a way to clarify research needs, to promote best practices in a cyclic and integrated manner, and to highlight the critical importance of cross-disciplinary input. Its approach focuses on the integration of BRACE activities for risk minimization and optimization of the benefit-risk profile. The activities defined in the BRACE heuristic contribute to the optimization of the benefit-risk profile of therapeutic products in the clinical world at both the patient and population health levels. With interdisciplinary collaboration, pharmacoepidemiologists are well suited to bring methodology expertise, relevant research, and public health perspectives into the BRACE process. Copyright © 2015 John Wiley & Sons, Ltd.
Agrawal, Anjali M; Dudhedia, Mayur S; Zimny, Ewa
2016-02-01
The objective of the study was to develop an amorphous solid dispersion (ASD) for an insoluble compound X by a hot melt extrusion (HME) process. The focus was to identify material-sparing approaches to develop a bioavailable and stable ASD, including scale-up of the HME process using minimal drug. Mixtures of compound X and polymers with and without surfactants or pH modifiers were evaluated by hot stage microscopy (HSM), polarized light microscopy (PLM), and modulated differential scanning calorimetry (mDSC), which enabled systematic selection of ASD components. Formulation blends of compound X with PVP K12 and PVP VA64 polymers were extruded through a 9-mm twin-screw mini-extruder. Physical characterization of extrudates by PLM, XRPD, and mDSC indicated formation of single-phase ASDs. Accelerated stability testing was performed, which allowed rapid selection of stable ASDs and suitable packaging configurations. Dissolution testing by a discriminating two-step non-sink dissolution method showed 70-80% drug release from prototype ASDs, which was around twofold higher compared to crystalline tablet formulations. An in vivo pharmacokinetic study in dogs showed that bioavailability from the ASD of compound X with PVP VA64 was four times higher than from crystalline tablet formulations. The HME process was scaled up from lab scale to clinical scale using a volumetric scale-up approach and the scale-independent specific energy parameter. The present study demonstrated systematic development of an ASD dosage form and scale-up of the HME process to clinical scale using minimal drug (∼500 g), which allowed successful clinical batch manufacture of the enabled formulation within 7 months.
A new numerical method for calculating extrema of received power for polarimetric SAR
Zhang, Y.; Zhang, Jiahua; Lu, Z.; Gong, W.
2009-01-01
A numerical method called cross-step iteration is proposed to calculate the maximal/minimal received power for polarized imagery based on a target's Kennaugh matrix. This method is much more efficient than the systematic method, which searches for the extrema of received power by varying the polarization ellipse angles of receiving and transmitting polarizations. It is also more advantageous than the Schuler method, which has been adopted by the PolSARPro package, because the cross-step iteration method requires less computation time and can derive both the maximal and minimal received powers, whereas the Schuler method is designed to work out only the maximal received power. The analytical model of received-power optimization indicates that the first eigenvalue of the Kennaugh matrix is the supremum of the maximal received power. The difference between these two parameters reflects the depolarization effect of the target's backscattering, which might be useful for target discrimination. © 2009 IEEE.
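As a concrete illustration of the alternating idea, the sketch below (Python/NumPy) alternately optimizes the receive and transmit Stokes vectors for fully polarized states, using the fact that with one vector fixed the optimum over the other is obtained by aligning (or anti-aligning) its polarized part with the corresponding components of K·g. The Kennaugh matrix, starting polarization, and stopping rule are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def cross_step_power(K, maximize=True, n_iter=100, tol=1e-10):
    """Alternating ('cross-step') search for extrema of received power
    P = 0.5 * g_r^T K g_t over fully polarized Stokes vectors g = [1, s],
    |s| = 1. With one vector fixed, the optimum over the other is analytic:
    align (max) or anti-align (min) s with the last three components of K @ g."""
    sign = 1.0 if maximize else -1.0
    g_t = np.array([1.0, 1.0, 0.0, 0.0])      # arbitrary starting polarization
    p_old = -np.inf if maximize else np.inf
    for _ in range(n_iter):
        v = K @ g_t
        g_r = np.r_[1.0, sign * v[1:] / (np.linalg.norm(v[1:]) + 1e-15)]
        w = K.T @ g_r
        g_t = np.r_[1.0, sign * w[1:] / (np.linalg.norm(w[1:]) + 1e-15)]
        p = 0.5 * g_r @ K @ g_t
        if abs(p - p_old) < tol:
            break
        p_old = p
    return p, g_r, g_t

# Synthetic, diagonally dominant Kennaugh-like matrix (illustration only)
K = np.array([[1.0, 0.1, 0.05, 0.0],
              [0.1, 0.3, 0.0, 0.0],
              [0.05, 0.0, -0.2, 0.0],
              [0.0, 0.0, 0.0, -0.1]])
print(cross_step_power(K, maximize=True)[0])    # maximal received power
print(cross_step_power(K, maximize=False)[0])   # minimal received power
```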
The Strategic WAste Minimization Initiative (SWAMI) Software, Version 2.0 is a tool for using process analysis for identifying waste minimization opportunities within an industrial setting. The software requires user-supplied information for process definition, as well as materia...
Prediction of Combustion Instability with Detailed Chemical Kinetics
2014-12-01
global reaction where the fuel and oxidizer react to form water and carbon dioxide. The production of carbon monoxide is a known intermediate step in... radially inward on a sleeve which turns the flow in the axial direction with minimal swirl. The oxidizer is decomposed hydrogen peroxide, which is... unburnt mixture in the vicinity of the recirculating hot products. This also causes the hot gas near the back step to move radially inwards, close to the
Roos, Margaret A; Reisman, Darcy S; Hicks, Gregory; Rose, William; Rudolph, Katherine S
2016-01-01
Adults with stroke have difficulty avoiding obstacles when walking, especially when a time constraint is imposed. The Four Square Step Test (FSST) evaluates dynamic balance by requiring individuals to step over canes in multiple directions while being timed, but many people with stroke are unable to complete it. The purposes of this study were to (1) modify the FSST by replacing the canes with tape so that more persons with stroke could successfully complete the test and (2) examine the reliability and validity of the modified version. Fifty-five subjects completed the Modified FSST (mFSST) by stepping over tape in all four directions while being timed. The mFSST resulted in significantly greater numbers of subjects completing the test than the FSST (39/55 [71%] and 33/55 [60%], respectively) (p < 0.04). The test-retest, intrarater, and interrater reliability of the mFSST were excellent (intraclass correlation coefficient ranges: 0.81-0.99). Construct and concurrent validity of the mFSST were also established. The minimal detectable change was 6.73 s. The mFSST, an ideal measure of dynamic balance, can identify progress in people with stroke in varied settings and can be completed by a wide range of people with stroke in approximately 5 min with the use of minimal equipment (tape, stop watch).
Borukhovich, Efim; Du, Guanxing; Stratmann, Matthias; Boeff, Martin; Shchyglo, Oleg; Hartmaier, Alexander; Steinbach, Ingo
2016-01-01
Martensitic steels form a material class with a versatile range of properties that can be selected by varying the processing chain. In order to study and design the desired processing with the minimal experimental effort, modeling tools are required. In this work, a full processing cycle from quenching over tempering to mechanical testing is simulated with a single modeling framework that combines the features of the phase-field method and a coupled chemo-mechanical approach. In order to perform the mechanical testing, the mechanical part is extended to the large deformations case and coupled to crystal plasticity and a linear damage model. The quenching process is governed by the austenite-martensite transformation. In the tempering step, carbon segregation to the grain boundaries and the resulting cementite formation occur. During mechanical testing, the obtained material sample undergoes a large deformation that leads to local failure. The initial formation of the damage zones is observed to happen next to the carbides, while the final damage morphology follows the martensite microstructure. This multi-scale approach can be applied to design optimal microstructures dependent on processing and materials composition. PMID:28773791
Dipasquale, L; Adessi, A; d'Ippolito, G; Rossi, F; Fontana, A; De Philippis, R
2015-01-01
A two-stage process based on photofermentation of dark fermentation effluents is widely recognized as the most effective method for biological production of hydrogen from organic substrates. Recently, an alternative mechanism, named capnophilic lactic fermentation, was described for sugar fermentation by the hyperthermophilic bacterium Thermotoga neapolitana in a CO2-rich atmosphere. Here, we report the first application of this novel process to two-stage biological production of hydrogen. The microbial system based on T. neapolitana DSM 4359(T) and Rhodopseudomonas palustris 42OL gave 9.4 mol of hydrogen per mole of glucose consumed during the anaerobic process, which is the best production yield so far reported for conventional two-stage batch cultivations. The improvement in hydrogen yield correlates with the increase in lactic acid production during capnophilic lactic fermentation and also takes advantage of the introduction of original conditions for culturing both microorganisms in minimal media based on diluted sea water. The use of CO2 during the first step of the combined process establishes a novel strategy for biohydrogen technology. Moreover, this study opens the way to cost reduction and the use of salt-rich waste as feedstock.
Permutation flow-shop scheduling problem to optimize a quadratic objective function
NASA Astrophysics Data System (ADS)
Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu
2017-09-01
A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
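As a concrete illustration of the objective and of a plain WSPT-style ordering, the sketch below (Python/NumPy) computes the total weighted quadratic completion time of a permutation schedule and orders jobs by total processing time divided by weight. The consistency condition that defines the WSPT-CC schedule in the paper is not checked here; the instance data are synthetic.

```python
import numpy as np

def total_weighted_quadratic_completion(p, order, w):
    """Total weighted quadratic completion time of a permutation schedule.
    p[i, j] is the processing time of job j on machine i; all machines
    process jobs in the same order (permutation flow shop)."""
    m, n = p.shape
    C = np.zeros((m, n))
    for k, j in enumerate(order):
        for i in range(m):
            prev_machine = C[i - 1, j] if i > 0 else 0.0
            prev_job = C[i, order[k - 1]] if k > 0 else 0.0
            C[i, j] = max(prev_machine, prev_job) + p[i, j]
    return float(np.sum(w[order] * C[-1, order] ** 2))

def wspt_order(p, w):
    """Plain WSPT heuristic: sort jobs by total processing time divided by
    weight (ascending). The paper's consistency condition is not verified."""
    return np.argsort(p.sum(axis=0) / w)

# Small illustrative instance (3 machines, 5 jobs)
rng = np.random.default_rng(1)
p = rng.integers(1, 10, size=(3, 5)).astype(float)
w = rng.integers(1, 5, size=5).astype(float)
order = wspt_order(p, w)
print(order, total_weighted_quadratic_completion(p, order, w))
```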
Innovative dual-step management of semi-aerobic landfill in a tropical climate.
Lavagnolo, Maria Cristina; Grossule, Valentina; Raga, Roberto
2018-04-01
Despite concerted efforts to innovate the solid waste management (SWM) system, land disposal continues to represent the most widely used technology in the treatment of urban solid waste worldwide. On the other hand, landfilling is an unavoidable step in closing the material cycle, since final residues, although minimized, need to be safely disposed of and confined. In recent years, the implementation of more sustainable landfilling has aimed to achieve Final Storage Quality (FSQ) conditions as fast as possible. In particular, semi-aerobic landfill appears to represent an effective solution for use in the poorest economies due to lower management costs and shorter aftercare resulting from aerobic stabilisation of the waste. Nevertheless, the implementation of a semi-aerobic landfill in a tropical climate may affect the correct functioning of the plant: a lack of moisture during the dry season and heavy rainfall during the wet season could negatively affect performance of both the degradation process and leachate and biogas management. This paper illustrates the results obtained from an experimental evaluation of a potential dual-step management of semi-aerobic landfilling in a tropical climate, in which a composting process was reproduced during the dry season followed by flushing (high rainfall rate) during the wet period. Eight bioreactors were specifically designed: four operated under anaerobic conditions and four under semi-aerobic conditions; half of the reactors were filled with high-organic-content waste, half with residual waste obtained following enhanced source segregation. The synergic effect of the subsequent phases (composting and flushing) in the semi-aerobic landfill was evaluated for both types of waste. Biogas production, leachate composition and waste stabilization were analysed during the trial and at the end of each step, and compared with the performance of the anaerobic reactors. The results underlined the effectiveness of the dual-step management, showing that wastes reached a higher degree of stabilization and that reference FSQ values for leachate were achieved over a one-year simulation period. Copyright © 2018 Elsevier Ltd. All rights reserved.
Intravital multiphoton imaging of mouse tibialis anterior muscle
Lau, Jasmine; Goh, Chi Ching; Devi, Sapna; Keeble, Jo; See, Peter; Ginhoux, Florent; Ng, Lai Guan
2016-01-01
Intravital imaging by multiphoton microscopy is a powerful tool to gain invaluable insight into tissue biology and function. Here, we provide a step-by-step tissue preparation protocol for imaging the mouse tibialis anterior skeletal muscle. Additionally, we include steps for jugular vein catheterization that allow for well-controlled intravenous reagent delivery. Preparation of the tibialis anterior muscle is minimally invasive, reducing the chances of inducing damage and inflammation prior to imaging. The tibialis anterior muscle is useful for imaging leukocyte interaction with vascular endothelium, and for understanding muscle contraction biology. Importantly, this model can be easily adapted to study neuromuscular diseases and myopathies. PMID:28243520
Minimum constitutive relation error based static identification of beams using force method
NASA Astrophysics Data System (ADS)
Guo, Jia; Takewaki, Izuru
2017-05-01
A new static identification approach based on the minimum constitutive relation error (CRE) principle for beam structures is introduced. The exact stiffness and the exact bending moment are shown to minimize the CRE for displacements measured on the damaged beam. A two-step substitution algorithm—a force-method step for the bending moment and a constitutive-relation step for the stiffness—is developed and its convergence is rigorously derived. Identifiability is further discussed, and the stiffness in the undeformed region is found to be unidentifiable. An extra set of static measurements is added to remedy this drawback. Convergence and robustness are finally verified through numerical examples.
Managing human fallibility in critical aerospace situations
NASA Astrophysics Data System (ADS)
Tew, Larry
2014-11-01
Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human factors. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably produces schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries such as medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporating the steps needed to manage and minimize error.
The Impact of Different Degrees of Feedback on Physical Activity Levels: A 4-Week Intervention Study
Van Hoye, Karen; Boen, Filip; Lefevre, Johan
2015-01-01
Assessing levels of physical activity (PA) and providing feedback about these levels might have an effect on participants' PA behavior. This study discusses the effect of different levels of feedback—from minimal feedback to use of a feedback display and coach—on PA over a 4-week intervention period. PA was measured at baseline, during and immediately after the intervention. Participants (n = 227) were randomly assigned to a Minimal Intervention Group (MIG-no feedback), Pedometer Group (PG-feedback on steps taken), Display Group (DG-feedback on steps, minutes of moderate to vigorous physical activity and energy expenditure) or Coaching Group (CoachG-same as DG with need-supportive coaching). Two-way ANCOVA showed no significant Group × Time interaction effect for the different PA variables between the MIG and PG. No differences emerged between the PG and DG either. As hypothesized, the CoachG had higher PA values throughout the intervention compared with the DG. Self-monitoring using a pedometer resulted in more steps compared with a no-feedback condition at the start of the intervention. However, adding individualized coaching seems necessary to increase the PA level through to the end of the intervention. PMID:26067990
Chang, Young-Hui; Auyang, Arick G.; Scholz, John P.; Nichols, T. Richard
2009-01-01
Biomechanics and neurophysiology studies suggest whole limb function to be an important locomotor control parameter. Inverted pendulum and mass-spring models greatly reduce the complexity of the legs and predict the dynamics of locomotion, but do not address how numerous limb elements are coordinated to achieve such simple behavior. As a first step, we hypothesized whole limb kinematics were of primary importance and would be preferentially conserved over individual joint kinematics after neuromuscular injury. We used a well-established peripheral nerve injury model of cat ankle extensor muscles to generate two experimental injury groups with a predictable time course of temporary paralysis followed by complete muscle self-reinnervation. Mean trajectories of individual joint kinematics were altered as a result of deficits after injury. By contrast, mean trajectories of limb orientation and limb length remained largely invariant across all animals, even with paralyzed ankle extensor muscles, suggesting changes in mean joint angles were coordinated as part of a long-term compensation strategy to minimize change in whole limb kinematics. Furthermore, at each measurement stage (pre-injury, paralytic and self-reinnervated) step-by-step variance of individual joint kinematics was always significantly greater than that of limb orientation. Our results suggest joint angle combinations are coordinated and selected to stabilize whole limb kinematics against short-term natural step-by-step deviations as well as long-term, pathological deviations created by injury. This may represent a fundamental compensation principle allowing animals to adapt to changing conditions with minimal effect on overall locomotor function. PMID:19837893
Stability-maneuverability trade-offs during lateral steps.
Acasio, Julian; Wu, Mengnan/Mary; Fey, Nicholas P; Gordon, Keith E
2017-02-01
Selecting a specific foot placement strategy to perform walking maneuvers requires the management of several competing factors, including: maintaining stability, positioning oneself to actively generate impulses, and minimizing mechanical energy requirements. These requirements are unlikely to be independent. Our purpose was to determine the impact of lateral foot placement on stability, maneuverability, and energetics during walking maneuvers. Ten able-bodied adults performed laterally-directed walking maneuvers. Mediolateral placement of the "Push-off" foot during the maneuvers was varied, ranging from a cross-over step to a side-step. We hypothesized that as mediolateral foot placement became wider, passive stability in the direction of the maneuver, the lateral impulse generated to create the maneuver, and mechanical energy cost would all increase. We also hypothesized that subjects would prefer an intermediate step width reflective of trade-offs between stability vs. both maneuverability and energy. In support of our first hypothesis, we found that as Push-off step width increased, lateral margin of stability, peak lateral impulse, and total joint work all increased. In support of our second hypothesis, we found that when subjects had no restrictions on their mediolateral foot placement, they chose a foot placement between the two extreme positions. We found a significant relationship (p<0.05) between lateral margin of stability and peak lateral impulse (r=0.773), indicating a trade-off between passive stability and the force input required to maneuver. These findings suggest that during anticipated maneuvers people select foot placement strategies that balance competing costs to maintain stability, actively generate impulses, and minimize mechanical energy costs. Published by Elsevier B.V.
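For readers unfamiliar with the margin-of-stability measure mentioned above, the sketch below (Python) computes a lateral margin of stability using Hof's extrapolated center of mass, a common definition; the study's exact conventions (sign, base-of-support landmark) and all numbers here are assumptions for illustration.

```python
from math import sqrt

def lateral_margin_of_stability(com_pos, com_vel, lateral_bos, leg_length, g=9.81):
    """Lateral margin of stability using Hof's extrapolated center of mass:
    XCoM = CoM position + CoM velocity / sqrt(g / leg_length),
    MoS  = lateral base-of-support boundary - XCoM."""
    omega0 = sqrt(g / leg_length)
    xcom = com_pos + com_vel / omega0
    return lateral_bos - xcom

# Illustrative numbers (meters, m/s): wider foot placement -> larger margin
print(lateral_margin_of_stability(com_pos=0.02, com_vel=0.15,
                                  lateral_bos=0.12, leg_length=0.9))
print(lateral_margin_of_stability(com_pos=0.02, com_vel=0.15,
                                  lateral_bos=0.20, leg_length=0.9))
```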
A Vision and Roadmap for Increasing User Autonomy in Flight Operations in the National Airspace
NASA Technical Reports Server (NTRS)
Cotton, William B.; Hilb, Robert; Koczo, Stefan; Wing, David
2016-01-01
The purpose of Air Transportation is to move people and cargo safely, efficiently and swiftly to their destinations. The companies and individuals who use aircraft for this purpose, the airspace users, desire to operate their aircraft according to a dynamically optimized business trajectory for their specific mission and operational business model. In current operations, the dynamic optimization of business trajectories is limited by constraints built into operations in the National Airspace System (NAS) for reasons of safety and operational needs of the air navigation service providers. NASA has been developing and testing means to overcome many of these constraints and permit operations to be conducted closer to the airspace user's changing business trajectory as conditions unfold before and during the flight. A roadmap of logical steps progressing toward increased user autonomy is proposed, beginning with NASA's Traffic Aware Strategic Aircrew Requests (TASAR) concept that enables flight crews to make informed, deconflicted flight-optimization requests to air traffic control. These steps include the use of data communications for route change requests and approvals, integration with time-based arrival flow management processes under development by the Federal Aviation Administration (FAA), increased user authority for defining and modifying downstream, strategic portions of the trajectory, and ultimately application of self-separation. This progression takes advantage of existing FAA NextGen programs and RTCA standards development, and it is designed to minimize the number of hardware upgrades required of airspace users to take advantage of these advanced capabilities to achieve dynamically optimized business trajectories in NAS operations. The roadmap is designed to provide operational benefits to first adopters so that investment decisions do not depend upon a large segment of the user community becoming equipped before benefits can be realized. The issues of equipment certification and operational approval of new procedures are addressed in a way that minimizes their impact on the transition by deferring a change in the assignment of separation responsibility until a large body of operational data is available to support the safety case for this change in the last roadmap step. This paper will relate the roadmap steps to ongoing activities to clarify the economics-based transition to these technologies for operational use.
Dombrovski, Viatcheslav V.; Driscoll, David I.; Shovkhet, Boris A.
2001-01-01
A superconducting electromechanical rotating (SER) device, such as a synchronous AC motor, includes a superconducting field winding and a one-layer stator winding that may be water-cooled. The stator winding is potted to a support such as the inner radial surface of a support structure and, accordingly, lacks hangers or other mechanical fasteners that otherwise would complicate stator assembly and require the provision of an unnecessarily large gap between adjacent stator coil sections. The one-layer winding topology, resulting in the number of coils being equal to half the number of slots or other mounting locations on the support structure, allows one to minimize or eliminate the gap between the inner radial ends of adjacent straight sections of the stator coils while maintaining the gap between the coil knuckles equal to at least the coil width, providing sufficient room for electrical and cooling element configurations and connections. The stator winding may be potted to the support structure or other support, for example, by a one-step VPI process relying on saturation of an absorbent material to fill large gaps in the stator winding or by a two-step process in which small gaps are first filled via a VPI or similar operation and larger gaps are then filled via an operation that utilizes the stator as a portion of an on-site mold.
Phase holograms in silver halide emulsions without a bleaching step
NASA Astrophysics Data System (ADS)
Belendez, Augusto; Madrigal, Roque F.; Pascual, Inmaculada V.; Fimia, Antonio
2000-03-01
Phase holograms in holographic emulsions are usually obtained by two-bath processes (developing and bleaching). In this work we present a one-step method to obtain phase holograms with silver-halide emulsions, which is based on varying the conditions of the typical developing processes used for amplitude holograms. For this, we have used the well-known chemical developer AAC, which is composed of ascorbic acid as the developing agent and anhydrous sodium carbonate as the accelerator. Agfa 8E75 HD and BB-640 plates were used to obtain these phase gratings, whose colors range between yellow and brown. The resulting diffraction efficiency and optical density of the diffraction gratings were studied as a function of the parameters of this developing method. One of the parameters studied was the influence of grain size. In the case of Agfa plates, diffraction efficiency around 18% with density < 1 has been reached, whilst with the BB-640 emulsion, whose grain is smaller than that of the Agfa, diffraction efficiency near 30% has been obtained. The resulting gratings were analyzed through X-ray spectroscopy, showing the differences in the structure of the developed silver when amplitude and phase gratings are obtained. The angular response of both (phase and amplitude) gratings was studied: minimal transmission is observed at the Bragg angle in phase holograms, whilst a maximal value is obtained in amplitude gratings.
Low-cost real-time automatic wheel classification system
NASA Astrophysics Data System (ADS)
Shabestari, Behrouz N.; Miller, John W. V.; Wedding, Victoria
1992-11-01
This paper describes the design and implementation of a low-cost machine vision system for identifying various types of automotive wheels, which are manufactured in several styles and sizes. In this application, a variety of wheels travel on a conveyor in random order through a number of processing steps. One of these processes requires identification of the wheel type, which was previously performed manually by an operator. A vision system was designed to provide the required identification. The system consisted of an annular illumination source, a CCD TV camera, a frame grabber, and a 386-compatible computer. Statistical pattern recognition techniques were used to provide robust classification as well as a simple means for adding new wheel designs to the system. Maintenance of the system can be performed by plant personnel with minimal training. The basic steps for identification include image acquisition, segmentation of the regions of interest, extraction of selected features, and classification. The vision system has been installed in a plant and has proven to be extremely effective. The system correctly identifies wheels at rates of up to 30 wheels per minute regardless of rotational orientation in the camera's field of view. Correct classification can even be achieved if a portion of the wheel is blocked from the camera. Significant cost savings have been achieved by a reduction in scrap associated with incorrect manual classification as well as a reduction of labor in a tedious task.
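To illustrate the kind of statistical pattern recognition step described (and why adding a new wheel design can be as simple as adding training examples), the sketch below (Python/NumPy) implements a nearest-mean classifier on hypothetical wheel features. The feature values and class names are invented for illustration; the deployed system's actual features and classifier are not specified in the abstract.

```python
import numpy as np

class NearestMeanClassifier:
    """Minimal statistical classifier: each wheel style is represented by
    the mean of its training feature vectors (e.g., hole count, hub
    diameter, spoke area), and new wheels are assigned to the closest
    class mean. A new design is added simply by adding its examples."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Euclidean distance from each sample to each class mean
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# Toy features for three hypothetical wheel styles
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=m, scale=0.1, size=(20, 3))
               for m in ([1, 2, 3], [2, 1, 3], [3, 3, 1])])
y = np.repeat(["styleA", "styleB", "styleC"], 20)
clf = NearestMeanClassifier().fit(X, y)
print(clf.predict(X[[0, 25, 50]]))
```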
Report: Gold King Mine Release - Inspector General Response to Congressional Requests
Report #17-P-0250, June 12, 2017. Since causing the uncontrolled release of 3 million gallons of contaminated mine water, the EPA has taken steps to minimize the possibility of similar incidents at other mine sites.
Federal Tax Issues Raised by International Study Abroad Programs.
ERIC Educational Resources Information Center
Harding, Bertrand M., Jr.
2000-01-01
Identifies and describes tax issues raised by study abroad programs and suggests steps that a college or university can take to minimize or eliminate adverse U.S. and foreign tax exposure to both itself and its employees. (EV)
Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.
Han, Youkyung; Oh, Jaehong
2018-05-17
For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.
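As a simple illustration of fitting a transformation from corresponding points, the sketch below (Python/NumPy) estimates a second-order polynomial warp by least squares and evaluates its residuals. A polynomial warp is only a stand-in chosen for brevity; the paper's actual non-rigid transformation model and its CP extraction from registration noise are not reproduced here, and the data are synthetic.

```python
import numpy as np

def fit_poly2_transform(src, dst):
    """Fit a second-order polynomial mapping (x, y) -> (x', y') from
    corresponding points by least squares; returns a (6, 2) coefficient
    matrix for the basis [1, x, y, xy, x^2, y^2]."""
    x, y = src[:, 0], src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coef

def apply_poly2_transform(coef, pts):
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return A @ coef

# Synthetic corresponding points with a mild non-rigid distortion plus noise
rng = np.random.default_rng(3)
src = rng.uniform(0, 1000, size=(200, 2))
dst = src + 2.0 + 1e-5 * src**2 + rng.normal(0, 0.3, src.shape)
coef = fit_poly2_transform(src, dst)
resid = np.linalg.norm(apply_poly2_transform(coef, src) - dst, axis=1)
print(resid.mean())   # residual near the noise level indicates a good fit
```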
Kanazhevskaya, Lyubov Yu; Koval, Vladimir V; Vorobjev, Yury N; Fedorova, Olga S
2012-02-14
Apurinic/apyrimidinic (AP) sites are abundant DNA lesions arising from exposure to UV light, ionizing radiation, alkylating agents, and oxygen radicals. In human cells, AP endonuclease 1 (APE1) recognizes this mutagenic lesion and initiates its repair via a specific incision of the phosphodiester backbone 5' to the AP site. We have investigated a detailed mechanism of APE1 functioning using fluorescently labeled DNA substrates. A fluorescent adenine analogue, 2-aminopurine, was introduced into DNA substrates adjacent to the abasic site to serve as an on-site reporter of conformational transitions in DNA during the catalytic cycle. Application of a pre-steady-state stopped-flow technique allows us to observe changes in the fluorescence intensity corresponding to different stages of the process in real time. We also detected an intrinsic Trp fluorescence of the enzyme during interactions with 2-aPu-containing substrates. Our data have revealed a conformational flexibility of the abasic DNA being processed by APE1. Quantitative analysis of fluorescent traces has yielded a minimal kinetic scheme and appropriate rate constants consisting of four steps. The results obtained from stopped-flow data have shown a substantial influence of the 2-aPu base location on completion of certain reaction steps. Using detailed molecular dynamics simulations of the DNA substrates, we have attributed structural distortions of AP-DNA to realization of specific binding, effective locking, and incision of the damaged DNA. The findings allowed us to accurately discern the step that corresponds to insertion of specific APE1 amino acid residues into the abasic DNA void in the course of stabilization of the precatalytic complex.
Hybrid Imaging for Extended Depth of Field Microscopy
NASA Astrophysics Data System (ADS)
Zahreddine, Ramzi Nicholas
An inverse relationship exists in optical systems between the depth of field (DOF) and the minimum resolvable feature size. This trade-off is especially detrimental in high numerical aperture microscopy systems where resolution is pushed to the diffraction limit resulting in a DOF on the order of 500 nm. Many biological structures and processes of interest span over micron scales resulting in significant blurring during imaging. This thesis explores a two-step computational imaging technique known as hybrid imaging to create extended DOF (EDF) microscopy systems with minimal sacrifice in resolution. In the first step a mask is inserted at the pupil plane of the microscope to create a focus invariant system over 10 times the traditional DOF, albeit with reduced contrast. In the second step the contrast is restored via deconvolution. Several EDF pupil masks from the literature are quantitatively compared in the context of biological microscopy. From this analysis a new mask is proposed, the incoherently partitioned pupil with binary phase modulation (IPP-BPM), that combines the most advantageous properties from the literature. Total variation regularized deconvolution models are derived for the various noise conditions and detectors commonly used in biological microscopy. State of the art algorithms for efficiently solving the deconvolution problem are analyzed for speed, accuracy, and ease of use. The IPP-BPM mask is compared with the literature and shown to have the highest signal-to-noise ratio and lowest mean square error post-processing. A prototype of the IPP-BPM mask is fabricated using a combination of 3D femtosecond glass etching and standard lithography techniques. The mask is compared against theory and demonstrated in biological imaging applications.
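For readers unfamiliar with the second (restoration) step, the sketch below (Python/NumPy) shows a frequency-domain Wiener deconvolution of a synthetically blurred image. It is a simple stand-in for illustration only; the thesis itself uses total-variation-regularized deconvolution models, and the Gaussian PSF and noise-to-signal ratio here are assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution: W = H* / (|H|^2 + nsr).
    `nsr` is an assumed noise-to-signal ratio; the PSF is assumed to be
    centered in its array."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# Toy example: blur a step image with a Gaussian PSF, then restore it
n = 64
img = np.zeros((n, n)); img[:, n // 2:] = 1.0
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2)); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
# restoration should reduce the mean absolute error relative to the blur
print(np.abs(blurred - img).mean(), np.abs(restored - img).mean())
```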
Zhang, Patrick; Liang, Haijun; Jin, Zhen; ...
2017-11-01
We report that phosphate beneficiation in Florida generates more than one tonne of phosphatic clay, or slime, per tonne of phosphate rock produced. Since the start of the practice of large-scale washing and desliming for phosphate beneficiation, more than 2 Gt of slime has accumulated, containing approximately 600 Mt of phosphate rock, 600 kt of rare earth elements (REEs) and 80 million kilograms of uranium. The recovery of these valuable elements from the phosphatic clay is one of the most challenging endeavors in mineral processing, because the clay is extremely dilute, with an average solids concentration of 3 percent, and fine in size, with more than 50 percent having particle size smaller than 2 μm, and it contains nearly 50 percent clay minerals as well as large amounts of magnesium, iron and aluminum. With industry support and under funding from the Critical Materials Institute, the Florida Industrial and Phosphate Research Institute in conjunction with the Oak Ridge National Laboratory undertook the task to recover phosphorus, rare earths and uranium from Florida phosphatic clay. This paper presents the results from the preliminary testing of two approaches. The first approach involves three-stage cycloning using cyclones with diameters of 12.4 cm (5 in.), 5.08 cm (2 in.) and 2.54 cm (1 in.), respectively, to remove clay minerals, followed by flotation and leaching. The second approach is a two-step leaching process. In the first step, selective leaching was conducted to remove magnesium, thus allowing the production of phosphoric acid suitable for the manufacture of diammonium phosphate (DAP) in the second leaching step. The results showed that multistage cycloning with small cyclones is necessary to remove clay minerals. Finally, selective leaching at about pH 3.2 using sulfuric acid was found to be effective for removing more than 80 percent of magnesium from the feed with minimal loss of phosphorus.
Real-time inverse planning for Gamma Knife radiosurgery.
Wu, Q Jackie; Chankong, Vira; Jitprapaikulsarn, Suradet; Wessels, Barry W; Einstein, Douglas B; Mathayomchan, Boonyanit; Kinsella, Timothy J
2003-11-01
The challenges of real-time Gamma Knife inverse planning are the large number of variables involved and the fact that the search space is unknown a priori. With limited collimator sizes, shots have to be heavily overlapped to form a smooth prescription isodose line that conforms to the irregular target shape. Such overlaps greatly influence the total number of shots per plan, making pre-determination of the total number of shots impractical. However, this total number of shots usually defines the search space, a prerequisite for most optimization methods. Since each shot only covers part of the target, a collection of shots in different locations with various collimator sizes makes up the global dose distribution that conforms to the target. Hence, planning or placing these shots is a combinatorial optimization process that is computationally expensive by nature. We have previously developed a theory of shot placement and optimization based on skeletonization. The real-time inverse planning process, reported in this paper, is an expansion and the clinical implementation of this theory. The complete planning process consists of two steps. The first step is to determine an optimal number of shots, including locations and sizes, and to assign an initial collimator size to each of the shots. The second step is to fine-tune the weights using a linear-programming technique. The objective function is to minimize the total dose to the target boundary (i.e., maximize the dose conformity). Results for an ellipsoid test target and ten clinical cases are presented. The clinical cases are also compared with the physician's manual plans. The target coverage is more than 99% for manual plans and 97% for all the inverse plans. The RTOG PITV conformity indices for the manual plans are between 1.16 and 3.46, compared to 1.36 to 2.4 for the inverse plans. All the inverse plans are generated in less than 2 min, making real-time inverse planning a reality.
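To make the linear-programming weight-tuning step concrete, the sketch below (Python/SciPy) minimizes total boundary dose subject to every target point receiving at least the prescription dose, with non-negative shot weights. The dose-contribution matrices and prescription here are synthetic, and the formulation is only a plausible reading of the step described, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import linprog

def tune_shot_weights(D_target, D_boundary, prescription=1.0):
    """Choose non-negative shot weights w that minimize total dose to
    boundary points while each target point receives at least the
    prescription dose. D_target[i, j] and D_boundary[k, j] are assumed
    precomputed per-unit-weight dose contributions of shot j."""
    n_shots = D_target.shape[1]
    c = D_boundary.sum(axis=0)            # total boundary dose per unit weight
    A_ub = -D_target                      # -D w <= -Rx  <=>  D w >= Rx
    b_ub = -prescription * np.ones(D_target.shape[0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * n_shots, method="highs")
    return res.x

# Tiny synthetic example: 4 shots, 30 target points, 15 boundary points
rng = np.random.default_rng(4)
D_t = 0.5 + rng.random((30, 4))
D_b = 0.1 * rng.random((15, 4))
print(np.round(tune_shot_weights(D_t, D_b), 3))
```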
Evaluation of two methods to determine glyphosate and AMPA in soils of Argentina
NASA Astrophysics Data System (ADS)
De Geronimo, Eduardo; Lorenzon, Claudio; Iwasita, Barbara; Faggioli, Valeria; Aparicio, Virginia; Costa, Jose Luis
2017-04-01
Argentine agricultural production is fundamentally based on a technological package combining no-tillage with a dependence on glyphosate applications to control weeds in transgenic crops (soybean, maize and cotton). Therefore, glyphosate is the most widely used herbicide in the country, where 180 to 200 million liters are applied every year. Due to its widespread use, it is important to assess its impact on the environment and, therefore, reliable analytical methods are mandatory. The glyphosate molecule exhibits unique physical and chemical characteristics that make its quantification difficult, especially in soils with high organic matter content, such as the central-eastern Argentine soils, where strong interferences are normally observed. The objective of this work was to compare two methods for extraction and quantification of glyphosate and AMPA in samples of 8 representative soils of Argentina. The first analytical method (method 1) was based on the use of phosphate buffer as the extracting solution and dichloromethane to minimize matrix organic content. In the second method (method 2), potassium hydroxide was used to extract the analytes, followed by a clean-up step using solid phase extraction (SPE) to minimize strong interferences. Sensitivity, recoveries, matrix effects and robustness were evaluated. Both methodologies involved derivatization with 9-fluorenyl-methyl-chloroformate (FMOC) in borate buffer and detection based on ultra-high-pressure liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS). Recoveries from soil samples spiked at 0.1 and 1 mg kg-1 were satisfactory for both methods (70%-120%). However, there was a remarkable difference regarding the matrix effect: the SPE clean-up step (method 2) was insufficient to remove the interferences, whereas the dilution and clean-up with dichloromethane (method 1) were more effective at minimizing the ionic suppression. Moreover, method 1 had fewer steps in the sample processing protocol than method 2. This can be highly valuable in routine lab work because it reduces potential errors such as loss of analyte or sample contamination. In addition, the substitution of SPE by another alternative involved a considerable reduction of analytical costs in method 1. We conclude that method 1 is simpler and cheaper than method 2, as well as reliable for quantifying glyphosate in Argentinean soils. We hope that this experience can be useful to simplify protocols for glyphosate quantification and contribute to the understanding of the fate of this herbicide in the environment.
Deducing the Kinetics of Protein Synthesis In Vivo from the Transition Rates Measured In Vitro
Rudorf, Sophia; Thommen, Michael; Rodnina, Marina V.; Lipowsky, Reinhard
2014-01-01
The molecular machinery of life relies on complex multistep processes that involve numerous individual transitions, such as molecular association and dissociation steps, chemical reactions, and mechanical movements. The corresponding transition rates can be typically measured in vitro but not in vivo. Here, we develop a general method to deduce the in-vivo rates from their in-vitro values. The method has two basic components. First, we introduce the kinetic distance, a new concept by which we can quantitatively compare the kinetics of a multistep process in different environments. The kinetic distance depends logarithmically on the transition rates and can be interpreted in terms of the underlying free energy barriers. Second, we minimize the kinetic distance between the in-vitro and the in-vivo process, imposing the constraint that the deduced rates reproduce a known global property such as the overall in-vivo speed. In order to demonstrate the predictive power of our method, we apply it to protein synthesis by ribosomes, a key process of gene expression. We describe the latter process by a codon-specific Markov model with three reaction pathways, corresponding to the initial binding of cognate, near-cognate, and non-cognate tRNA, for which we determine all individual transition rates in vitro. We then predict the in-vivo rates by the constrained minimization procedure and validate these rates by three independent sets of in-vivo data, obtained for codon-dependent translation speeds, codon-specific translation dynamics, and missense error frequencies. In all cases, we find good agreement between theory and experiment without adjusting any fit parameter. The deduced in-vivo rates lead to smaller error frequencies than the known in-vitro rates, primarily by an improved initial selection of tRNA. The method introduced here is relatively simple from a computational point of view and can be applied to any biomolecular process, for which we have detailed information about the in-vitro kinetics. PMID:25358034
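To show what such a constrained minimization can look like in practice, the sketch below (Python/SciPy) treats the kinetic distance as the Euclidean norm of log rate ratios (an assumption made here for illustration) and finds rates closest to the in-vitro values that reproduce a target overall speed for a toy sequential process. The three-step process, the speed model, and the numbers are all invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def deduce_in_vivo_rates(k_vitro, speed_fn, target_speed):
    """Find in-vivo rates as close as possible to the in-vitro rates
    (distance taken as the Euclidean norm of log rate ratios -- an
    assumption for illustration) subject to reproducing a known global
    observable such as the overall in-vivo speed."""
    x0 = np.zeros_like(k_vitro)                 # x = ln(k_vivo / k_vitro)
    cons = {"type": "eq",
            "fun": lambda x: speed_fn(k_vitro * np.exp(x)) - target_speed}
    res = minimize(lambda x: np.sum(x**2), x0, constraints=[cons])
    return k_vitro * np.exp(res.x)

# Toy 3-step sequential process: overall speed = 1 / sum(1 / k_i)
k_vitro = np.array([50.0, 10.0, 30.0])
speed = lambda k: 1.0 / np.sum(1.0 / k)
k_vivo = deduce_in_vivo_rates(k_vitro, speed, target_speed=8.0)
print(np.round(k_vivo, 2), round(speed(k_vivo), 2))
```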
IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.
Huang, Lihan
2017-12-04
The objective of this work is to develop and validate a unified optimization algorithm for one-step global regression analysis of isothermal growth and survival curves for the determination of kinetic parameters in predictive microbiology. The algorithm is combined with user-friendly graphical user interfaces (GUIs) to form a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide users through the data analysis process and the proper selection of initial parameters for different combinations of mathematical models. The software performs one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and the mathematical models. The current version is specifically designed for constructing tertiary models with time and temperature as the independent variables. The software was tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results show that the software provides accurate estimates of kinetic parameters. In addition, it can be used to improve experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
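A minimal sketch of the one-step global fitting idea described above, assuming a logistic primary growth model and a Ratkowsky-type square-root secondary model, with all isothermal curves fitted in a single regression. The data sets, model forms and starting values are hypothetical; this is not the IPMP-Global Fit implementation.

```python
# Minimal sketch of one-step global fitting (illustrative; not the IPMP-Global Fit code).
# Assumptions: logistic primary growth model and a Ratkowsky square-root secondary model
# linking growth rate to temperature; all isothermal curves are fitted in a single regression.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical isothermal data sets: (temperature C, times h, log10 counts)
datasets = [
    (10.0, np.array([0.0, 24, 48, 72]), np.array([3.0, 3.4, 4.5, 5.8])),
    (20.0, np.array([0.0, 8, 16, 24]),  np.array([3.0, 4.1, 6.0, 7.2])),
    (30.0, np.array([0.0, 4, 8, 12]),   np.array([3.0, 4.8, 6.9, 7.8])),
]

def logistic(t, y0, ymax, mu):
    # primary model: logistic growth in log10 counts (y(0)=y0, y(inf)=ymax)
    return ymax - np.log10(1 + (10 ** (ymax - y0) - 1) * np.exp(-2.303 * mu * t))

def mu_ratkowsky(T, b, Tmin):
    # secondary model: sqrt(mu) = b * (T - Tmin)
    return (b * (T - Tmin)) ** 2

def residuals(p):
    y0, ymax, b, Tmin = p
    res = []
    for T, t, y in datasets:
        res.append(logistic(t, y0, ymax, mu_ratkowsky(T, b, Tmin)) - y)
    return np.concatenate(res)

fit = least_squares(residuals, x0=[3.0, 8.0, 0.02, 0.0])
print("y0, ymax, b, Tmin =", fit.x)
```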
Redox and Reactive Oxygen Species Regulation of Mitochondrial Cytochrome c Oxidase Biogenesis
Bourens, Myriam; Fontanesi, Flavia; Soto, Iliana C.; Liu, Jingjing
2013-01-01
Abstract Significance: Cytochrome c oxidase (COX), the last enzyme of the mitochondrial respiratory chain, is the major oxygen consumer enzyme in the cell. COX biogenesis involves several redox-regulated steps. The process is highly regulated to prevent the formation of pro-oxidant intermediates. Recent Advances: Regulation of COX assembly involves several reactive oxygen species and redox-regulated steps. These include: (i) Intricate redox-controlled machineries coordinate the expression of COX isoenzymes depending on the environmental oxygen concentration. (ii) COX is a heme A-copper metalloenzyme. COX copper metallation involves the copper chaperone Cox17 and several other recently described cysteine-rich proteins, which are oxidatively folded in the mitochondrial intermembrane space. Copper transfer to COX subunits 1 and 2 requires concomitant transfer of redox power. (iii) To avoid the accumulation of reactive assembly intermediates, COX is regulated at the translational level to minimize synthesis of the heme A-containing Cox1 subunit when assembly is impaired. Critical Issues: An increasing number of regulatory pathways converge to facilitate efficient COX assembly, thus preventing oxidative stress. Future Directions: Here we review the redox-regulated COX biogenesis steps and discuss their physiological relevance. Forthcoming insights into the precise regulation of mitochondrial COX biogenesis in normal and stress conditions will likely open future perspectives for understanding mitochondrial redox regulation and prevention of oxidative stress. Antioxid. Redox Signal. 19, 1940–1952. PMID:22937827
A feature refinement approach for statistical interior CT reconstruction
NASA Astrophysics Data System (ADS)
Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong
2016-07-01
Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criterion of penalized weighted least-squares (PWLS-TV). In the implementation of the proposed method, the interior projection extrapolation-based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than other conventional methods in suppressing noise, reducing truncation and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.
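The following toy sketch illustrates a PWLS-TV objective of the kind described above, minimized by plain gradient descent on a 1D signal. The system matrix, statistical weights, smoothing parameter and step size are hypothetical; the paper's feature refinement step and modified steepest descent algorithm are not reproduced.

```python
# Minimal sketch of a PWLS-TV objective and a plain gradient-descent update (illustrative only).
# Assumptions: A is a toy (truncated) projection operator, W holds statistical weights.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                        # 1D toy "image"
A = rng.normal(size=(40, n)) / np.sqrt(n)     # hypothetical projection operator
x_true = np.zeros(n); x_true[20:40] = 1.0
y = A @ x_true + rng.normal(scale=0.01, size=40)
W = np.eye(40)                                # statistical weights (identity for simplicity)
beta, eps = 0.05, 1e-6                        # TV strength, smoothing to make TV differentiable

def objective(x):
    r = y - A @ x
    tv = np.sum(np.sqrt(np.diff(x) ** 2 + eps))
    return r @ W @ r + beta * tv

def gradient(x):
    g_fid = -2 * A.T @ W @ (y - A @ x)        # gradient of the weighted data-fidelity term
    d = np.diff(x)
    w = d / np.sqrt(d ** 2 + eps)
    g_tv = np.zeros_like(x)
    g_tv[:-1] -= w                            # each TV term couples neighbouring pixels
    g_tv[1:] += w
    return g_fid + beta * g_tv

x = np.zeros(n)
for _ in range(500):                          # fixed small step; a line search would be better
    x -= 0.05 * gradient(x)
print("final objective:", objective(x))
```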
Trimming Line Design using New Development Method and One Step FEM
NASA Astrophysics Data System (ADS)
Chung, Wan-Jin; Park, Choon-Dal; Yang, Dong-yol
2005-08-01
In most automobile panel manufacturing, trimming is generally performed prior to flanging. Finding a feasible trimming line is crucial for obtaining an accurate edge profile after flanging. The section-based method develops the blank along section planes and finds the trimming line by generating a loop of end points. This method suffers from inaccurate results in regions with out-of-section motion. On the other hand, the simulation-based method can produce a more accurate trimming line through an iterative strategy. However, due to time limitations and the lack of information in the initial die design, it is still not widely accepted in industry. In this study, a new, fast method to find a feasible trimming line is proposed. One-step FEM is used to analyze the flanging process because the desired final shape after flanging can be defined and most strain paths in flanging are simple. When one-step FEM is used, the main obstacle is the generation of the initial guess. A robust initial guess generation method is developed to handle badly shaped meshes, very different mesh sizes and undercut parts. The new method develops the 3D triangular mesh in a propagational way from the final mesh onto the drawing tool surface. In order to remedy mesh distortion during development, an energy minimization technique is utilized. The trimming line is extracted from the outer boundary after the one-step FEM simulation. This method offers many benefits, since the trimming line can be obtained in the early design stage. The developed method has been successfully applied to complex industrial applications such as flanging of fender and door outer panels.
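As a rough illustration of the energy minimization idea used to remedy mesh distortion during development, the sketch below flattens a small triangulated patch by minimizing a spring energy that preserves the 3D edge lengths. The mesh, edge list and optimizer are assumptions for illustration; the paper's propagational development onto the drawing tool surface is not reproduced.

```python
# Minimal sketch of mesh development by energy minimization (illustrative only).
# Assumption: "development" is approximated by finding planar node positions that
# preserve the 3D edge lengths, via a spring-energy minimization.
import numpy as np
from scipy.optimize import minimize

# Hypothetical 3D nodes of a small triangulated patch and its edges (node index pairs)
nodes3d = np.array([[0, 0, 0], [1, 0, 0.2], [1, 1, 0.4], [0, 1, 0.2], [0.5, 0.5, 0.5]], float)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
rest = [np.linalg.norm(nodes3d[i] - nodes3d[j]) for i, j in edges]  # target edge lengths

def energy(flat):
    p = flat.reshape(-1, 2)                  # planar (developed) node positions
    e = 0.0
    for (i, j), L in zip(edges, rest):
        e += (np.linalg.norm(p[i] - p[j]) - L) ** 2   # spring energy for each edge
    return e

x0 = nodes3d[:, :2].ravel()                  # initial guess: drop the z coordinate
res = minimize(energy, x0, method="BFGS")
print("developed 2D nodes:\n", res.x.reshape(-1, 2))
```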
Endoclip Magnetic Resonance Imaging Screening: A Local Practice Review.
Accorsi, Fabio; Lalonde, Alain; Leswick, David A
2018-05-01
Not all endoscopically placed clips (endoclips) are magnetic resonance imaging (MRI) compatible. At many institutions, endoclip screening is part of the pre-MRI screening process. Our objective is to determine the contribution of each step of this endoclip screening protocol in determining a patient's endoclip status at our institution. A retrospective review of patients' endoscopic histories on general MRI screening forms for patients scanned during a 40-day period was performed to assess the percentage of patients that require endoclip screening at our institution. Following this, a prospective evaluation of 614 patients' endoclip screening determined the percentage of these patients ultimately exposed to each step in the protocol (exposure), and the percentage of patients whose endoclip status was determined with reasonable certainty by each step (determination). Exposure and determination values for each step were calculated as follows (exposure, determination): verbal interview (100%, 86%), review of past available imaging (14%, 36%), review of endoscopy report (9%, 57%), and new abdominal radiograph (4%, 96%), or CT (0.2%, 100%) for evaluation of potential endoclips. Only 1 patient did not receive MRI because of screening (in situ gastrointestinal endoclip identified). Verbal interview is invaluable to endoclip screening, clearing 86% of patients with minimal monetary and time investment. Conversely, the limited availability of endoscopy reports and relevant past imaging somewhat restricts the determination rates of these. New imaging (radiograph or computed tomography) is required <5% of the time, and although costly and associated with patient irradiation, has excellent determination rates (above 96%) when needed. Copyright © 2017 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
Focal cryotherapy: step by step technique description
Redondo, Cristina; Srougi, Victor; da Costa, José Batista; Baghdad, Mohammed; Velilla, Guillermo; Nunes-Silva, Igor; Bergerat, Sebastien; Garcia-Barreras, Silvia; Rozet, François; Ingels, Alexandre; Galiano, Marc; Sanchez-Salas, Rafael; Barret, Eric; Cathelineau, Xavier
2017-01-01
ABSTRACT Introduction and objective: Focal cryotherapy emerged as an efficient option to treat favorable and localized prostate cancer (PCa). The purpose of this video is to describe the procedure step by step. Materials and methods: We present the case of a 68 year-old man with localized PCa in the anterior aspect of the prostate. Results: The procedure is performed under general anesthesia, with the patient in lithotomy position. Briefly, the equipment utilized includes the cryotherapy console coupled with an ultrasound system, argon and helium gas bottles, cryoprobes, temperature probes and an urethral warming catheter. The procedure starts with a real-time trans-rectal prostate ultrasound, which is used to outline the prostate, the urethra and the rectal wall. The cryoprobes are pretested and placed into the prostate through the perineum, following a grid template, along with the temperature sensors under ultrasound guidance. A cystoscopy confirms the correct positioning of the needles and the urethral warming catheter is installed. Thereafter, the freeze sequence with argon gas is started, achieving extremely low temperatures (-40°C) to induce tumor cell lysis. Sequentially, the thawing cycle is performed using helium gas. This process is repeated once. Results among several series showed a biochemical disease-free survival between 71-93% at 9-70 months of follow-up, incontinence rates between 0-3.6% and erectile dysfunction between 0-42% (1-5). Conclusions: Focal cryotherapy is a feasible procedure to treat anterior PCa that may offer minimal morbidity, allowing good cancer control and better functional outcomes when compared to whole-gland treatment. PMID:28727387
Impact of enhanced sensory input on treadmill step frequency: infants born with myelomeningocele.
Pantall, Annette; Teulier, Caroline; Smith, Beth A; Moerchen, Victoria; Ulrich, Beverly D
2011-01-01
To determine the effect of enhanced sensory input on the step frequency of infants with myelomeningocele (MMC) when supported on a motorized treadmill. Twenty-seven infants aged 2 to 10 months with MMC lesions at, or caudal to, L1 participated. We supported infants upright on the treadmill for 2 sets of 6 trials, each 30 seconds long. Enhanced sensory inputs within each set were presented in random order and included baseline, visual flow, unloading, weights, Velcro, and friction. Overall, friction and visual flow significantly increased step rate, particularly for the older subjects. Friction and Velcro increased stance-phase duration. Enhanced sensory input had minimal effect on leg activity when infants were not stepping. Increased friction via Dycem and enhancing visual flow via a checkerboard pattern on the treadmill belt appear to be more effective than the traditional smooth black belt surface for eliciting stepping patterns in infants with MMC.
Ratcliffe, Elizabeth; Hourd, Paul; Guijarro-Leach, Juan; Rayment, Erin; Williams, David J; Thomas, Robert J
2013-01-01
Commercial regenerative medicine will require large quantities of clinical-specification human cells. The cost and quality of manufacture is notoriously difficult to control due to highly complex processes with poorly defined tolerances. As a step to overcome this, we aimed to demonstrate the use of 'quality-by-design' tools to define the operating space for economic passage of a scalable human embryonic stem cell production method with minimal cell loss. Design of experiments response surface methodology was applied to generate empirical models to predict optimal operating conditions for a unit of manufacture of a previously developed automatable and scalable human embryonic stem cell production method. Two models were defined to predict cell yield and cell recovery rate postpassage, in terms of the predictor variables of media volume, cell seeding density, media exchange and length of passage. Predicted operating conditions for maximized productivity were successfully validated. Such 'quality-by-design' type approaches to process design and optimization will be essential to reduce the risk of product failure and patient harm, and to build regulatory confidence in cell therapy manufacturing processes.
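A minimal sketch of the response-surface ("quality-by-design") idea described above: a quadratic model is fitted to hypothetical design-of-experiments data for two factors (media volume and seeding density) and the predicted optimum is located within the experimental region. All factor names, levels and yields below are invented for illustration and are not the study's data.

```python
# Minimal sketch of a response-surface ('quality-by-design') fit (illustrative only;
# factor names, levels and yields are hypothetical, not the study's data).
import numpy as np
from scipy.optimize import minimize

# Hypothetical central-composite-style design: media volume (mL), seeding density (1e4/cm2)
X = np.array([[2, 1], [2, 3], [4, 1], [4, 3], [3, 2], [3, 2],
              [1.6, 2], [4.4, 2], [3, 0.6], [3, 3.4]], float)
yield_obs = np.array([3.1, 3.8, 3.5, 4.6, 4.9, 5.0, 3.0, 4.1, 3.2, 4.0])  # 1e6 cells

def design_matrix(X):
    v, d = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(v), v, d, v * d, v ** 2, d ** 2])

coef, *_ = np.linalg.lstsq(design_matrix(X), yield_obs, rcond=None)  # quadratic model

def predicted_yield(x):
    return design_matrix(np.atleast_2d(x))[0] @ coef

# Locate the predicted optimum inside the experimental region
opt = minimize(lambda x: -predicted_yield(x), x0=[3.0, 2.0],
               bounds=[(1.6, 4.4), (0.6, 3.4)])
print("predicted best (volume, density):", opt.x, "yield:", predicted_yield(opt.x))
```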
Stent manufacturing using cobalt chromium molybdenum (CoCrMo) by selective laser melting technology
NASA Astrophysics Data System (ADS)
Omar, Mohd Asnawi; Baharudin, BT-HT; Sulaiman, S.
2017-12-01
This paper reviews the capabilities of additive manufacturing (AM) technology and the use of a cobalt super alloy for stent fabrication, focusing on the dimensional accuracy and mechanical properties of the stent. The current conventional route involves many process steps, which affect the supply chain, cost, and post-processing. By switching to AM, the number of production steps can be minimized and stents can be customized to the patient's needs. The proposed methodology is well suited to surgeons' need for an accurately sized stent during implantation. It can also reduce time-to-market from days to hours. The stent model was obtained from a third-party vendor and flow optimization was carried out using Materialise Magics™ software. Using an SLM125™ printer, printing parameters such as energy density (DE), laser power (PL), scanning speed (SS) and hatching distance (DH) were used to print the stent. The properties of the finished product, such as strength, surface finish and orientation, were investigated.
XFEL diffraction: Developing processing methods to optimize data quality
Sauter, Nicholas K.
2015-01-29
Serial crystallography, using either femtosecond X-ray pulses from free-electron laser sources or short synchrotron-radiation exposures, has the potential to reveal metalloprotein structural details while minimizing damage processes. However, deriving a self-consistent set of Bragg intensities from numerous still-crystal exposures remains a difficult problem, with optimal protocols likely to be quite different from those well established for rotation photography. Here several data processing issues unique to serial crystallography are examined. It is found that the limiting resolution differs for each shot, an effect that is likely to be due to both the sample heterogeneity and pulse-to-pulse variation in experimental conditions. Shots with lower resolution limits produce lower-quality models for predicting Bragg spot positions during the integration step. Also, still shots by their nature record only partial measurements of the Bragg intensity. An approximate model that corrects to the full-spot equivalent (with the simplifying assumption that the X-rays are monochromatic) brings the distribution of intensities closer to that expected from an ideal crystal, and improves the sharpness of anomalous difference Fourier peaks indicating metal positions.
Design for validation: An approach to systems validation
NASA Technical Reports Server (NTRS)
Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)
1989-01-01
Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and the system life-cycle) are provided, and it is shown how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.
Mackay, Stephen; Gomes, Eduardo; Holliger, Christof; Bauer, Rolene; Schwitzguébel, Jean-Paul
2015-06-01
Despite recent advances in downstream processing, production of microalgae remains substantially limited for economic reasons. Harvesting and dewatering are the most energy-intensive processing steps in their production and contribute 20-30% of the total operational cost. Bio-flocculation of microalgae by co-cultivation with filamentous fungi relies on the development of large structures that facilitate cost-effective harvesting. A previously unknown filamentous fungus was isolated as a contaminant from a microalgal culture and identified as Isaria fumosorosea. Blastospore production was optimized in minimal medium, and the development of pellets, possibly lichens, was followed when co-cultured with Chlorella sorokiniana under strictly autotrophic conditions. Stable pellets (1-2 mm) formed rapidly at pH 7-8, clearing the medium of free algal cells. Biomass was harvested with large inexpensive filters, generating a wet slurry suitable for hydrothermal gasification. Nutrient-rich brine from the aqueous phase of hydrothermal gasification supported growth of the fungus and may increase the process sustainability. Copyright © 2015 Elsevier Ltd. All rights reserved.
Generalized trajectory surface-hopping method for internal conversion and intersystem crossing
NASA Astrophysics Data System (ADS)
Cui, Ganglong; Thiel, Walter
2014-09-01
Trajectory-based fewest-switches surface-hopping (FSSH) dynamics simulations have become a popular and reliable theoretical tool to simulate nonadiabatic photophysical and photochemical processes. Most available FSSH methods model internal conversion. We present a generalized trajectory surface-hopping (GTSH) method for simulating both internal conversion and intersystem crossing processes on an equal footing. We consider hops between adiabatic eigenstates of the non-relativistic electronic Hamiltonian (pure spin states), which is appropriate for sufficiently small spin-orbit coupling. This choice allows us to make maximum use of existing electronic structure programs and to minimize the changes to available implementations of the traditional FSSH method. The GTSH method is formulated within the quantum mechanics (QM)/molecular mechanics framework, but can of course also be applied at the pure QM level. The algorithm implemented in the GTSH code is specified step by step. As an initial GTSH application, we report simulations of the nonadiabatic processes in the lowest four electronic states (S0, S1, T1, and T2) of acrolein both in vacuo and in acetonitrile solution, in which the acrolein molecule is treated at the ab initio complete-active-space self-consistent-field level. These dynamics simulations provide detailed mechanistic insight by identifying and characterizing two nonadiabatic routes to the lowest triplet state, namely, direct S1 → T1 hopping as major pathway and sequential S1 → T2 → T1 hopping as minor pathway, with the T2 state acting as a relay state. They illustrate the potential of the GTSH approach to explore photoinduced processes in complex systems, in which intersystem crossing plays an important role.
MO-E-BRE-01: Determination, Minimization and Communication of Uncertainties in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Dyk, J; Palta, J; Bortfeld, T
2014-06-15
Medical Physicists have a general understanding of uncertainties in the radiation treatment process, both with respect to dosimetry and geometry. However, there is a desire to be more quantitative about uncertainty estimation. A recent International Atomic Energy Agency (IAEA) report (about to be published) recommends that we should be as "accurate as reasonably achievable, technical and biological factors being taken into account". Thus, a single recommendation as a goal for accuracy in radiation therapy is an oversimplification. That report also suggests that individual clinics should determine their own level of uncertainties for their specific treatment protocols. The question is "how do we implement this in clinical practice?" AAPM Monograph 35 (2011 AAPM Summer School) addressed many specific aspects of uncertainties in each of the steps of a course of radiation treatment. The intent of this symposium is: (1) to review uncertainty considerations in the entire radiation treatment process including uncertainty determination for each step and uncertainty propagation for the total process, (2) to consider aspects of robust optimization which optimizes treatment plans while protecting them against uncertainties, and (3) to describe various methods of displaying uncertainties and communicating uncertainties to the relevant professionals. While the theoretical and research aspects will also be described, the emphasis will be on the practical considerations for the medical physicist in clinical practice. Learning Objectives: To review uncertainty determination in the overall radiation treatment process. To consider uncertainty modeling and uncertainty propagation. To highlight the basic ideas and clinical potential of robust optimization procedures to generate optimal treatment plans that are not severely affected by uncertainties. To describe methods of uncertainty communication and display.
NASA Astrophysics Data System (ADS)
Bouillard, Jacques X.; Vignes, Alexis
2014-02-01
In this paper, an inhalation health and explosion safety risk assessment methodology for nanopowders is described. Since toxicological threshold limit values are still unknown for nanosized substances, detailed risk assessments of specific plants cannot yet be carried out. A simple approach based on occupational hazard/exposure bands expressed in mass concentrations is proposed for nanopowders. This approach is consolidated with an iso-surface toxicological scaling method which, although incomplete, has the merit of providing concentration threshold levels for which new metrological instruments should be developed for proper air monitoring to ensure safety. Whenever the processing or use of nanomaterials introduces a risk to the worker, a specific nano pictogram is proposed to inform the worker. Examples of risk assessment of process equipment (i.e., containment valves) handling various nanomaterials are provided. Explosion risks related to very reactive nanomaterials such as aluminum nanopowders can be assessed using this new analysis methodology adapted to nanopowders. It is nevertheless found that, to formalize and extend this approach, it is absolutely necessary to develop new relevant standard apparatuses and to qualify individual and collective safety barriers with respect to health and explosion risks. In spite of these uncertainties, it appears, as shown in the second paper (Part II), that health and explosion risks, evaluated for given MWCNTs and aluminum nanoparticles, remain manageable in continuous fabrication mode, considering current individual and collective safety barriers that can be put in place. The authors would, however, underline that particular attention must be paid to non-continuous modes of operation, such as process equipment cleaning steps, which are often under-analyzed and too often forgotten, yet are critical steps requiring vigilance in order to minimize potential toxic and explosion risks.
Germanium photodetectors fabricated on 300 mm silicon wafers for near-infrared focal plane arrays
NASA Astrophysics Data System (ADS)
Zeller, John W.; Rouse, Caitlin; Efstathiadis, Harry; Dhar, Nibir K.; Wijewarnasuriya, Priyalal; Sood, Ashok K.
2017-09-01
SiGe p-i-n photodetectors have been fabricated on 300 mm (12") diameter silicon (Si) wafers utilizing high throughput, large-area complementary metal-oxide semiconductor (CMOS) technologies. These Ge photodetectors are designed to operate in room temperature environments without cooling, and thus have potential size and cost advantages over conventional cooled infrared detectors. The two-step fabrication process for the p-i-n photodetector devices, designed to minimize the formation of defects and threading dislocations, involves low temperature epitaxial growth of a thin p+ (boron) Ge seed/buffer layer, followed by higher temperature deposition of a thicker Ge intrinsic layer. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) demonstrated uniform layer compositions with well defined layer interfaces and reduced dislocation density. Time-of-flight secondary ion mass spectroscopy (TOF-SIMS) was likewise employed to analyze the doping levels of the p+ and n+ layers. Current-voltage (I-V) measurements demonstrated that these SiGe photodetectors, when exposed to incident visible-NIR radiation, exhibited dark currents down below 1 μA and significant enhancement in photocurrent at -1 V. The zero-bias photocurrent was also relatively high, showing a minimal drop compared to that at -1 V bias.
PELE web server: atomistic study of biomolecular systems at your fingertips.
Madadkar-Sobhani, Armin; Guallar, Victor
2013-07-01
PELE, Protein Energy Landscape Exploration, our novel technology based on protein structure prediction algorithms and Monte Carlo sampling, is capable of modelling all-atom protein-ligand dynamical interactions in an efficient and fast manner, with a computational cost two orders of magnitude lower than traditional molecular dynamics techniques. PELE's heuristic approach generates trial moves based on protein and ligand perturbations followed by side chain sampling and global/local minimization. The collection of accepted steps forms a stochastic trajectory. Furthermore, several processors may be run in parallel towards a collective goal or to define several independent trajectories; the whole procedure has been parallelized using the Message Passing Interface. Here, we introduce the PELE web server, designed to make the whole process of running simulations easier and more practical by minimizing input file demands, providing a user-friendly interface and producing abstract outputs (e.g. interactive graphs and tables). The web server has been implemented in C++ using Wt (http://www.webtoolkit.eu) and MySQL (http://www.mysql.com). The PELE web server, accessible at http://pele.bsc.es, is free and open to all users with no login requirement.
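The following toy sketch mimics the perturbation / minimization / Metropolis-acceptance cycle described above on a simple 2D energy function. It is illustrative only: it does not reproduce PELE's force field, ligand perturbations or side-chain sampling, and the temperature and step sizes are arbitrary assumptions.

```python
# Minimal sketch of a perturbation / minimization / Metropolis-acceptance loop in the spirit
# of the description above (illustrative only; the 2D 'energy landscape' is a toy function).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
kT = 0.6                                               # arbitrary acceptance temperature

def energy(x):
    # toy rugged landscape standing in for a protein-ligand energy function
    return 0.1 * np.sum(x ** 2) + np.sin(3 * x[0]) * np.cos(2 * x[1])

x = np.array([2.0, -1.5])
trajectory = [x.copy()]
for _ in range(200):
    trial = x + rng.normal(scale=0.5, size=2)          # perturbation move
    trial = minimize(energy, trial, method="L-BFGS-B").x  # local relaxation
    dE = energy(trial) - energy(x)
    if dE < 0 or rng.random() < np.exp(-dE / kT):      # Metropolis criterion
        x = trial
        trajectory.append(x.copy())                    # accepted steps form the trajectory

print("accepted steps:", len(trajectory) - 1, "final energy:", energy(x))
```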
A computer-controlled apparatus for micrometric drop deposition at liquid surfaces
NASA Astrophysics Data System (ADS)
Peña-Polo, Franklin; Trujillo, Leonardo; Sigalotti, Leonardo Di G.
2010-05-01
A low-cost, automated apparatus has been used to perform micrometric deposition of small pendant drops onto a quiet liquid surface. The approach of the drop to the surface is obtained by means of discrete, micron-scale translations in order to achieve deposition at adiabatically zero velocity. This process is not only widely used in scientific investigations in fluid mechanics and thermal sciences but also in engineering and biomedical applications. The apparatus has been designed to produce accurate deposition onto the surface and minimize the vibrations induced in the drop by the movement of the capillary tip. Calibration tests of the apparatus have shown that a descent of the drop by discrete translational steps of ˜5.6 μm and duration of 150-200 ms is sufficient to minimize its penetration depth into the liquid when it touches the surface layer and reduce to a level of noise the vibrations transmitted to it by the translation of the dispenser. Different settings of the experimental setup can be easily implemented for use in a variety of other applications, including deposition onto solid surfaces, surface tension measurements of pendant drops, and wire bonding in microelectronics.
NASA Astrophysics Data System (ADS)
Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad
2017-11-01
Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining; one machining process that can be performed on a CNC machine is turning. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to select machining parameters that minimize both processing time and environmental impact. This research developed a multi-objective optimization model to minimize processing time and environmental impact in the CNC turning process, yielding optimal values of the decision variables, cutting speed and feed rate. Environmental impact is converted from the environmental burden through the use of eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
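As a hedged illustration of such a multi-objective formulation, the sketch below minimizes a weighted sum of machining time and an eco-indicator-style environmental impact over cutting speed and feed rate. The turning-time and power models, the eco-indicator conversion factor, the weight and the parameter bounds are all assumptions for illustration; this is not the paper's OptQuest model.

```python
# Minimal sketch of a weighted-sum multi-objective optimization of turning parameters
# (illustrative only; all models and constants below are hypothetical).
import numpy as np
from scipy.optimize import minimize

D, L = 50.0, 100.0        # workpiece diameter and length (mm), hypothetical
P_idle = 1.5              # machine idle power (kW), hypothetical
eco_per_kWh = 26.0        # eco-indicator 99 points per kWh of electricity, assumed

def machining_time(x):                       # minutes
    v, f = x                                 # cutting speed (m/min), feed (mm/rev)
    return np.pi * D * L / (1000.0 * v * f)

def environmental_impact(x):                 # eco-indicator points
    v, f = x
    P_cut = 0.005 * v ** 1.5 * f             # crude cutting-power model (kW), assumed to
    energy_kWh = (P_idle + P_cut) * machining_time(x) / 60.0   # grow superlinearly with speed
    return eco_per_kWh * energy_kWh

w = 0.5                                      # weight between the two objectives
obj = lambda x: w * machining_time(x) + (1 - w) * environmental_impact(x)
res = minimize(obj, x0=[150.0, 0.2],
               bounds=[(60.0, 300.0), (0.05, 0.4)])   # speed and feed limits, assumed
print("optimal cutting speed, feed:", res.x)
```

Varying the weight w traces out different trade-offs between the two objectives, which is the usual way a weighted-sum formulation approximates a Pareto front.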
5 CFR 581.203 - Information minimally required to accompany legal process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...
5 CFR 581.203 - Information minimally required to accompany legal process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...
5 CFR 581.203 - Information minimally required to accompany legal process.
Code of Federal Regulations, 2013 CFR
2013-01-01
... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...
5 CFR 581.203 - Information minimally required to accompany legal process.
Code of Federal Regulations, 2012 CFR
2012-01-01
... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...
5 CFR 581.203 - Information minimally required to accompany legal process.
Code of Federal Regulations, 2010 CFR
2010-01-01
... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...
Ejupi, Andreas; Gschwind, Yves J; Brodie, Matthew; Zagler, Wolfgang L; Lord, Stephen R; Delbaere, Kim
2016-01-01
Quick protective reactions such as reaching or stepping are important to avoid a fall or minimize injuries. We developed Kinect-based choice reaching and stepping reaction time tests (Kinect-based CRTs) and evaluated their ability to differentiate between older fallers and non-fallers and the feasibility of administering them at home. A total of 94 community-dwelling older people were assessed on the Kinect-based CRTs in the laboratory and were followed up for falls for 6 months. Additionally, a subgroup (n = 20) conducted the Kinect-based CRTs at home. Signal processing algorithms were developed to extract reaction, movement and total time features from the Kinect skeleton data. Nineteen participants (20.2%) reported a fall in the 6 months following the assessment. The reaction time (fallers: 797 ± 136 ms, non-fallers: 714 ± 89 ms), movement time (fallers: 392 ± 50 ms, non-fallers: 358 ± 51 ms) and total time (fallers: 1189 ± 170 ms, non-fallers: 1072 ± 109 ms) of the reaching reaction time test differentiated well between the fallers and non-fallers. The stepping reaction time test did not significantly discriminate between the two groups in the prospective study. The correlations between the laboratory and in-home assessments were 0.689 for the reaching reaction time and 0.860 for the stepping reaction time. The study findings indicate that the Kinect-based CRT tests are feasible to administer in clinical and in-home settings, and thus represent an important step towards the development of sensor-based fall risk self-assessments. With further validation, the assessments may prove useful as a fall risk screen and as home-based measures for monitoring changes over time and the effects of fall prevention interventions.
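A minimal sketch of how reaction, movement and total times might be extracted from a tracked-joint trajectory, using a velocity threshold for movement onset and a displacement threshold for target reach. The thresholds, frame rate and synthetic trace are assumptions for illustration and do not reproduce the study's signal processing algorithms.

```python
# Minimal sketch of extracting reaction, movement and total times from a tracked-joint
# trajectory (illustrative only; thresholds, sampling rate and the synthetic trace are assumed).
import numpy as np

fs = 30.0                                   # assumed Kinect frame rate (Hz)
t = np.arange(0, 3, 1 / fs)
stimulus_time = 0.5                         # s, when the visual target appears
# synthetic hand position: still, then a smooth 0.4 m reach starting after the stimulus
pos = 0.4 / (1 + np.exp(-12 * (t - 1.5)))

speed = np.abs(np.gradient(pos, 1 / fs))    # frame-to-frame speed (m/s)
moving = speed > 0.05                       # movement-onset threshold (m/s), assumed

onset_idx = np.argmax(moving & (t > stimulus_time))   # first moving frame after stimulus
reach_idx = np.argmax(pos > 0.9 * pos[-1])            # target considered reached at 90%

reaction_time = t[onset_idx] - stimulus_time
movement_time = t[reach_idx] - t[onset_idx]
print(f"reaction {reaction_time*1000:.0f} ms, movement {movement_time*1000:.0f} ms, "
      f"total {(reaction_time + movement_time)*1000:.0f} ms")
```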
Solving the infeasible trust-region problem using approximations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott
2004-07-01
The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs present some noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations for both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. The experience in the mathematical community has shown that more effective algorithms can be obtained by the explicit inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The local problem with explicit constraints bounded by the trust region, however, may have no feasible solution. In order to remedy this problem the mathematical community has developed different versions of a composite-steps approach. This approach consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the normal-tangential two-step algorithms. In this paper, a description of the similarities is presented and an expansion of the two-step algorithm is presented for the case of approximations.
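The sketch below illustrates a single composite-step iteration of the kind described above: a normal step toward feasibility followed by a tangential step in the null space of the constraint Jacobian, both truncated to a trust region. The test problem, step rules and the 80% normal-step fraction are simplified assumptions for illustration; this is not the IPTRSAO algorithm.

```python
# Minimal sketch of a single composite-step (normal + tangential) trust-region iteration
# (illustrative only). Toy problem: minimize f subject to one linear equality constraint c(x)=0.
import numpy as np

def f(x):      return (x[0] - 2) ** 2 + (x[1] - 1) ** 2
def grad_f(x): return np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])
def c(x):      return np.array([x[0] + x[1] - 1.0])
def jac_c(x):  return np.array([[1.0, 1.0]])

x, delta = np.array([0.2, 0.9]), 0.5          # current iterate and trust-region radius

# Normal step: Gauss-Newton step toward feasibility, truncated to 80% of the trust region
A = jac_c(x)
pn = -np.linalg.pinv(A) @ c(x)
if np.linalg.norm(pn) > 0.8 * delta:
    pn *= 0.8 * delta / np.linalg.norm(pn)

# Tangential step: steepest descent projected onto the null space of A (keeps A @ pt = 0),
# truncated so the combined step stays inside the trust region
P = np.eye(2) - A.T @ np.linalg.pinv(A.T)     # orthogonal projector onto null(A)
pt = -P @ grad_f(x + pn)
room = np.sqrt(max(delta ** 2 - np.linalg.norm(pn) ** 2, 0.0))
if np.linalg.norm(pt) > room:
    pt *= room / np.linalg.norm(pt)

x_new = x + pn + pt
print("violation before/after:", abs(c(x))[0], abs(c(x_new))[0], "f:", f(x), "->", f(x_new))
```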
Nuclear Weapons and the Future: An "Unthinkable" Proposal.
ERIC Educational Resources Information Center
Tyler, Robert L.
1982-01-01
The author looks ahead 30 or 40 years to see what might come of the nuclear weapons predicament. As a minimal first step in the campaign against nuclear warfare, he suggests a unilateral and complete disarmament by the United States. (AM)