Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection
NASA Astrophysics Data System (ADS)
Brasche, L. J. H.; Lopez, R.; Eisenmann, D.
2006-03-01
Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multi-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d), usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness, recording of the UVA image, and in some cases formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.
Parallel workflow tools to facilitate human brain MRI post-processing
Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang
2015-01-01
Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043
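To make the workflow idea above concrete, here is a minimal sketch in Python using only the standard library; the stage functions, subject IDs, and worker count are illustrative placeholders, not the API of any reviewed tool. Steps for one subject depend on each other and run serially, while independent subjects run in parallel.

```python
# Minimal sketch of a parallel MRI post-processing workflow: independent
# subjects run concurrently, while each subject's steps run in sequence.
# The stage functions below are placeholders, not any specific tool's API.
from concurrent.futures import ProcessPoolExecutor

def skull_strip(subject):      # placeholder stage
    return f"{subject}:stripped"

def segment(data):             # placeholder stage
    return f"{data}:segmented"

def compute_measures(data):    # placeholder stage
    return f"{data}:measures"

def pipeline(subject):
    # Steps for one subject are dependent, so they run serially.
    return compute_measures(segment(skull_strip(subject)))

if __name__ == "__main__":
    subjects = ["sub-01", "sub-02", "sub-03", "sub-04"]
    # Subjects are independent, so they are processed in parallel.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(pipeline, subjects))
    print(results)
```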
Conceptual Transformation and Cognitive Processes in Origami Paper Folding
ERIC Educational Resources Information Center
Tenbrink, Thora; Taylor, Holly A.
2015-01-01
Research on problem solving typically does not address tasks that involve following detailed and/or illustrated step-by-step instructions. Such tasks are not seen as cognitively challenging problems to be solved. In this paper, we challenge this assumption by analyzing verbal protocols collected during an Origami folding task. Participants…
Actual operation and regulatory activities on steam generator replacement in Japan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saeki, Hitoshi
1997-02-01
This paper summarizes the operating reactors in Japan, and the status of the steam generators in these plants. It reviews plans for replacement of existing steam generators, and then goes into more detail on the planning and regulatory steps which must be addressed in the process of accomplishing this maintenance. The paper also reviews the typical steps involved in the process of removal and replacement of steam generators.
Supercritical Fluid Spray Application Process for Adhesives and Primers
2003-03-01
The basic scheme of the SFE process consists of three steps. A solvent, typically carbon dioxide, first is heated and pressurized to a supercritical...passivation step to remove contaminants and to prevent recontamination. Bok et al. (25) describe a pressure pulsation mechanism to stimulate improved...in as a liquid, and then it is heated to above its critical temperature to become a supercritical fluid. The sample is injected and dissolved into
NASA Technical Reports Server (NTRS)
Cerbins, F. C.; Huysman, B. P.; Knoedler, J. K.; Kwong, P. S.; Pieniazek, L. A.; Strom, S. W.
1986-01-01
This manual describes the operation and use of RELBET 4.0 implemented on the Hewlett Packard model 9000. The RELBET System is an integrated collection of computer programs which support the analysis and post-flight reconstruction of vehicle-to-vehicle relative trajectories of two on-orbit free-flying vehicles: the Space Shuttle Orbiter and some other free-flyer. The manual serves both as a reference and as a training guide. Appendices provide experienced users with details and full explanations of program usage. The body of the manual introduces new users to the system by leading them through a step-by-step example of a typical production. This should equip the new user both to execute a typical production process and to understand the most significant variables in that process.
The Automated Geospatial Watershed Assessment (AGWA) Urban tool provides a step-by-step process to model subdivisions using the KINEROS2 model, with and without Green Infrastructure (GI) practices. AGWA utilizes the Kinematic Runoff and Erosion (KINEROS2) model, an event driven, ...
A mechanism for leader stepping
NASA Astrophysics Data System (ADS)
Ebert, U.; Carlson, B. E.; Koehn, C.
2013-12-01
The stepping of negative leaders is well observed, but not well understood. A major problem consists of the fact that the streamer corona is typically invisible within a thunderstorm, but determines the evolution of a leader. Motivated by recent observations of streamer and leader formation in the laboratory by T.M.P. Briels, S. Nijdam, P. Kochkin, A.P.J. van Deursen et al., by recent simulations of these processes by J. Teunissen, A. Sun et al., and by our theoretical understanding of the process, we suggest how laboratory phenomena can be extrapolated to lightning leaders to explain the stepping mechanism.
Data-based control of a multi-step forming process
NASA Astrophysics Data System (ADS)
Schulte, R.; Frey, P.; Hildenbrand, P.; Vogel, M.; Betz, C.; Lechner, M.; Merklein, M.
2017-09-01
The fourth industrial revolution represents a new stage in the organization and management of the entire value chain. However, in the field of forming technology, the fourth industrial revolution has so far arrived only gradually. In order to make a valuable contribution to the digital factory, the control of a multi-stage forming process was investigated. Within the framework of the investigation, an abstracted and transferable model is used to outline which data have to be collected, how a tangible interface between the different forming machines can be designed, and which control tasks must be fulfilled. The goal of this investigation was to control the subsequent process step based on the data recorded in the first step. The investigated process chain links various metal forming processes that are typical elements of a multi-step forming process. Data recorded in the first step of the process chain are analyzed and processed for improved control of the subsequent process. On the basis of the scientific knowledge gained, it is possible to make forming operations more robust and at the same time more flexible, and thus to create the foundation for linking various production processes in an efficient way.
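As a toy illustration of the control concept (not the authors' actual implementation), the sketch below feeds a measurement from the first forming step forward into a parameter of the subsequent step; the variable names, nominal value, and linear gain are invented for the example.

```python
# Toy sketch of data-based multi-step forming control: a geometric
# measurement recorded after the first step adjusts a parameter of the
# next step. Names, nominal value, and gain are illustrative assumptions.
def first_step_measurement(blank_id):
    # Stand-in for sensor data recorded on the first forming machine.
    measured_thickness = 1.02  # mm, example value
    return measured_thickness

def control_second_step(measured_thickness, nominal=1.00, gain=0.5):
    # Feed-forward correction: a deviation in step 1 shifts the tool
    # stroke of step 2 to compensate.
    deviation = measured_thickness - nominal
    stroke_offset = -gain * deviation  # mm
    return stroke_offset

offset = control_second_step(first_step_measurement("blank-42"))
print(f"second-step stroke offset: {offset:+.3f} mm")
```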
Small Craft Advisory!: Cardboard Boat Challenges Students' Research, Design and Construction Skills
ERIC Educational Resources Information Center
Griffis, Kurt; Brand, Lance; Shackelford, Ray
2006-01-01
Throughout history, people have moved themselves and cargo across water in boats and other types of vessels. Most vessels are developed using a technological design process, which typically involves problem solving and a series of steps. The designer documents each step to provide an accurate record of accomplishments and information to guide…
How Long is my Toilet Roll?--A Simple Exercise in Mathematical Modelling
ERIC Educational Resources Information Center
Johnston, Peter R.
2013-01-01
The simple question of how much paper is left on my toilet roll is studied from a mathematical modelling perspective. As is typical with applied mathematics, models of increasing complexity are introduced and solved. Solutions produced at each step are compared with the solution from the previous step. This process exposes students to the typical…
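A worked example of the simplest such model (stated here as a plausible first step, not necessarily the article's exact sequence): equating the annular cross-section of the wound paper, outer radius R and core radius r, with the area of the unrolled strip of thickness t gives

```latex
% Simplest model: the annulus of paper (outer radius R, core radius r)
% has the same cross-sectional area as the unrolled strip of thickness t.
\pi\left(R^{2}-r^{2}\right) = L\,t
\qquad\Longrightarrow\qquad
L = \frac{\pi\left(R^{2}-r^{2}\right)}{t}
```

For illustrative values R = 55 mm, r = 20 mm, and t = 0.1 mm, this yields L = π(3025 − 400)/0.1 mm ≈ 82 m.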
Steinebach, Fabian; Müller-Späth, Thomas; Morbidelli, Massimo
2016-09-01
The economic advantages of continuous processing of biopharmaceuticals, which include smaller equipment and faster, efficient processes, have increased interest in this technology over the past decade. Continuous processes can also improve quality assurance and enable greater controllability, consistent with the quality initiatives of the FDA. Here, we discuss different continuous multi-column chromatography processes. Differences in the capture and polishing steps result in two different types of continuous processes that employ counter-current column movement. Continuous-capture processes are associated with increased productivity per cycle and decreased buffer consumption, whereas the typical purity-yield trade-off of classical batch chromatography can be surmounted by continuous processes for polishing applications. In the context of continuous manufacturing, different but complementary chromatographic columns or devices are typically combined to improve overall process performance and avoid unnecessary product storage. In the following, these various processes, their performances compared with batch processing and resulting product quality are discussed based on a review of the literature. Based on various examples of applications, primarily monoclonal antibody production processes, conclusions are drawn about the future of these continuous-manufacturing technologies.
Processing of fibre suspensions at ultra-high consistencies
Daniel F. Caulfield; Rodney E. Jacobson
2004-01-01
Typically the paper physicist considers pulp suspensions greater than 0.5% consistency as high consistency. In our research on cellulose fibre-reinforced engineering plastics, we have had to develop a two-step method for the processing of fibre suspensions at ultra-high consistencies (consistencies greater than 30%).
Powdered hide model for vegetable tanning
USDA-ARS?s Scientific Manuscript database
Powdered hide samples for this initial study of vegetable tanning were prepared from hides that were dehaired by a typical sulfide or oxidative process, and carried through the delime/bate step of a tanning process. In this study, we report on interactions of the vegetable tannin, quebracho with th...
Allouche, Joachim; Dupin, Jean-Charles; Gonbeau, Danielle
2011-07-14
Silica core-shell nanoparticles with an MSU shell have been synthesized using several non-ionic poly(ethylene oxide)-based surfactants via a two-step sol-gel method. The materials exhibit a typical worm-hole pore structure and tunable pore diameters between 2.4 nm and 5.8 nm.
NASA Astrophysics Data System (ADS)
Preuss, R.
2014-12-01
This article discusses the current capabilities of automated processing of image data, using the PhotoScan software by Agisoft as an example. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or more often on UAVs, is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in the local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric image application, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All aforementioned processing steps are implemented in a single program, in contrast to standard commercial software that divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps at predetermined control parameters. The paper presents the practical results of fully automatic generation of orthomosaics for both images obtained by a metric Vexcel camera and a block of images acquired by a non-metric UAV system.
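A hedged sketch of such a fully automated run, scripted against the Agisoft Python API (branded Metashape in current releases, PhotoScan in older ones); method names follow recent Metashape versions and should be verified against the installed release, and the file names are placeholders.

```python
# Hedged sketch of an automated photogrammetric pipeline using the Agisoft
# Python API. Method names follow recent Metashape releases (older
# PhotoScan versions differ, e.g. buildDenseCloud instead of
# buildPointCloud); verify against the installed version.
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG"])  # image list is illustrative

chunk.matchPhotos()        # image matching (tie points)
chunk.alignCameras()       # bundle adjustment / self-calibration
chunk.buildDepthMaps()     # dense matching
chunk.buildPointCloud()    # dense point cloud
chunk.buildDem()           # digital surface model
chunk.buildOrthomosaic()   # final georeferenced product

doc.save("project.psx")
```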
33 CFR 385.11 - Implementation process for projects.
Code of Federal Regulations, 2010 CFR
2010-07-01
... figure 1 in Appendix A of this part. Typical steps in this process involve: (a) Project Management Plan. The Project Management Plan describes the activities, tasks, and responsibilities that will be used to... effectiveness of the project and to provide information that will be used for the adaptive management program. ...
33 CFR 385.11 - Implementation process for projects.
Code of Federal Regulations, 2011 CFR
2011-07-01
... figure 1 in Appendix A of this part. Typical steps in this process involve: (a) Project Management Plan. The Project Management Plan describes the activities, tasks, and responsibilities that will be used to... effectiveness of the project and to provide information that will be used for the adaptive management program. ...
33 CFR 385.11 - Implementation process for projects.
Code of Federal Regulations, 2014 CFR
2014-07-01
... figure 1 in appendix A of this part. Typical steps in this process involve: (a) Project Management Plan. The Project Management Plan describes the activities, tasks, and responsibilities that will be used to... effectiveness of the project and to provide information that will be used for the adaptive management program. ...
33 CFR 385.11 - Implementation process for projects.
Code of Federal Regulations, 2012 CFR
2012-07-01
... figure 1 in Appendix A of this part. Typical steps in this process involve: (a) Project Management Plan. The Project Management Plan describes the activities, tasks, and responsibilities that will be used to... effectiveness of the project and to provide information that will be used for the adaptive management program. ...
33 CFR 385.11 - Implementation process for projects.
Code of Federal Regulations, 2013 CFR
2013-07-01
... figure 1 in Appendix A of this part. Typical steps in this process involve: (a) Project Management Plan. The Project Management Plan describes the activities, tasks, and responsibilities that will be used to... effectiveness of the project and to provide information that will be used for the adaptive management program. ...
Evaluation of target efficiencies for solid-liquid separation steps in biofuels production.
Kochergin, Vadim; Miller, Keith
2011-01-01
Development of liquid biofuels has entered a new phase of large-scale pilot demonstration. A number of plants that are in operation or under construction face the task of addressing the engineering challenges of creating a viable plant design, scaling up, and optimizing various unit operations. It is well known that separation technologies account for 50-70% of both capital and operating cost. Additionally, reduction of environmental impact creates technological challenges that increase project cost without adding to the bottom line. Different technologies vary in terms of selection of unit operations; however, solid-liquid separations are likely to be a major contributor to the overall project cost. Despite the differences in pretreatment approaches, similar challenges arise for solid-liquid separation unit operations. A typical process for ethanol production from biomass includes several solid-liquid separation steps, depending on which particular stream is targeted for downstream processing. The nature of biomass-derived materials makes it either difficult or uneconomical to accomplish complete separation in a single step. Therefore, setting realistic efficiency targets for solid-liquid separations is an important task that influences overall process recovery and economics. Experimental data will be presented showing typical characteristics for pretreated cane bagasse at various stages of processing into cellulosic ethanol. Results of generic material balance calculations will be presented to illustrate the influence of separation target efficiencies on overall process recoveries and characteristics of waste streams.
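A minimal material-balance sketch of the central point, with invented target efficiencies: overall solids recovery is the product of the per-step target efficiencies, so modest per-step losses compound quickly.

```python
# Minimal material-balance sketch: overall recovery across a train of
# solid-liquid separation steps is the product of the per-step target
# efficiencies. The values below are illustrative, not measured data.
step_efficiencies = [0.95, 0.90, 0.92]

overall = 1.0
for eff in step_efficiencies:
    overall *= eff
print(f"overall recovery: {overall:.1%}")  # ~78.7% for the values above

# Raising each step's target to 98% lifts the overall recovery above 94%.
```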
Direct coal liquefaction process
Rindt, John R.; Hetland, Melanie D.
1993-01-01
An improved multistep liquefaction process for organic carbonaceous matter which produces a virtually completely solvent-soluble carbonaceous liquid product. The solubilized product may be more amenable to further processing than liquid products produced by current methods. In the initial processing step, the finely divided organic carbonaceous material is treated with a hydrocarbonaceous pasting solvent containing from 10% to 100% by weight process-derived phenolic species at a temperature within the range of 300 °C to 400 °C for typically from 2 minutes to 120 minutes in the presence of a carbon monoxide reductant and an optional hydrogen sulfide reaction promoter in an amount ranging from 0 to 10% by weight of the moisture- and ash-free organic carbonaceous material fed to the system. As a result, hydrogen is generated via the water-gas shift reaction at a rate necessary to prevent condensation reactions. In a second step, the reaction product of the first step is hydrogenated.
Direct coal liquefaction process
Rindt, J.R.; Hetland, M.D.
1993-10-26
An improved multistep liquefaction process for organic carbonaceous matter which produces a virtually completely solvent-soluble carbonaceous liquid product. The solubilized product may be more amenable to further processing than liquid products produced by current methods. In the initial processing step, the finely divided organic carbonaceous material is treated with a hydrocarbonaceous pasting solvent containing from 10% to 100% by weight process-derived phenolic species at a temperature within the range of 300 °C to 400 °C for typically from 2 minutes to 120 minutes in the presence of a carbon monoxide reductant and an optional hydrogen sulfide reaction promoter in an amount ranging from 0 to 10% by weight of the moisture- and ash-free organic carbonaceous material fed to the system. As a result, hydrogen is generated via the water-gas shift reaction at a rate necessary to prevent condensation reactions. In a second step, the reaction product of the first step is hydrogenated.
Art Therapy on a Hospital Burn Unit: A Step towards Healing and Recovery.
ERIC Educational Resources Information Center
Russel, Johanna
1995-01-01
Describes how art therapy can benefit patients hospitalized due to severe burns, who suffer psychological as well as physical trauma. Outlines the psychological phases, identifies how burn patients typically experience their healing process, and discusses how art therapy can assist the patient at each stage of the recovery process. (JPS)
Overlap junctions for high coherence superconducting qubits
NASA Astrophysics Data System (ADS)
Wu, X.; Long, J. L.; Ku, H. S.; Lake, R. E.; Bal, M.; Pappas, D. P.
2017-07-01
Fabrication of sub-micron Josephson junctions is demonstrated using standard processing techniques for high-coherence superconducting qubits. These junctions are made in two separate lithography steps with normal-angle evaporation. Most significantly, this work demonstrates that it is possible to achieve high coherence with junctions formed on aluminum surfaces cleaned in situ by Ar plasma before junction oxidation. This method eliminates the angle-dependent shadow masks typically used for small junctions and is therefore conducive to the implementation of typical methods for improving margins and yield using conventional CMOS processing. The current method uses electron-beam lithography and an additive process to define the top and bottom electrodes. Extension of this work to optical lithography and subtractive processes is discussed.
Review of Manganese Processing for Production of TRIP/TWIP Steels, Part 2: Reduction Studies
NASA Astrophysics Data System (ADS)
Elliott, R.; Coley, K.; Mostaghel, S.; Barati, M.
2018-02-01
Production of ultrahigh-manganese steels is expected to result in a significant increase in demand for low-carbon (LC) ferromanganese (FeMn) and silicomanganese (SiMn). Current manganese processing techniques are energy intensive and typically yield a high-carbon product. The present work therefore reviews the available literature regarding carbothermic reduction of Mn oxides and ores, with the objective of identifying opportunities for future process development to mitigate the cost of LC FeMn and SiMn. In general, there is consensus that carbothermic reduction of Mn oxides and ores is limited by gasification of carbon. Conditions which enhance or bypass this step (e.g., by application of CH4) show higher rates of reduction at lower temperatures. This phenomenon has potential application in solid-state reduction of Mn ore. Other avenues for process development include optimization of the prereduction step in conventional FeMn production and metallothermic reduction as a secondary reduction step.
Wang, Chen; Lv, Shidong; Wu, Yuanshuang; Lian, Ming; Gao, Xuemei; Meng, Qingxiong
2016-10-01
Biluochun is a typical non-fermented tea, famous in China for its unique aroma. Few studies have evaluated the effect of the manufacturing process on the formation and content of its aroma. The volatile components were extracted at different manufacturing process steps of Biluochun green tea using fully automated headspace solid-phase microextraction (HS-SPME) and further characterised by gas chromatography-mass spectrometry (GC-MS). Among 67 volatile components collected, the fractions of linalool oxides, β-ionone, phenylacetaldehyde, aldehydes, ketones, and nitrogen compounds increased, while alcohols and hydrocarbons declined during the manufacturing process. The aroma compounds decreased the most during the drying steps. We identified a number of significantly changed components that can be used as markers for quality control during the production process of Biluochun. The drying step played a major role in the aroma formation of green tea products and should be the most important step for quality control.
Identifying typical patterns of vulnerability: A 5-step approach based on cluster analysis
NASA Astrophysics Data System (ADS)
Sietz, Diana; Lüdeke, Matthias; Kok, Marcel; Lucas, Paul; Carsten, Walther; Janssen, Peter
2013-04-01
Specific processes that shape the vulnerability of socio-ecological systems to climate, market, and other stresses derive from diverse background conditions. Within the multitude of vulnerability-creating mechanisms, distinct processes recur in various regions, inspiring research on typical patterns of vulnerability. The vulnerability patterns display typical combinations of the natural and socio-economic properties that shape a system's vulnerability to particular stresses. Based on the identification of a limited number of vulnerability patterns, pattern analysis provides an efficient approach to improving our understanding of vulnerability and decision-making for vulnerability reduction. However, current pattern analyses often miss explicit descriptions of their methods and pay insufficient attention to the validity of their groupings. Therefore, the question arises as to how we can identify typical vulnerability patterns in order to enhance our understanding of a system's vulnerability to stresses. A cluster-based pattern recognition applied at global and local levels is scrutinised with a focus on an applicable methodology and practicable insights. Taking the example of drylands, this presentation demonstrates the conditions necessary to identify typical vulnerability patterns. They are summarised in five methodological steps: the elicitation of relevant cause-effect hypotheses, the quantitative indication of mechanisms, an evaluation of robustness, a validation, and a ranking of the identified patterns. Reflecting scale-dependent opportunities, a global study is able to support decision-making with insights into the up-scaling of interventions when available funds are limited. In contrast, local investigations encourage an outcome-based validation. This constitutes a crucial step in establishing the credibility of the patterns and hence their suitability for informing extension services and individual decisions. In this respect, working at the local level provides a clear advantage since, to a large extent, limitations in globally available observational data constrain such a validation on the global scale. Overall, the five steps are outlined in detail in order to facilitate and motivate the application of pattern recognition in other research studies concerned with vulnerability analysis, including future applications to different vulnerability frameworks. Such applications could promote the refinement of mechanisms in specific contexts and advance methodological adjustments. This would further increase the value of identifying typical patterns in the properties of socio-ecological systems for an improved understanding and management of the relation between these systems and particular stresses.
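A schematic sketch of the clustering core of the approach (step two of the five), assuming standardized vulnerability indicators and k-means; the indicator data, cluster count, and variable names are illustrative, not the study's dataset.

```python
# Schematic sketch of cluster-based pattern recognition: regions described
# by vulnerability indicators are standardized and grouped with k-means;
# each cluster is a candidate "typical pattern of vulnerability".
# Indicator data and the cluster count are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# rows: regions; columns: e.g. water stress, soil degradation, poverty index
indicators = rng.random((200, 3))

X = StandardScaler().fit_transform(indicators)
labels = KMeans(n_clusters=4, n_init=20, random_state=0).fit_predict(X)

for k in range(4):
    print(f"pattern {k}: {np.sum(labels == k)} regions")
```

Robustness (the third step) would then be probed by re-running the clustering with different seeds and cluster counts and checking that the groupings persist.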
Cleanliness verification process at Martin Marietta Astronautics
NASA Astrophysics Data System (ADS)
King, Elizabeth A.; Giordano, Thomas J.
1994-06-01
The Montreal Protocol and the 1990 Clean Air Act Amendments mandate that CFC-113, other chlorinated fluorocarbons (CFCs), and 1,1,1-trichloroethane (TCA) be banned from production after December 31, 1995. In response to increasing pressures, the Air Force has formulated policy that prohibits purchase of these solvents for Air Force use after April 1, 1994. In response to the Air Force policy, Martin Marietta Astronautics is in the process of eliminating all CFCs and TCA from use at the Engineering Propulsion Laboratory (EPL), located on Air Force property PJKS. Gross and precision cleaning operations are currently performed on spacecraft components at EPL. The final step of the operation is a rinse with a solvent, typically CFC-113. This solvent is then analyzed for nonvolatile residue (NVR), particle count, and total filterable solids (TFS) to determine the cleanliness of the parts. The CFC-113 used in this process must be replaced in response to the above policies. Martin Marietta Astronautics, under contract to the Air Force, is currently evaluating and testing alternatives for a cleanliness verification solvent. Completion of testing is scheduled for May 1994. Evaluation of the alternative solvents follows a three-step approach. The first is initial testing of solvents selected through literature searches and analysis. The second is detailed testing of the top candidates from the initial test phase. The final step is implementation and validation of the chosen alternative(s). Testing will include contaminant removal, nonvolatile residue, material compatibility, and propellant compatibility. Typical materials and contaminants will be tested with a wide range of solvents. Final results of the three steps will be presented as well as the implementation plan for solvent replacement.
Danker, Jared F; Anderson, John R
2007-04-15
In naturalistic algebra problem solving, the cognitive processes of representation and retrieval are typically confounded, in that transformations of the equations typically require retrieval of mathematical facts. Previous work using cognitive modeling has associated activity in the prefrontal cortex with the retrieval demands of algebra problems and activity in the posterior parietal cortex with the transformational demands of algebra problems, but these regions tend to behave similarly in response to task manipulations (Anderson, J.R., Qin, Y., Sohn, M.-H., Stenger, V.A., Carter, C.S., 2003. An information-processing model of the BOLD response in symbol manipulation tasks. Psychon. Bull. Rev. 10, 241-261; Qin, Y., Carter, C.S., Silk, E.M., Stenger, A., Fissell, K., Goode, A., Anderson, J.R., 2004. The change of brain activation patterns as children learn algebra equation solving. Proc. Natl. Acad. Sci. 101, 5686-5691). With this study we attempt to isolate activity in these two regions by using a multi-step algebra task in which transformation (parietal) is manipulated in the first step and retrieval (prefrontal) is manipulated in the second step. Counter to our initial predictions, both brain regions were differentially active during both steps. We designed two cognitive models, one encompassing our initial assumptions and one in which both processes were engaged during both steps. The first model provided a poor fit to the behavioral and neural data, while the second model fit both well. This simultaneously emphasizes the strong relationship between retrieval and representation in mathematical reasoning and demonstrates that cognitive modeling can serve as a useful tool for understanding task manipulations in neuroimaging experiments.
Best Practices In Overset Grid Generation
NASA Technical Reports Server (NTRS)
Chan, William M.; Gomez, Reynaldo J., III; Rogers, Stuart E.; Buning, Pieter G.; Kwak, Dochan (Technical Monitor)
2002-01-01
Grid generation for overset grids on complex geometry can be divided into four main steps: geometry processing, surface grid generation, volume grid generation and domain connectivity. For each of these steps, the procedures currently practiced by experienced users are described. Typical problems encountered are also highlighted and discussed. Most of the guidelines are derived from experience on a variety of problems including space launch and return vehicles, subsonic transports with propulsion and high lift devices, supersonic vehicles, rotorcraft vehicles, and turbomachinery.
2016-01-01
Family Policy’s SECO program, which reviewed existing SECO metrics and data sources, as well as analytic methods of previous research, to determine...process that requires an iterative cycle of assessment of collected data (typically, but not solely, quantitative data) to determine whether SECO...RAND suggests five steps to develop and implement the SECO internal monitoring system: Step 1. Describe the logic or theory of how activities are
Copper-catalyzed decarboxylative trifluoromethylation of allylic bromodifluoroacetates.
Ambler, Brett R; Altman, Ryan A
2013-11-01
The development of new synthetic fluorination reactions has important implications in medicinal, agricultural, and materials chemistries. Given the prevalence and accessibility of alcohols, methods to convert alcohols to trifluoromethanes are desirable. However, this transformation typically requires four-step processes, specialty chemicals, and/or stoichiometric metals to access the trifluoromethyl-containing product. A two-step copper-catalyzed decarboxylative protocol for converting allylic alcohols to trifluoromethanes is reported. Preliminary mechanistic studies distinguish this reaction from previously reported Cu-mediated reactions.
ASPECTS: an automation-assisted SPE method development system.
Li, Ming; Chou, Judy; King, Kristopher W; Yang, Liyu
2013-07-01
A typical conventional SPE method development (MD) process usually involves deciding the chemistry of the sorbent and eluent based on information about the analyte; experimentally preparing and trying out various combinations of adsorption chemistry and elution conditions; quantitatively evaluating the various conditions; and comparing quantitative results from all combinations of conditions to select the best condition for method qualification. The second and fourth steps have mostly been performed manually until now. We developed an automation-assisted system that expedites the conventional SPE MD process by automating 99% of the second step, and expedites the fourth step by automatically processing the results data and presenting it to the analyst in a user-friendly format. The automation-assisted SPE MD system greatly reduces the manual labor in SPE MD work, prevents analyst errors from causing misinterpretation of quantitative results, and shortens data analysis and interpretation time.
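The screening-and-comparison logic being automated can be sketched as follows; the sorbent and eluent lists and the recovery numbers are invented placeholders, with the scoring function standing in for the instrument quantitation step.

```python
# Toy sketch of the SPE screening logic: try every sorbent/eluent
# combination, score recoveries, and report the best condition.
# Chemistries and recovery values are illustrative placeholders.
from itertools import product

sorbents = ["C18", "HLB", "MCX"]
eluents = ["MeOH", "ACN", "MeOH+2%NH4OH"]

def measured_recovery(sorbent, eluent):
    # Placeholder for the quantitation of each well in the filterplate.
    fake = {("MCX", "MeOH+2%NH4OH"): 0.97}
    return fake.get((sorbent, eluent), 0.60)

results = {(s, e): measured_recovery(s, e) for s, e in product(sorbents, eluents)}
best = max(results, key=results.get)
print(f"best condition: {best}, recovery {results[best]:.0%}")
```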
Acquisition and Post-Processing of Immunohistochemical Images.
Sedgewick, Jerry
2017-01-01
Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
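As a concrete instance of one listed correction, flatfield correction, here is a minimal numpy sketch with synthetic stand-ins for the acquired and blank (flatfield) frames; the division removes the illumination gradient up to a constant factor.

```python
# Minimal flatfield-correction sketch: dividing the raw image by a
# normalized flatfield frame evens out the illumination. The arrays are
# synthetic stand-ins for acquired images.
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(100, 200, size=(512, 512))              # "true" signal
flat = np.linspace(0.7, 1.3, 512)[None, :].repeat(512, 0)   # illumination gradient

raw = scene * flat                       # what the camera records
corrected = raw * flat.mean() / flat     # flatfield correction
print(np.allclose(corrected, scene * flat.mean()))  # uniform up to a constant
```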
Jeong, Chanyoung; Choi, Chang-Hwan
2012-02-01
Conventional electrochemical anodizing processes of metals such as aluminum typically produce planar and homogeneous nanopore structures. If hydrophobically treated, such 2D planar and interconnected pore structures typically result in lower contact angle and larger contact angle hysteresis than 3D disconnected pillar structures and, hence, exhibit inferior superhydrophobic efficiency. In this study, we demonstrate for the first time that the anodizing parameters can be engineered to design novel pillar-on-pore (POP) hybrid nanostructures directly in a simple one-step fabrication process so that superior surface superhydrophobicity can also be realized effectively from the electrochemical anodization process. On the basis of the characteristic of forming a self-ordered porous morphology in a hexagonal array, the modulation of anodizing voltage and duration enabled the formulation of the hybrid-type nanostructures having controlled pillar morphology on top of a porous layer in both mild and hard anodization modes. The hybrid nanostructures of the anodized metal oxide layer initially enhanced the surface hydrophilicity significantly (i.e., superhydrophilic). However, after a hydrophobic monolayer coating, such hybrid nanostructures then showed superior superhydrophobic nonwetting properties not attainable by the plain nanoporous surfaces produced by conventional anodization conditions. The well-regulated anodization process suggests that electrochemical anodizing can expand its usefulness and efficacy to render various metallic substrates with great superhydrophilicity or -hydrophobicity by directly realizing pillar-like structures on top of a self-ordered nanoporous array through a simple one-step fabrication procedure.
Strategies for Derisking Translational Processes for Biomedical Technologies.
Abou-El-Enein, Mohamed; Duda, Georg N; Gruskin, Elliott A; Grainger, David W
2017-02-01
Inefficient translational processes for technology-oriented biomedical research have led to some prominent and frequent failures in the development of many leading drug candidates, several designated investigational drugs, and some medical devices, as well as documented patient harm and postmarket product withdrawals. Derisking this process, particularly in the early stages, should increase translational efficiency and streamline resource utilization, especially in an academic setting. In this opinion article, we identify a 12-step guideline for reducing risks typically associated with translating medical technologies as they move toward prototypes, preclinical proof of concept, and possible clinical testing. Integrating the described 12-step process should prove valuable for improving how early-stage academic biomedical concepts are cultivated, culled, and manicured toward intended clinical applications.
Ethnomathematics elements in Batik Bali using backpropagation method
NASA Astrophysics Data System (ADS)
Lestari, Mei; Irawan, Ari; Rahayu, Wanti; Wayan Parwati, Ni
2018-05-01
Batik is one of the traditional arts that has been designated by UNESCO as Indonesian cultural heritage. Batik has many varieties and motifs; each motif has its own uniqueness yet appears similar to the others, which makes identification difficult. This study aims to develop an application that can identify typical Batik Bali with ethnomathematics elements in it. Ethnomathematics is a field of study that shows the relation between culture and mathematical concepts. Ethnomathematics in Batik Bali is mostly geometrical, in line with the strong Balinese cultural element. The identification process uses the backpropagation method. The steps of the backpropagation method are image processing (including scaling and thresholding of the image) followed by feeding the processed image into an artificial neural network. This study resulted in accurate identification of Batik Bali that has ethnomathematics elements in it.
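A hedged sketch of the described pipeline, preprocessing (scaling and thresholding) followed by a backpropagation-trained network; the image data, sizes, and labels below are synthetic placeholders, and scikit-learn's MLPClassifier stands in for the paper's network.

```python
# Hedged sketch: scale and threshold images, flatten to feature vectors,
# then train a backpropagation network. Data and labels are synthetic
# stand-ins, not the paper's batik dataset.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, side = 120, 32
images = rng.random((n, side, side))                # stand-ins for scaled images
X = (images > 0.5).astype(float).reshape(n, -1)     # thresholding + flattening
y = rng.integers(0, 2, n)                           # 1 = Batik Bali, 0 = other

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```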
Biomass Processing using Ionic Liquids for Jet Fuel Production
2014-04-09
lignocellulosic biomass. Biomass consists predominantly of three biopolymers—lignin, hemicellulose, and cellulose. For fuel production, it is necessary to...hydrocarbons. The lignin and cellulose, however, have very low solubility in conventional solvents, making processing difficult. Typically a pretreatment step...is used to break up the lignin and make the cellulose accessible to further hydrolysis to glucose. Pretreatment, however, is one of the most
The establishment of science-based long-term environmental management goals is just the first step in what is typically a decades-long process to restore estuarine and coastal ecosystems. In addition to adequate monitoring and reporting, maintaining public interest, financial sup...
Process and Product: Creating Stories with Deaf Students
ERIC Educational Resources Information Center
Enns, Catherine; Hall, Ricki; Isaac, Becky; MacDonald, Patricia
2007-01-01
This article describes the implementation of one element of an adapted language arts curriculum for Deaf students in a bilingual (American Sign Language and English) educational setting. It examines the implementation of writing workshops in three elementary classrooms in a school for Deaf students. The typical steps of preparing/planning,…
When New Boundaries Abound: A Systematic Approach to Redistricting.
ERIC Educational Resources Information Center
Creighton, Roger L.; Irwin, Armond J.
1994-01-01
A systematic approach to school redistricting that was developed over the past half-dozen years utilizes a computer. Crucial to achieving successful results are accuracy of data, enrollment forecasting, and citizen participation. Outlines the major steps of a typical redistricting study. One figure illustrates the redistricting process. (MLF)
Fermentation technologies for ethanol production from wheat straw by a recombinant bacterium
USDA-ARS?s Scientific Manuscript database
Wheat straw, a globally abundant byproduct of wheat production, contains about 70% carbohydrate that could potentially be used as a low cost feedstock for production of fuel ethanol. Typically four process steps are involved in the production of ethanol from any lignocellulosic feedstock – pretreat...
Frazier, Zachary
2012-01-01
Particle-based Brownian dynamics simulations offer the opportunity not only to simulate diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We can show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modelling. PMID:22697237
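For orientation, the underlying Brownian-dynamics displacement that such algorithms build on can be sketched as below; the collision/reaction test noted in the comment is where the paper's detailed-balance-preserving acceptance step would go, and all parameter values are illustrative.

```python
# Minimal Brownian-dynamics step: each coordinate moves by a Gaussian
# displacement of variance 2*D*dt. A reaction-aware scheme would test
# trial positions for collisions before accepting them. Parameters are
# illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
D, dt = 1.0, 1e-3            # diffusion coefficient, time step (arbitrary units)
pos = rng.random((100, 3))   # 100 particles in 3D

for _ in range(1000):
    trial = pos + rng.normal(0.0, np.sqrt(2 * D * dt), size=pos.shape)
    # A reaction-diffusion scheme would detect collisions among trial
    # positions here and accept/reject moves so detailed balance holds.
    pos = trial
```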
The Package-Based Development Process in the Flight Dynamics Division
NASA Technical Reports Server (NTRS)
Parra, Amalia; Seaman, Carolyn; Basili, Victor; Kraft, Stephen; Condon, Steven; Burke, Steven; Yakimovich, Daniil
1997-01-01
The Software Engineering Laboratory (SEL) has been operating for more than two decades in the Flight Dynamics Division (FDD) and has adapted to the constant movement of the software development environment. The SEL's Improvement Paradigm shows that process improvement is an iterative process: Understanding, Assessing, and Packaging are the three steps that are followed in this cyclical paradigm. As the improvement process cycles back to the first step, after having packaged some experience, the level of understanding will be greater. In the past, products resulting from the packaging step have been large process documents, guidebooks, and training programs. As the technical world moves toward more modularized software, we have moved toward more modularized software development process documentation; as such, the products of the packaging step are becoming smaller and more frequent. In this manner, the QIP takes on a more spiral approach rather than a waterfall approach. This paper describes the state of the FDD in the area of software development processes, as revealed through the understanding and assessing activities conducted by the COTS study team. The insights presented include: (1) a characterization of a typical FDD Commercial Off the Shelf (COTS) intensive software development life-cycle process, (2) lessons learned through the COTS study interviews, and (3) a description of changes in the SEL due to the changing and accelerating nature of software development in the FDD.
Recent developments in membrane-based separations in biotechnology processes: review.
Rathore, A S; Shirke, A
2011-01-01
Membrane-based separations are the most ubiquitous unit operations in biotech processes. There are several key reasons for this. First, they can be used with a large variety of applications including clarification, concentration, buffer exchange, purification, and sterilization. Second, they are available in a variety of formats, such as depth filtration, ultrafiltration, diafiltration, nanofiltration, reverse osmosis, and microfiltration. Third, they are simple to operate and are generally robust toward normal variations in feed material and operating parameters. Fourth, membrane-based separations typically require lower capital cost when compared to other processing options. As a result of these advantages, a typical biotech process has anywhere from 10 to 20 membrane-based separation steps. In this article we review the major developments that have occurred on this topic with a focus on developments in the last 5 years.
AMPS data management concepts. [Atmospheric, Magnetospheric and Plasma in Space experiment
NASA Technical Reports Server (NTRS)
Metzelaar, P. N.
1975-01-01
Five typical AMPS experiments were formulated to allow simulation studies to verify data management concepts. Design studies were conducted to analyze these experiments in terms of the applicable procedures, data processing and displaying functions. Design concepts for AMPS data management system are presented which permit both automatic repetitive measurement sequences and experimenter-controlled step-by-step procedures. Extensive use is made of a cathode ray tube display, the experimenters' alphanumeric keyboard, and the computer. The types of computer software required by the system and the possible choices of control and display procedures available to the experimenter are described for several examples. An electromagnetic wave transmission experiment illustrates the methods used to analyze data processing requirements.
Application of an enhanced discrete element method to oil and gas drilling processes
NASA Astrophysics Data System (ADS)
Ubach, Pere Andreu; Arrufat, Ferran; Ring, Lev; Gandikota, Raju; Zárate, Francisco; Oñate, Eugenio
2016-03-01
The authors present results on the use of the discrete element method (DEM) for the simulation of drilling processes typical in the oil and gas exploration industry. The numerical method uses advanced DEM techniques using a local definition of the DEM parameters and combined FEM-DEM procedures. This paper presents a step-by-step procedure to build a DEM model for analysis of the soil region coupled to a FEM model for discretizing the drilling tool that reproduces the drilling mechanics of a particular drill bit. A parametric study has been performed to determine the model parameters in order to maintain accurate solutions with reduced computational cost.
ERIC Educational Resources Information Center
Amaral, Luiz; Meurers, Detmar; Ziai, Ramon
2011-01-01
Intelligent language tutoring systems (ILTS) typically analyze learner input to diagnose learner language properties and provide individualized feedback. Despite a long history of ILTS research, such systems are virtually absent from real-life foreign language teaching (FLT). Taking a step toward more closely linking ILTS research to real-life…
Home Language Survey Data Quality Self-Assessment. REL 2017-198
ERIC Educational Resources Information Center
Henry, Susan F.; Mello, Dan; Avery, Maria-Paz; Parker, Caroline; Stafford, Erin
2017-01-01
Most state departments of education across the United States recommend or require that districts use a home language survey as the first step in a multistep process of identifying students who qualify for English learner student services. School districts typically administer the home language survey to parents and guardians during a student's…
NASA Astrophysics Data System (ADS)
Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.
2012-10-01
We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.
Nonlinear fluctuations-induced rate equations for linear birth-death processes
NASA Astrophysics Data System (ADS)
Honkonen, J.
2008-05-01
The Fock-space approach to the solution of master equations for one-step Markov processes is reconsidered. It is shown that in birth-death processes with an absorbing state at the bottom of the occupation-number spectrum and an occupation-number-independent annihilation probability, occupation-number fluctuations give rise to rate equations drastically different from the polynomial form typical of birth-death processes. The fluctuation-induced rate equations with the characteristic exponential terms are derived for Mikhailov's ecological model and Lanchester's model of modern warfare.
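For reference, the generic one-step birth-death master equation that such Fock-space treatments start from, with birth rate λ_n and death rate μ_n at occupation number n, reads

```latex
% One-step birth-death master equation; \lambda_n and \mu_n are the
% birth and death rates at occupation number n.
\frac{\mathrm{d}P_n}{\mathrm{d}t}
  = \lambda_{n-1}P_{n-1} + \mu_{n+1}P_{n+1} - \left(\lambda_n+\mu_n\right)P_n
```

Its naive mean-field limit gives the polynomial rate equation dn̄/dt = λ_n̄ − μ_n̄ that the fluctuation-induced exponential corrections discussed above modify.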
Charge transfer to ground-state ions produces free electrons
You, D.; Fukuzawa, H.; Sakakibara, Y.; Takanashi, T.; Ito, Y.; Maliyar, G. G.; Motomura, K.; Nagaya, K.; Nishiyama, T.; Asa, K.; Sato, Y.; Saito, N.; Oura, M.; Schöffler, M.; Kastirke, G.; Hergenhahn, U.; Stumpf, V.; Gokhberg, K.; Kuleff, A. I.; Cederbaum, L. S.; Ueda, K
2017-01-01
Inner-shell ionization of an isolated atom typically leads to Auger decay. In an environment, for example, a liquid or a van der Waals bonded system, this process will be modified, and becomes part of a complex cascade of relaxation steps. Understanding these steps is important, as they determine the production of slow electrons and singly charged radicals, the most abundant products in radiation chemistry. In this communication, we present experimental evidence for a so-far unobserved, but potentially very important step in such relaxation cascades: Multiply charged ionic states after Auger decay may partially be neutralized by electron transfer, simultaneously evoking the creation of a low-energy free electron (electron transfer-mediated decay). This process is effective even after Auger decay into the dicationic ground state. In our experiment, we observe the decay of Ne2+ produced after Ne 1s photoionization in Ne–Kr mixed clusters. PMID:28134238
In-cell overlay metrology by using optical metrology tool
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, DongYoung; Oh, Eungryong; Choi, Ahlin; Park, Hyowon; Liang, Waley; Choi, DongSub; Kim, Nakyoon; Lee, Jeongpyo; Pandev, Stilian; Jeon, Sanghuck; Robinson, John C.
2018-03-01
Overlay is one of the most critical process control steps in semiconductor manufacturing technology. A typical advanced scheme includes an overlay feedback loop based on after-litho optical imaging overlay metrology on scribeline targets. The after-litho control loop typically involves high-frequency sampling: every lot or nearly every lot. An after-etch overlay metrology step is often included, at a lower sampling frequency, in order to characterize and compensate for bias. The after-etch metrology step often involves CD-SEM metrology, in this case in-cell and on-device. This work explores an alternative approach using spectroscopic ellipsometry (SE) metrology and a machine learning analysis technique. Advanced 1x nm DRAM wafers were prepared, including both nominal (POR) wafers with mean overlay offsets and DOE wafers with intentional across-wafer overlay modulation. After-litho metrology was measured using optical imaging metrology, as well as after-etch metrology using both SE and CD-SEM for comparison. We investigate two types of machine learning techniques with SE data, model-less and model-based, showing excellent performance for after-etch in-cell on-device overlay metrology.
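A schematic sketch of the "model-less" machine-learning idea: train a regression model to map measured SE spectra directly to reference overlay values, then predict overlay on unseen sites. The spectra, reference values, and model choice below are synthetic stand-ins, not the actual tool or dataset.

```python
# Schematic "model-less" sketch: regress reference overlay values directly
# on SE spectra, then predict on held-out sites. Data are synthetic
# stand-ins for measured spectra and reference overlay.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
spectra = rng.random((300, 150))                       # 300 sites x 150 wavelengths
overlay_ref = spectra @ rng.normal(size=150) * 0.01    # nm, synthetic reference

Xtr, Xte, ytr, yte = train_test_split(spectra, overlay_ref, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(Xtr, ytr)
print(f"R^2 on held-out sites: {model.score(Xte, yte):.2f}")
```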
Clegg, Paul S; Tavacoli, Joe W; Wilde, Pete J
2016-01-28
Multiple emulsions have great potential for application in food science as a means to reduce fat content or for controlled encapsulation and release of actives. However, neither production nor stability is straightforward. Typically, multiple emulsions are prepared via two emulsification steps and a variety of approaches have been deployed to give long-term stability. It is well known that multiple emulsions can be prepared in a single step by harnessing emulsion inversion, although the resulting emulsions are usually short lived. Recently, several contrasting methods have been demonstrated which give rise to stable multiple emulsions via one-step production processes. Here we review the current state of microfluidic, polymer-stabilized and particle-stabilized approaches; these rely on phase separation, the role of electrolyte and the trapping of solvent with particles respectively.
Miyata, Kazuki; Tracey, John; Miyazawa, Keisuke; Haapasilta, Ville; Spijker, Peter; Kawagoe, Yuta; Foster, Adam S; Tsukamoto, Katsuo; Fukuma, Takeshi
2017-07-12
The microscopic understanding of crystal growth and dissolution processes has been greatly advanced by the direct imaging of nanoscale step flows by atomic force microscopy (AFM), optical interferometry, and X-ray microscopy. However, one of the most fundamental aspects that govern their kinetics, namely, atomistic events at the step edges, has not been well understood. In this study, we have developed high-speed frequency modulation AFM (FM-AFM) and enabled true atomic-resolution imaging in liquid at ∼1 s/frame, which is ∼50 times faster than conventional FM-AFM. With the developed AFM, we have directly imaged subnanometer-scale surface structures around the moving step edges of calcite during its dissolution in water. The obtained images reveal that a transition region with a typical width of a few nanometers is formed along the step edges. Building upon insight from previous studies, our simulations suggest that the transition region is most likely a Ca(OH)2 monolayer formed as an intermediate state in the dissolution process. On the basis of this finding, we improve our understanding of the atomistic dissolution model of calcite in water. These results open up a wide range of future applications of high-speed FM-AFM to studies of various dynamic processes at solid-liquid interfaces with true atomic resolution.
Belić, Domagoj; Shawrav, Mostafa M; Bertagnolli, Emmerich
2017-01-01
This work presents a highly effective approach for the chemical purification of directly written 2D and 3D gold nanostructures suitable for plasmonics, biomolecule immobilisation, and nanoelectronics. Gold nano- and microstructures can be fabricated by one-step direct-write lithography process using focused electron beam induced deposition (FEBID). Typically, as-deposited gold nanostructures suffer from a low Au content and unacceptably high carbon contamination. We show that the undesirable carbon contamination can be diminished using a two-step process – a combination of optimized deposition followed by appropriate postdeposition cleaning. Starting from the common metal-organic precursor Me2-Au-tfac, it is demonstrated that the Au content in pristine FEBID nanostructures can be increased from 30 atom % to as much as 72 atom %, depending on the sustained electron beam dose. As a second step, oxygen-plasma treatment is established to further enhance the Au content in the structures, while preserving their morphology to a high degree. This two-step process represents a simple, feasible and high-throughput method for direct writing of purer gold nanostructures that can enable their future use for demanding applications. PMID:29259868
From Rejected to Accepted: Part 1--Strategies for Revising and Resubmitting a Manuscript
ERIC Educational Resources Information Center
Stivers, Jan; Cramer, Sharon
2017-01-01
Manuscript rejection is a fact of life for academics, and should be seen as just one step in a process of revision and resubmission that typically results in publication. This two-part article offers suggestions to help authors take action on their rejected manuscripts, including analyzing reviewer feedback, revising judiciously, and making…
Williams, Calum; Rughoobur, Girish; Flewitt, Andrew J; Wilkinson, Timothy D
2016-11-10
A single-step fabrication method is presented for ultra-thin, linearly variable optical bandpass filters (LVBFs) based on a metal-insulator-metal arrangement using modified evaporation deposition techniques. This alternate process methodology offers reduced complexity and cost in comparison to conventional techniques for fabricating LVBFs. We are able to achieve linear variation of insulator thickness across a sample, by adjusting the geometrical parameters of a typical physical vapor deposition process. We demonstrate LVBFs with spectral selectivity from 400 to 850 nm based on Ag (25 nm) and MgF2 (75-250 nm). Maximum spectral transmittance is measured at ∼70% with a Q-factor of ∼20.
Brentner, Laura B; Eckelman, Matthew J; Zimmerman, Julie B
2011-08-15
The use of algae as a feedstock for biodiesel production is a rapidly growing industry, in the United States and globally. A life cycle assessment (LCA) is presented that compares various methods, either proposed or under development, for algal biodiesel to inform the most promising pathways for sustainable full-scale production. For this analysis, the system is divided into five distinct process steps: (1) microalgae cultivation, (2) harvesting and/or dewatering, (3) lipid extraction, (4) conversion (transesterification) into biodiesel, and (5) byproduct management. A number of technology options are considered for each process step and various technology combinations are assessed for their life cycle environmental impacts. The optimal option for each process step is selected yielding a best case scenario, comprised of a flat panel enclosed photobioreactor and direct transesterification of algal cells with supercritical methanol. For a functional unit of 10 GJ biodiesel, the best case production system yields a cumulative energy demand savings of more than 65 GJ, reduces water consumption by 585 m³ and decreases greenhouse gas emissions by 86% compared to a base case scenario typical of early industrial practices, highlighting the importance of technological innovation in algae processing and providing guidance on promising production pathways.
Fuhrman, Susan I.; Redfern, Mark S.; Jennings, J. Richard; Perera, Subashan; Nebes, Robert D.; Furman, Joseph M.
2013-01-01
Postural dual-task studies have demonstrated effects of various executive function components on gait and postural control in older adults. The purpose of the study was to explore the role of inhibition during lateral step initiation. Forty older adults participated (range 70–94 yr). Subjects stepped to the left or right in response to congruous and incongruous visual cues that consisted of left and right arrows appearing on left or right sides of a monitor. The timing of postural adjustments was identified by inflection points in the vertical ground reaction forces (VGRF) measured separately under each foot. Step responses could be classified into preferred and nonpreferred step behavior based on the number of postural adjustments that were made. Delays in onset of the first postural adjustment (PA1) and liftoff (LO) of the step leg during preferred steps progressively increased among the simple, choice, congruous, and incongruous tasks, indicating interference in processing the relevant visuospatial cue. Incongruous cues induced subjects to make more postural adjustments than they typically would (i.e., nonpreferred steps), representing errors in selection of the appropriate motor program. During these nonpreferred steps, the onset of the PA1 was earlier than during the preferred steps, indicating a failure to inhibit an inappropriate initial postural adjustment. The functional consequence of the additional postural adjustments was a delay in the LO compared with steps in which they did not make an error. These results suggest that deficits in inhibitory function may detrimentally affect step decision processing, by delaying voluntary step responses. PMID:23114211
High-throughput screening of chromatographic separations: IV. Ion-exchange.
Kelley, Brian D; Switzer, Mary; Bastek, Patrick; Kramarczyk, Jack F; Molnar, Kathleen; Yu, Tianning; Coffman, Jon
2008-08-01
Ion-exchange (IEX) chromatography steps are widely applied in protein purification processes because of their high capacity, selectivity, robust operation, and well-understood principles. Optimization of IEX steps typically involves resin screening and selection of the pH and counterion concentrations of the load, wash, and elution steps. Time and material constraints associated with operating laboratory columns often preclude evaluating more than 20-50 conditions during early stages of process development. To overcome this limitation, a high-throughput screening (HTS) system employing a robotic liquid handling system and 96-well filterplates was used to evaluate various operating conditions for IEX steps for monoclonal antibody (mAb) purification. A screening study for an adsorptive cation-exchange step evaluated eight different resins. Sodium chloride concentrations defining the operating boundaries of product binding and elution were established at four different pH levels for each resin. Adsorption isotherms were measured for 24 different pH and salt combinations for a single resin. An anion-exchange flowthrough step was then examined, generating data on mAb adsorption for 48 different combinations of pH and counterion concentration for three different resins. The mAb partition coefficients were calculated and used to estimate the characteristic charge of the resin-protein interaction. Host cell protein and residual Protein A impurity levels were also measured, providing information on selectivity within this operating window. The HTS system shows promise for accelerating process development of IEX steps, enabling rapid acquisition of large datasets addressing the performance of the chromatography step under many different operating conditions. (c) 2008 Wiley Periodicals, Inc.
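To illustrate the characteristic-charge estimate mentioned above, the sketch below fits the stoichiometric-displacement relation log Kp ≈ const − z·log(c_salt) to batch-binding data; the salt concentrations and partition coefficients here are invented placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical batch-binding data: partition coefficients Kp at several
# counterion concentrations for one resin/pH combination (assumed values).
c_salt = np.array([0.05, 0.10, 0.15, 0.20])   # mol/L
kp = np.array([120.0, 18.0, 5.5, 2.2])        # dimensionless

# Stoichiometric-displacement model: log10(Kp) = intercept - z*log10(c_salt);
# the characteristic charge z is minus the slope of the log-log fit.
slope, intercept = np.polyfit(np.log10(c_salt), np.log10(kp), 1)
print(f"estimated characteristic charge z = {-slope:.1f}")
```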
Hot rolling of thick uranium molybdenum alloys
DeMint, Amy L.; Gooch, Jack G.
2015-11-17
Disclosed herein are processes for hot rolling billets of uranium that have been alloyed with about ten weight percent molybdenum to produce cold-rollable sheets that are about one hundred mils thick. In certain embodiments, the billets have a thickness of about 7/8 inch or greater. Disclosed processes typically involve a rolling schedule that includes a light rolling pass and at least one medium rolling pass. Processes may also include reheating the rolling stock and using one or more heavy rolling passes, and may include an annealing step.
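A pass schedule of this kind can be sketched as a simple thickness recursion. The per-pass reduction fractions below are assumptions chosen only to show how one light pass followed by medium and heavy passes takes a 7/8-inch billet toward roughly one hundred mils; the patent does not disclose specific values.

```python
# Sketch of a hot-rolling pass schedule: one light pass, then medium and
# heavy passes. All reduction fractions are illustrative assumptions.
thickness = 0.875                              # inch (7/8-inch billet)
schedule = [0.10] + [0.20] * 3 + [0.35] * 3    # light, medium, heavy passes

for i, reduction in enumerate(schedule, start=1):
    thickness *= 1.0 - reduction
    print(f"pass {i}: {thickness * 1000:.0f} mils")
# final thickness lands near ~110 mils, i.e. a cold-rollable sheet
```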
Recent progress on understanding the mechanisms of amyloid nucleation.
Chatani, Eri; Yamamoto, Naoki
2018-04-01
Amyloid fibrils are supramolecular protein assemblies with a fibrous morphology and cross-β structure. The formation of amyloid fibrils typically follows a nucleation-dependent polymerization mechanism, in which a one-step nucleation scheme has been widely accepted. However, a variety of oligomers have been identified in the early stages of fibrillation, and a nucleated conformational conversion (NCC) mechanism, in which oligomers serve as a precursor of amyloid nucleation and convert to amyloid nuclei, has been proposed. This development has raised the need to consider more complicated multi-step nucleation processes in addition to the simplest one-step process, and evidence for the direct involvement of oligomers as nucleation precursors has been obtained both experimentally and theoretically. Interestingly, the NCC mechanism has some analogy with the two-step nucleation mechanism proposed for inorganic, organic, and protein crystals, although a more dramatic conformational conversion of proteins must be considered in amyloid nucleation. Clarifying the properties of the nucleation precursors of amyloid fibrils in detail, in comparison with those of crystals, will allow a better understanding of amyloid nucleation and pave the way for techniques to regulate it.
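For concreteness, the simplest one-step scheme can be written as an Oosawa-type kinetic model in which nuclei form from free monomer and fibrils grow by elongation; the NCC picture would add an oligomer pool between monomer and nucleus. The rate constants and nucleus size below are illustrative assumptions, not fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-step nucleation-dependent polymerization (Oosawa-type sketch):
# monomers m form nuclei at rate kn*m^nc, fibrils elongate at 2*kp*m*P.
m_tot, kn, kp, nc = 1.0, 1e-4, 5.0, 2   # all values assumed for illustration

def rhs(t, y):
    P, M = y                 # fibril number concentration, fibril mass
    m = m_tot - M            # free monomer
    return [kn * m**nc,      # primary nucleation
            2 * kp * m * P]  # elongation at both fibril ends

sol = solve_ivp(rhs, (0, 200), [0.0, 0.0])
print(f"fibril mass at t=200: {sol.y[1][-1]:.2f}")  # sigmoidal approach to m_tot
```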
Reinehr, Christian Oliveira; Treichel, Helen; Tres, Marcus Vinicius; Steffens, Juliana; Brião, Vandré Barbosa; Colla, Luciane Maria
2017-06-01
In this study, we developed a simplified method for producing, separating, and concentrating lipases derived from solid-state fermentation of agro-industrial residues by filamentous fungi. First, we used Aspergillus niger to produce lipases with hydrolytic activity. We analyzed the separation and concentration of enzymes using membrane separation processes. The sequential use of microfiltration and ultrafiltration processes made it possible to obtain concentrates with enzymatic activities much higher than those in the initial extract. The permeate flux was higher than 60 L/m²h during microfiltration using 20- and 0.45-µm membranes and during ultrafiltration using 100- and 50-kDa membranes, where fouling was reversible during the filtration steps, thereby indicating that the fouling may be removed by cleaning processes. These results demonstrate the feasibility of lipase production using A. niger by solid-state fermentation of agro-industrial residues, followed by successive tangential filtration with membranes, which simplifies the separation and concentration steps that are typically required in downstream processes.
From Rejected to Accepted: Part 2--Preparing a Rejected Manuscript for a New Journal
ERIC Educational Resources Information Center
Stivers, Jan; Cramer, Sharon F.
2017-01-01
Manuscript rejection is a fact of life for academics, and should be seen as just one step in a process of revision and resubmission that typically results in publication. This manuscript is the second in a two-part series offering suggestions to help authors take action on their rejected manuscripts, including analyzing reviewer feedback, revising…
ERIC Educational Resources Information Center
McKenney, Elizabeth L. W.; Waldron, Nancy; Conroy, Maureen
2013-01-01
This study describes the integrity with which 3 general education middle school teachers implemented functional analyses (FA) of appropriate behavior for students who typically engaged in disruption. A 4-step model consistent with behavioral consultation was used to support the assessment process. All analyses were conducted during ongoing…
Aging Optimization of Aluminum-Lithium Alloy C458 for Application to Cryotank Structures
NASA Technical Reports Server (NTRS)
Sova, B. J.; Sankaran, K. K.; Babel, H.; Farahmand, B.; Rioja, R.
2003-01-01
Compared with aluminum alloys such as 2219, which is widely used in space vehicles for cryogenic tanks and unpressurized structures, aluminum-lithium alloys possess attractive combinations of lower density and higher modulus along with comparable mechanical properties. These characteristics have resulted in the successful use of the aluminum-lithium alloy 2195 (Al-1.0 Li-4.0 Cu-0.4 Mg-0.4 Ag-0.12 Zr) for the Space Shuttle External Tank, and the consideration of newer U.S. aluminum-lithium alloys such as L277 and C458 for future space vehicles. These newer alloys generally have lithium contents less than 2 wt. %, and their composition and processing have been carefully tailored to increase the toughness and reduce the mechanical property anisotropy of earlier-generation alloys such as 2090 and 8090. Alloy processing, particularly the aging treatment, has a significant influence on the strength-toughness combinations of aluminum-lithium alloys and their dependence on service environments. Work at NASA Marshall Space Flight Center on alloy 2195 has shown that the cryogenic toughness can be improved by employing a two-step aging process. This is accomplished by aging at a lower temperature in the first step to suppress nucleation of the strengthening precipitate at sub-grain boundaries while promoting nucleation in the interior of the grains. Second-step aging at the normal aging temperature results in precipitate growth to the optimum size. A design-of-experiments aging study was conducted for plate. To achieve the T8 temper, alloy C458 (Al-1.8 Li-2.7 Cu-0.3 Mg-0.08 Zr-0.3 Mn-0.6 Zn) is typically aged at 300 °F for 24 hours. In this study, a two-step aging treatment was developed through a comprehensive 2^4 full factorial design-of-experiments study, with the typical one-step aging used as a reference. Based on the higher lithium content of C458 compared with 2195, the first-step aging temperature was varied between 175 °F and 250 °F. The second-step aging temperature was varied between 275 °F and 325 °F, which is in the range of the single-step aging temperature. The results of the design of experiments used for the T8 temper, as well as a smaller set of experiments for the T6 temper, will be presented. The process of selecting the optimum aging treatment, based on the measured mechanical properties at room and cryogenic temperature as well as the observed deformation mechanisms, will be presented in detail. The implications for the use of alloy C458 in cryotanks will be discussed.
Zimmerman, Margaret S
2018-01-01
This paper explores the reproductive health-related information seeking of low-income women, which has been found to be affected by digital divide disparities. A survey of 70 low-income women explores what information sources women use for reproductive health-related information seeking, what process they go through to find information, and whether they are using sources that they trust. The findings of this study detail a two-step information-seeking process that typically includes a preference for personal, informal sources. Women of this income group often rely upon sources that they do not consider credible. While there have been many studies on the end effects of a lack of accurate and accessible reproductive health information, little research has examined the reproductive healthcare information-seeking patterns of women who live in poverty.
Statistical patterns of visual search for hidden objects
Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.
2012-01-01
The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search of target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition between a directional serial search to an isotropic random movement as the difficulty level of the searching task is increased. PMID:23226829
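The two reported regimes can be reproduced qualitatively by sampling: log-normal saccade lengths versus power-law fixation steps. All distribution parameters below are assumptions for illustration, not the fitted values from the eye-tracking data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Saccade lengths: log-normal, the signature of a multiplicative process.
saccades = rng.lognormal(mean=2.0, sigma=0.5, size=10_000)

# Fixation micro-steps: power law via inverse-CDF sampling, pdf ~ x^(-alpha).
alpha, x_min = 2.5, 1.0
u = rng.random(10_000)
fixation_steps = x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

# The power-law tail inflates extreme values far more than the log-normal.
print(f"max saccade: {saccades.max():.0f}, "
      f"max fixation step: {fixation_steps.max():.0f}")
```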
Bosse, Jens B.; Tanneti, Nikhila S.; Hogue, Ian B.; Enquist, Lynn W.
2015-01-01
Dual-color live cell fluorescence microscopy of fast intracellular trafficking processes, such as axonal transport, requires rapid switching of illumination channels. Typical broad-spectrum sources necessitate the use of mechanical filter switching, which introduces delays between acquisition of different fluorescence channels, impeding the interpretation and quantification of highly dynamic processes. Light Emitting Diodes (LEDs), however, allow modulation of excitation light in microseconds. Here we provide a step-by-step protocol to enable any scientist to build a research-grade LED illuminator for live cell microscopy, even without prior experience with electronics or optics. We quantify and compare components, discuss our design considerations, and demonstrate the performance of our LED illuminator by imaging axonal transport of herpes virus particles with high temporal resolution. PMID:26600461
NASA Astrophysics Data System (ADS)
Yi, Guodong; Li, Jin
2018-03-01
The master cylinder hydraulic system is the core component of the fineblanking press that seriously affects the machine performance. A key issue in the design of the master cylinder hydraulic system is dealing with the heavy shock loads in the fineblanking process. In this paper, an equivalent model of the master cylinder hydraulic system is established based on typical process parameters for practical fineblanking; then, the response characteristics of the master cylinder slider to the step changes in the load and control current are analyzed, and lastly, control strategies for the proportional valve are studied based on the impact of the control parameters on the kinetic stability of the slider. The results show that the kinetic stability of the slider is significantly affected by the step change of the control current, while it is slightly affected by the step change of the system load, which can be improved by adjusting the flow rate and opening time of the proportional valve.
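The slider's reaction to a step change can be illustrated with a generic under-damped second-order model; this stands in for the paper's equivalent hydraulic model, and the natural frequency and damping ratio below are assumed values.

```python
import numpy as np
from scipy import signal

# Generic second-order stand-in for the master cylinder slider dynamics.
wn, zeta = 8.0, 0.4   # natural frequency (rad/s) and damping ratio, assumed
system = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])

t, y = signal.step(system, T=np.linspace(0, 3, 300))
print(f"overshoot after a unit step: {(y.max() - 1) * 100:.0f}%")  # ~25%
```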
NASA Astrophysics Data System (ADS)
Evlyukhin, E.; Museur, L.; Traore, M.; Perruchot, C.; Zerr, A.; Kanaev, A.
2015-12-01
The synthesis of highly biocompatible polymers is important for modern biotechnologies and medicine. Here, we report a unique process based on a two-step high-pressure ramp (HPR) for the ultrafast and efficient bulk polymerization of 2-(hydroxyethyl)methacrylate (HEMA) at room temperature without photo- or thermal activation or the addition of an initiator. The HEMA monomers are first activated during the compression step, but their reactivity is hindered by the dense glass-like environment. The rapid polymerization occurs only in the second step, upon decompression to the liquid state. The conversion yield was found to exceed 90% in the recovered samples. Gel permeation chromatography evidences the overriding role of HEMA2•• biradicals in the polymerization mechanism. The HPR process extends the application field of HP-induced polymerization beyond the family of crystallized monomers considered up to now. It is also an appealing alternative to typical photo- or thermal activation, allowing the efficient synthesis of highly pure organic materials.
Gruber, Pia; Marques, Marco P C; Sulzer, Philipp; Wohlgemuth, Roland; Mayr, Torsten; Baganz, Frank; Szita, Nicolas
2017-06-01
Monitoring and control of pH is essential for controlling reaction conditions and reaction progress in any biocatalytic or biotechnological process. Microfluidic enzymatic reactors are increasingly proposed for process development; however, they typically lack instrumentation, such as pH monitoring. We present a microfluidic side-entry reactor (μSER) and demonstrate for the first time real-time pH monitoring of the progression of an enzymatic reaction in a microfluidic reactor as a first step towards achieving pH control. Two different types of optical pH sensors were integrated at several positions in the reactor channel, which enabled pH monitoring between pH 3.5 and pH 8.5, a broader range than typically reported. The sensors withstood the thermal bonding temperatures typical of microfluidic device fabrication. Additionally, fluidic inputs along the reaction channel were implemented to adjust the pH of the reaction. Time-course profiles of pH were recorded for a transketolase and a penicillin G acylase catalyzed reaction. Without pH adjustment, the former showed a pH increase of 1 pH unit and the latter a pH decrease of about 2.5 pH units. With pH adjustment, the pH drop of the penicillin G acylase catalyzed reaction was significantly attenuated, the reaction conditions were kept at a pH suitable for the operation of the enzyme, and the product yield increased. This contribution represents a further step towards fully instrumented and controlled microfluidic reactors for biocatalytic process development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Saving Material with Systematic Process Designs
NASA Astrophysics Data System (ADS)
Kerausch, M.
2011-08-01
Global competition is forcing the stamping industry to further increase quality, to shorten time-to-market and to reduce total cost. Continuous balancing between these classical time-cost-quality targets throughout the product development cycle is required to ensure future economical success. In today's industrial practice, die layout standards are typically assumed to implicitly ensure the balancing of company-specific time-cost-quality targets. Although die layout standards are a very successful approach, they have two methodical disadvantages. First, the capabilities for tool design have to be continuously adapted to technological innovations, e.g. to take advantage of the full forming capability of new materials. Secondly, the great variety of die design aspects has to be reduced to a generic rule or guideline, e.g. binder shape, draw-in conditions or the use of drawbeads. It is therefore important not to overlook cost or quality opportunities when applying die design standards. This paper describes a systematic workflow with a focus on minimizing material consumption. The starting point of the investigation is a full process plan for a typical structural part, in which all requirements defined by a set of industrially relevant die design standards are fulfilled. In a first step, binder and addendum geometry is systematically checked for material-saving potentials. In a second step, blank shape and draw-in are adjusted to meet thinning, wrinkling and springback targets for a minimum blank solution. Finally, the identified die layout is validated with respect to production robustness versus splits, wrinkles and springback. For all three steps the applied methodology is based on finite element simulation combined with a stochastic variation of input variables. With the proposed workflow, a well-balanced (time-cost-quality) production process assuring minimal material consumption can be achieved.
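The robustness-validation step can be sketched as a Monte Carlo loop over stochastically varied inputs. The `simulate_stamping` function below is a hypothetical placeholder for a finite element run, and the scatter magnitudes and thinning target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_stamping(blank_scale, friction):
    """Hypothetical stand-in for a finite element stamping run; returns a
    thinning value. A real workflow would call an FE solver here."""
    return 0.18 + 0.4 * (blank_scale - 1.0) + 0.2 * (friction - 0.10) \
        + rng.normal(0.0, 0.005)

# Stochastic variation of input variables; count violations of the
# (assumed) thinning target of 0.20 to estimate production robustness.
failures = sum(
    simulate_stamping(rng.normal(1.0, 0.01), rng.normal(0.10, 0.01)) >= 0.20
    for _ in range(1000)
)
print(f"estimated failure rate: {failures / 1000:.1%}")
```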
Pozzolanic filtration/solidification of radionuclides in nuclear reactor cooling water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Englehardt, J.D.; Peng, C.
1995-12-31
Laboratory studies investigating the feasibility of one- and two-step processes for precipitating/coprecipitating radionuclides from nuclear reactor cooling water, filtering with pozzolanic filter aid, and solidifying are reported in this paper. In the one-step process, ferrocyanide salt and excess lime are added ahead of the filter, and the resulting filter cake solidifies by a pozzolanic reaction. The two-step process involves addition of solidifying agents subsequent to filtration. It was found that high-surface-area diatomaceous synthetic calcium silicate powders, sold commercially as functional fillers and carriers, adsorb nickel isotopes from solution at neutral and slightly basic pH. Addition of the silicates to cooling water allowed removal of the tested metal isotopes (nickel, iron, manganese, cobalt, and cesium) simultaneously at neutral to slightly basic pH. The lime-to-diatomite ratio was the compositional characteristic with the greatest influence on final strength, with higher lime ratios giving higher strength. Diatomaceous earth filter aids manufactured without sodium fluxes exhibited higher pozzolanic activity. Pozzolanic filter cake solidified with sodium silicate and a ratio of 0.45 parts lime to 1 part diatomite had compressive strengths ranging from 470 to 595 psi at a 90% confidence level. Leachability indices of all tested metals in the solidified waste were acceptable. In light of the typical requirement of removing iron and the desirability of control over process pH, a two-step process involving addition of Portland cement to the filter cake may be most generally applicable.
[Algorithms of artificial neural networks--practical application in medical science].
Stefaniak, Bogusław; Cholewiński, Witold; Tarkowska, Anna
2005-12-01
Artificial Neural Networks (ANN) may be a tool alternative and complementary to typical statistical analysis. However, in spite of many ready-to-use computer implementations of various ANN algorithms, artificial intelligence is relatively rarely applied to data processing. This paper presents practical aspects of the scientific application of ANN in medicine using widely available algorithms. Several main steps of analysis with ANN are discussed, from material selection and its division into groups to the final quality assessment of the obtained results. The most frequent, typical causes of errors, as well as a comparison of the ANN method with modeling by regression analysis, are also described.
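The workflow outlined above (selecting material, dividing it into groups, training, then assessing quality) maps onto a few lines with a modern library. The sketch below uses scikit-learn and synthetic data as stand-ins for real medical measurements; it illustrates the steps, not any specific study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for medical data: 300 cases, 10 features.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Divide the material into training and validation groups.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Train a small feed-forward ANN, then assess the quality of the result.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```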
Chemicals and Structural Foams to Neutralize or Defeat Anti-Personnel Mines
1990-10-01
The analysis is structured as a goal hierarchy: an overall goal of selecting the best foam system, supported by first-level goals such as best foam product and best delivery. Task steps are counted at the level of individual motions; for example, pouring back and forth three times would count as three steps for that part of the process, plus any other motions, such as pulling off the lid. [Figures: typical tilt-rod AP mine; typical pull firing pin device; pressure-sensitive plastic-cased mine.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liddell, Heather; Brueske, Sabine; Carpenter, Alberta
With their high strength-to-weight ratios, fiber-reinforced polymer (FRP) composites are important materials for lightweighting in structural applications; however, manufacturing challenges such as low process throughput and poor quality control can lead to high costs and variable performance, limiting their use in commercial applications. One of the most significant challenges for advanced composite materials is their high manufacturing energy intensity. This study explored the energy intensities of two lightweight FRP composite materials (glass- and carbon-fiber-reinforced polymers), with three lightweight metals (aluminum, magnesium, and titanium) and structural steel (as a reference material) included for comparison. Energy consumption for current typical and state-of-the-art manufacturing processes was estimated for each material, deconstructing manufacturing process energy use by sub-process and manufacturing pathway in order to better understand the most energy-intensive steps. Energy-saving opportunities were identified and quantified for each production step based on a review of applied R&D technologies currently under development in order to estimate the practical minimum energy intensity. Results demonstrate that while carbon fiber reinforced polymer (CFRP) composites have the highest current manufacturing energy intensity of all materials considered, the large differences between current typical and state-of-the-art energy intensity levels (the 'current opportunity') and between state-of-the-art and practical minimum energy intensity levels (the 'R&D opportunity') suggest that large-scale energy savings are within reach.
USDA-ARS?s Scientific Manuscript database
The science and practice of step counting in children (typically aged 6-11 years) and adolescents (typically aged 12-19 years) has evolved rapidly over a relatively brief period with the commercial availability of research-grade pedometers and accelerometers. Recent reviews have summarized considera...
Direct, enantioselective α-alkylation of aldehydes using simple olefins.
Capacci, Andrew G; Malinowski, Justin T; McAlpine, Neil J; Kuhne, Jerome; MacMillan, David W C
2017-11-01
Although the α-alkylation of ketones has already been established, the analogous reaction using aldehyde substrates has proven surprisingly elusive. Despite the structural similarities between the two classes of compounds, the sensitivity and unique reactivity of the aldehyde functionality has typically required activated substrates or specialized additives. Here, we show that the synergistic merger of three catalytic processes, namely photoredox, enamine and hydrogen-atom transfer (HAT) catalysis, enables an enantioselective α-aldehyde alkylation reaction that employs simple olefins as coupling partners. Chiral imidazolidinones or prolinols, in combination with a thiophenol, iridium photoredox catalyst and visible light, have been successfully used in a triple catalytic process that is temporally sequenced to deliver a new hydrogen and electron-borrowing mechanism. This multicatalytic process enables both intra- and intermolecular aldehyde α-methylene coupling with olefins to construct cyclic and acyclic products, respectively. With respect to atom- and step-economy ideals, this stereoselective process allows the production of high-value molecules from feedstock chemicals in one step while consuming only photons.
Intelligent monitoring and control of semiconductor manufacturing equipment
NASA Technical Reports Server (NTRS)
Murdock, Janet L.; Hayes-Roth, Barbara
1991-01-01
The use of AI methods to monitor and control semiconductor fabrication in a state-of-the-art manufacturing environment called the Rapid Thermal Multiprocessor is described. Semiconductor fabrication involves many complex processing steps with limited opportunities to measure process and product properties. By applying additional process and product knowledge to that limited data, AI methods augment classical control methods by detecting abnormalities and trends, predicting failures, diagnosing, planning corrective action sequences, explaining diagnoses or predictions, and reacting to anomalous conditions that classical control systems typically would not correct. Research methodology and issues are discussed, and two diagnosis scenarios are examined.
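One of the monitoring functions described, detecting abnormalities and trends that classical limit checks would miss, can be sketched with an exponentially weighted moving average. The signal, smoothing factor, and alarm band below are assumptions, not parameters of the actual system.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic chamber-temperature trace with a slow drift injected at step 60,
# e.g. a degrading heater that a fixed threshold would catch only very late.
readings = 300.0 + rng.normal(0.0, 0.3, 100)
readings[60:] += np.linspace(0.0, 3.0, 40)

ewma, lam, limit = readings[0], 0.2, 1.0   # smoothing factor and band, assumed
for i, x in enumerate(readings):
    ewma = lam * x + (1 - lam) * ewma      # exponentially weighted average
    if abs(ewma - 300.0) > limit:
        print(f"trend alarm at step {i}")
        break
```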
RN, CIO: an executive informatics career.
Staggers, Nancy; Lasome, Caterina E M
2005-01-01
The Chief Information Officer (CIO) position is a viable new career track for clinical informaticists. Nurses, especially informatics nurses, are uniquely positioned for the CIO role because of their operational knowledge of clinical processes, communication skills, systems thinking abilities, and knowledge about information structures and processes. This article describes essential knowledge and skills for the CIO executive position. Competencies not typical to nurses can be learned and developed, particularly strategic visioning and organizational finesse. This article concludes by describing career development steps toward the CIO position: leadership and management; healthcare operations; organizational finesse; and informatics knowledge, processes, methods, and structures.
Schulze, H Georg; Turner, Robin F B
2015-06-01
High-throughput information extraction from large numbers of Raman spectra is becoming an increasingly taxing problem due to the proliferation of new applications enabled using advances in instrumentation. Fortunately, in many of these applications, the entire process can be automated, yielding reproducibly good results with significant time and cost savings. Information extraction consists of two stages, preprocessing and analysis. We focus here on the preprocessing stage, which typically involves several steps, such as calibration, background subtraction, baseline flattening, artifact removal, smoothing, and so on, before the resulting spectra can be further analyzed. Because the results of some of these steps can affect the performance of subsequent ones, attention must be given to the sequencing of steps, the compatibility of these sequences, and the propensity of each step to generate spectral distortions. We outline here important considerations to effect full automation of Raman spectral preprocessing: what is considered full automation; putative general principles to effect full automation; the proper sequencing of processing and analysis steps; conflicts and circularities arising from sequencing; and the need for, and approaches to, preprocessing quality control. These considerations are discussed and illustrated with biological and biomedical examples reflecting both successful and faulty preprocessing.
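Two of the named preprocessing steps, baseline flattening followed by smoothing, are sketched below on a synthetic spectrum; the polynomial-baseline and Savitzky-Golay choices and all parameters are illustrative assumptions, and their ordering is exactly the kind of sequencing decision discussed above.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(3)

# Synthetic "spectrum": one Gaussian band on a curved baseline plus noise.
x = np.linspace(0.0, 1.0, 500)
spectrum = 5 * x**2 + np.exp(-((x - 0.5) / 0.02) ** 2) + rng.normal(0, 0.05, 500)

# Step 1: crude baseline flattening via a low-order polynomial fit.
# (Production pipelines usually mask peak regions before fitting.)
baseline = np.polyval(np.polyfit(x, spectrum, 2), x)
flattened = spectrum - baseline

# Step 2: Savitzky-Golay smoothing of the flattened spectrum.
smoothed = savgol_filter(flattened, window_length=11, polyorder=3)
print(f"peak height after preprocessing: {smoothed.max():.2f}")
```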
Brower, Kevin P; Ryakala, Venkat K; Bird, Ryan; Godawat, Rahul; Riske, Frank J; Konstantinov, Konstantin; Warikoo, Veena; Gamble, Jean
2014-01-01
Downstream sample purification for quality attribute analysis is a significant bottleneck in process development for non-antibody biologics. Multi-step chromatography process train purifications are typically required prior to many critical analytical tests. This prerequisite leads to limited throughput, long lead times to obtain purified product, and significant resource requirements. In this work, immunoaffinity purification technology has been leveraged to achieve single-step affinity purification of two different enzyme biotherapeutics (Fabrazyme® [agalsidase beta] and Enzyme 2) with polyclonal and monoclonal antibodies, respectively, as ligands. Target molecules were rapidly isolated from cell culture harvest in sufficient purity to enable analysis of critical quality attributes (CQAs). Most importantly, this is the first study that demonstrates the application of predictive analytics techniques to predict critical quality attributes of a commercial biologic. The data obtained using the affinity columns were used to generate appropriate models to predict quality attributes that would be obtained after traditional multi-step purification trains. These models empower process development decision-making with drug substance-equivalent product quality information without generation of actual drug substance. Optimization was performed to ensure maximum target recovery and minimal target protein degradation. The methodologies developed for Fabrazyme were successfully reapplied for Enzyme 2, indicating platform opportunities. The impact of the technology is significant, including reductions in time and personnel requirements, rapid product purification, and substantially increased throughput. Applications are discussed, including upstream and downstream process development support to achieve the principles of Quality by Design (QbD) as well as integration with bioprocesses as a process analytical technology (PAT). © 2014 American Institute of Chemical Engineers.
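The predictive-analytics idea, mapping a quality attribute measured after single-step affinity capture to the value expected after the full purification train, reduces in its simplest form to a calibrated regression. The paired measurements below are synthetic placeholders; the study's actual models and data are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Synthetic paired data: a quality attribute (e.g., % of some variant)
# measured after single-step affinity capture vs. after the full train.
affinity_value = rng.uniform(1.0, 5.0, 30).reshape(-1, 1)
train_value = 0.8 * affinity_value.ravel() + rng.normal(0.0, 0.1, 30)

model = LinearRegression().fit(affinity_value, train_value)
predicted = model.predict([[3.0]])[0]
print(f"drug-substance-equivalent prediction: {predicted:.2f}")
```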
Horsch, Salome; Kopczynski, Dominik; Kuthe, Elias; Baumbach, Jörg Ingo; Rahmann, Sven
2017-01-01
Motivation: Disease classification from molecular measurements typically requires an analysis pipeline from raw noisy measurements to final classification results. Multi-capillary column ion mobility spectrometry (MCC-IMS) is a promising technology for the detection of volatile organic compounds in the air of exhaled breath. From raw measurements, the peak regions representing the compounds have to be identified, quantified, and clustered across different experiments. Currently, several steps of this analysis process require manual intervention of human experts. Our goal is to identify a fully automatic pipeline that yields competitive disease classification results compared to an established but subjective and tedious semi-manual process.

Method: We combine a large number of modern methods for peak detection, peak clustering, and multivariate classification into analysis pipelines for raw MCC-IMS data. We evaluate all combinations on three different real datasets in an unbiased cross-validation setting. We determine which specific algorithmic combinations lead to high AUC values in disease classifications across the different medical application scenarios.

Results: The best fully automated analysis process achieves even better classification results than the established manual process. The best algorithms for the three analysis steps are (i) SGLTR (Savitzky-Golay Laplace-operator filter thresholding regions) and LM (Local Maxima) for automated peak identification, (ii) EM clustering (Expectation Maximization) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for the clustering step and (iii) RF (Random Forest) for multivariate classification. Thus, automated methods can replace the manual steps in the analysis process to enable an unbiased high-throughput use of the technology. PMID:28910313
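The evaluation strategy, scoring each algorithmic combination by cross-validated AUC, can be sketched as below. Only the final classification step varies here, and the feature matrix is synthetic; a full reproduction would also swap the peak-detection and clustering steps upstream.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a peak-intensity feature matrix.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Score each candidate classifier by cross-validated AUC, mirroring the
# unbiased pipeline comparison described above.
candidates = {"RF": RandomForestClassifier(random_state=0),
              "LR": LogisticRegression(max_iter=1000)}
for name, clf in candidates.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```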
Gallium arsenide processing for gate array logic
NASA Technical Reports Server (NTRS)
Cole, Eric D.
1989-01-01
The development of a reliable and reproducible GaAs process was initiated for applications in gate array logic. Gallium arsenide is an extremely important material for high-speed electronic applications in both digital and analog circuits, since its electron mobility is 3 to 5 times that of silicon; this allows for faster switching times in devices fabricated with it. Unfortunately, GaAs is an extremely difficult material to process compared with silicon, and since it includes the arsenic component, GaAs can be quite dangerous (toxic), especially during some heating steps. The first stage of the research was directed at developing a simple process to produce GaAs MESFETs. The MESFET (MEtal Semiconductor Field Effect Transistor) is the most useful, practical and simple active device that can be fabricated in GaAs. It utilizes ohmic source and drain contacts separated by a Schottky gate. The gate width is typically a few microns. Several process steps were required to produce a good working device, including ion implantation, photolithography, thermal annealing, and metal deposition. A process was designed to reduce the total number of steps to a minimum so as to reduce possible errors. The first run produced no good devices. The problem occurred during an aluminum etch step while defining the gate contacts. It was found that the chemical etchant attacked the GaAs, causing trenching and subsequent severing of the active gate region from the rest of the device; thus all devices appeared as open circuits. This problem is being corrected, and since it was the last step in the process, the correction should be successful. The second planned stage involves the circuit assembly of the discrete MESFETs into logic gates for test and analysis. Finally, the third stage is to incorporate the designed process with the tested circuit in a layout that would produce the gate array as a GaAs integrated circuit.
Study of Ion Beam Forming Process in Electric Thruster Using 3D FEM Simulation
NASA Astrophysics Data System (ADS)
Huang, Tao; Jin, Xiaolin; Hu, Quan; Li, Bin; Yang, Zhonghai
2015-11-01
There are two algorithms for simulating the process of ion beam forming in an electric thruster. The first is an electrostatic steady-state algorithm. First, an assumed surface, far enough from the accelerator grids, launches the ion beam, and the current density is calculated from a theoretical formula. Second, these particles are advanced one by one according to the ions' equations of motion until they leave the computational region. Third, the electrostatic potential is recalculated and updated by solving the Poisson equation. At the end, convergence is tested to determine whether the calculation should continue; the entire process is repeated until convergence is reached. The second is a time-dependent PIC algorithm. In each global time step, new particles are produced in the simulation domain with a prescribed distribution of position and velocity. All particles still in the system are then advanced over local time steps. Typically, we set the local time step small enough that a particle must be advanced about five times to cross the element in which it is located.
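The particle-advance inner loop common to both algorithms can be sketched in a few lines. The charge-to-mass ratio is approximately that of a singly charged xenon ion, but the field lookup, positions, velocities, and time step below are placeholders; a real code interpolates the solved mesh potential onto each particle.

```python
import numpy as np

q_over_m = 7.35e5    # C/kg, roughly a singly charged xenon ion
dt_local = 1e-9      # s, chosen so a particle crosses a cell in ~5 steps

def efield_at(x):
    """Placeholder uniform field (V/m); real codes interpolate from the mesh."""
    return -1e3 * np.ones_like(x)

x = np.linspace(0.0, 1e-3, 100)   # particle positions, m (assumed)
v = np.full_like(x, 2e3)          # particle velocities, m/s (assumed)

for _ in range(5):                # ~5 local steps per element crossing
    v += q_over_m * efield_at(x) * dt_local   # accelerate in the local field
    x += v * dt_local                          # advance positions
```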
Isolation and Purification of Biotechnological Products
NASA Astrophysics Data System (ADS)
Hubbuch, Jürgen; Kula, Maria-Regina
2007-05-01
The production of modern pharmaceutical proteins is one of the most rapidly growing fields in biotechnology. The overall development and production is a complex task ranging from strain development and cultivation to the purification and formulation of the drug. Downstream processing, however, still accounts for the major part of production costs. This is mainly due to the high demands on purity, and thus safety, of the final product, and it results in processes with a sequence of typically more than 10 unit operations. Consequently, even if each process step operated at near-optimal yield, a very significant amount of product would be lost. The majority of unit operations applied in downstream processing have a long history in the field of chemical and process engineering; nevertheless, mathematical descriptions of the respective processes and the economical large-scale production of modern pharmaceutical products are hampered by the complexity of the biological feedstock, especially the high molecular weight and limited stability of proteins. Developing new operational steps, as well as a successful overall process, therefore requires a deeper understanding of the thermodynamics and physics behind the applied processes, as well as their implications for the product.
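The yield argument is worth making explicit: with more than 10 unit operations, even a near-optimal per-step recovery compounds into a large overall loss. The 95% per-step figure below is an assumed illustrative value.

```python
# Compounding yield across a typical downstream sequence.
per_step_yield = 0.95   # assumed near-optimal recovery per unit operation
n_steps = 10
print(f"overall yield after {n_steps} steps: {per_step_yield ** n_steps:.0%}")
# -> about 60%: nearly half the product is lost despite "good" steps
```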
Collins, John P.; Way, J. Douglas
1995-09-19
A hydrogen-selective membrane comprises a tubular porous ceramic support having a palladium metal layer deposited on an inside surface of the ceramic support. The thickness of the palladium layer is greater than about 10 µm but typically less than about 20 µm. The hydrogen permeation rate of the membrane is greater than about 1.0 moles/m²·s at a temperature of greater than about 500 °C and a transmembrane pressure difference of about 1,500 kPa. Moreover, the hydrogen-to-nitrogen selectivity is greater than about 600 at a temperature of greater than about 500 °C and a transmembrane pressure of about 700 kPa. Hydrogen can be separated from a mixture of gases using the membrane. The method may include the step of heating the mixture of gases to a temperature of greater than about 400 °C and less than about 1000 °C before the step of flowing the mixture of gases past the membrane. The mixture of gases may include ammonia. The ammonia typically is decomposed to provide nitrogen and hydrogen using a catalyst such as nickel. The catalyst may be placed inside the tubular ceramic support. The mixture of gases may be supplied by an industrial process such as the mixture of exhaust gases from the IGCC process.
NASA Astrophysics Data System (ADS)
Hartmann, J. M.; Veillerot, M.; Prévitali, B.
2017-10-01
We have compared co-flow and cyclic deposition/etch (CDE) processes for the selective epitaxial growth of Si:P layers. High growth rates, relatively low resistivities and significant amounts of tensile strain (up to 10 nm min⁻¹, 0.55 mOhm cm and a strain equivalent to 1.06% of substitutional C in Si:C layers) were obtained at 700 °C and 760 Torr with a co-flow approach and a SiH2Cl2 + PH3 + HCl chemistry. This approach was successfully used to thicken the source and drain regions of n-type fin-shaped Field Effect Transistors. Meanwhile, the (Si2H6 + PH3/HCl + GeH4) CDE process evaluated yielded even lower resistivities (typically 0.4 mOhm cm) at 600 °C and 80 Torr, at the cost, however, of the tensile strain, which was lost due to (i) the incorporation of Ge atoms (typically 1.5%) into the lattice during the selective etch steps and (ii) a reduction by a factor of two of the P atomic concentration in CDE layers compared to that in layers grown in a single step (5 × 10²⁰ cm⁻³ compared to 10²¹ cm⁻³).
Magnetically Enhanced Solid-Liquid Separation
NASA Astrophysics Data System (ADS)
Rey, C. M.; Keller, K.; Fuchs, B.
2005-07-01
DuPont is developing an entirely new method of solid-liquid filtration involving the use of magnetic fields and magnetic field gradients. The new hybrid process, entitled Magnetically Enhanced Solid-Liquid Separation (MESLS), is designed to improve the de-watering kinetics and reduce the residual moisture content of solid particulates mechanically separated from liquid slurries. Gravitation, pressure, temperature, centrifugation, and fluid dynamics have dictated traditional solid-liquid separation for the past 50 years. The introduction of an external field (i.e. the magnetic field) offers the promise of manipulating particle behavior in an entirely new manner, which leads to increased process efficiency. Traditional solid-liquid separation typically consists of two primary steps. The first is a mechanical step in which the solid particulate is separated from the liquid using, e.g., gas pressure through a filter membrane or centrifugation. The second step is a thermal drying process, which is required due to imperfect mechanical separation. The thermal drying process is 100 to 200 times less energy efficient than the mechanical step. Since enormous volumes of materials are processed each year, more efficient mechanical solid-liquid separation can be leveraged into dramatic reductions in overall energy consumption by reducing downstream drying requirements. Using DuPont's MESLS process, initial test results showed four very important effects of the magnetic field on the solid-liquid filtration process: 1) reduction of the time to reach gas breakthrough, 2) less loss of solids into the filtrate, 3) reduction of the residual moisture content of the solids, and 4) acceleration of the de-watering kinetics. These test results and their potential impact on future commercial solid-liquid filtration are discussed. New applications can be found in mining, chemical and bioprocesses.
Fast automatic delineation of cardiac volume of interest in MSCT images
NASA Astrophysics Data System (ADS)
Lorenz, Cristian; Lessick, Jonathan; Lavi, Guy; Bulow, Thomas; Renisch, Steffen
2004-05-01
Computed Tomography Angiography (CTA) is an emerging modality for assessing cardiac anatomy. The delineation of the cardiac volume of interest (VOI) is a pre-processing step for subsequent visualization or image processing. It serves to suppress anatomic structures that are not in the primary focus of the cardiac application, such as the sternum, ribs, spinal column, descending aorta and pulmonary vasculature. These structures obliterate standard visualizations such as direct volume renderings or maximum intensity projections. In addition, the outcome and performance of post-processing steps such as ventricle suppression, coronary artery segmentation or the detection of the short and long axes of the heart can be improved. The structures belonging to the cardiac VOI (coronary arteries and veins, myocardium, ventricles and atria) differ tremendously in appearance. In addition, there is no clear image feature associated with the contour (or rather, cut-surface) distinguishing between the cardiac VOI and surrounding tissue, making the automatic delineation of the cardiac VOI a difficult task. In a first step, the presented approach locates the chest wall and descending aorta in all image slices, giving a rough estimate of the location of the heart. In a second step, a Fourier-based active contour approach delineates the border of the cardiac VOI slice by slice. The algorithm has been evaluated on 41 multi-slice CT datasets, including cases with coronary stents and venous and arterial bypasses. The typical processing time amounts to 5-10 s on a 1 GHz P3 PC.
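The slice-wise delineation step can be illustrated with a generic active contour from scikit-image, standing in for the paper's Fourier-based formulation; the synthetic bright disk below replaces a real CT slice, and all snake parameters are assumed.

```python
import numpy as np
from skimage.draw import disk
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic "slice": a smoothed bright disk standing in for the cardiac region.
img = np.zeros((200, 200))
rr, cc = disk((100, 100), 60)
img[rr, cc] = 1.0
img = gaussian(img, sigma=3)

# Circular initialization around the rough heart location, then refinement.
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 80 * np.sin(theta), 100 + 80 * np.cos(theta)])
snake = active_contour(img, init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)   # (200, 2) refined contour points for this slice
```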
From Sequences to Insights in Microbial Ecology
Knight, R.
2010-01-01
Rapid declines in the cost of sequencing have made large volumes of DNA sequence data available to individual investigators. Now, data analysis is the rate-limiting step: providing a user with sequences alone typically leads to bewilderment, frustration, and skepticism about the technology. In this talk, I focus on how to extract insights from 16S rRNA data, including key lab steps (barcoding and normalization) and which tools are available to perform routine but essential processing steps such as denoising, chimera detection, taxonomy assignment, and diversity analyses (including detection of biological clusters and gradients in the samples). Providing users with advice on these points and with a standard pipeline they can exploit (but modify if circumstances require) can greatly accelerate the rate of understanding, publication, and acquisition of funding for further studies.
Course constructions: A case-base of forensic toxicology.
Zhou, Nan; Wu, Yeda; Su, Terry; Zhang, Liyong; Yin, Kun; Zheng, Da; Zheng, Jingjing; Huang, Lei; Wu, Qiuping; Cheng, Jianding
2017-08-01
Forensic toxicology education in China is limited by insufficient teaching methods and resources, resulting in students with adequate theoretical principles but lacking practical experience. Typical cases used as teaching materials vividly represent intoxication and provide students with an opportunity to practice and hone case-solving skills. In 2013, the Department of Forensic Pathology at Zhongshan School of Medicine began to construct top-quality courses in forensic toxicology; its first step was creating a base of typical intoxication cases. This essay reviews the construction process of this case-base, which is intended to set an example for forensic toxicology education. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
NASA Technical Reports Server (NTRS)
Stephens, J. R.; Tien, J. K.
1983-01-01
A typical innovation-to-commercialization process for the development of a new hot section gas turbine material requires one to two decades with attendant costs in the tens of millions of dollars. This transfer process is examined to determine the potential rate-controlling steps for introduction of future low strategic metal content alloys or processes. Case studies are used to highlight the barriers to commercialization as well as to identify the means by which these barriers can be surmounted. The opportunities for continuing joint government-university-industry partnerships in planning and conducting strategic materials R&D programs are also discussed.
Computer-Aided Diagnosis Systems for Lung Cancer: Challenges and Methodologies
El-Baz, Ayman; Beache, Garth M.; Gimel'farb, Georgy; Suzuki, Kenji; Okada, Kazunori; Elnakib, Ahmed; Soliman, Ahmed; Abdollahi, Behnoush
2013-01-01
This paper overviews one of the most important, interesting, and challenging problems in oncology, the problem of lung cancer diagnosis. Developing an effective computer-aided diagnosis (CAD) system for lung cancer is of great clinical importance and can increase the patient's chance of survival. For this reason, CAD systems for lung cancer have been investigated in a huge number of research studies. A typical CAD system for lung cancer diagnosis is composed of four main processing steps: segmentation of the lung fields, detection of nodules inside the lung fields, segmentation of the detected nodules, and diagnosis of the nodules as benign or malignant. This paper overviews the current state-of-the-art techniques that have been developed to implement each of these CAD processing steps. For each technique, various aspects of technical issues, implemented methodologies, training and testing databases, and validation methods, as well as achieved performances, are described. In addition, the paper addresses several challenges that researchers face in each implementation step and outlines the strengths and drawbacks of the existing approaches for lung cancer CAD systems. PMID:23431282
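The four-step pipeline described above lends itself to a straightforward software decomposition. The following is a minimal, runnable sketch of that structure, assuming simple Hounsfield-unit thresholds and a toy scoring function in place of the real segmentation and classification methods surveyed in the paper; all function names and parameter values are illustrative, not taken from the reviewed systems.

```python
import numpy as np
from scipy import ndimage

def segment_lungs(ct, air_hu=-400.0):
    # Step 1: crude lung-field mask via Hounsfield-unit thresholding.
    return ct < air_hu

def detect_and_segment_nodules(ct, lung_mask, nodule_hu=-100.0):
    # Steps 2 and 3: look for dense voxels inside the (hole-filled) lung
    # region and split them into connected components, one mask per
    # candidate nodule.
    region = ndimage.binary_fill_holes(lung_mask)
    candidates = region & (ct > nodule_hu)
    labels, n = ndimage.label(candidates)
    return [labels == i for i in range(1, n + 1)]

def classify_nodule(ct, mask):
    # Step 4: toy stand-in for a real benign/malignant classifier --
    # larger, denser components get a higher malignancy score.
    size = float(mask.sum())
    mean_hu = float(ct[mask].mean())
    return 1.0 / (1.0 + np.exp(-(0.01 * size + 0.005 * mean_hu)))

ct = np.random.normal(-600.0, 200.0, size=(64, 64, 64))  # fake CT volume (HU)
lungs = segment_lungs(ct)
nodules = detect_and_segment_nodules(ct, lungs)
scores = [classify_nodule(ct, m) for m in nodules]
```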
Fast Enzymatic Processing of Proteins for MS Detection with a Flow-through Microreactor
Lazar, Iulia M.; Deng, Jingren; Smith, Nicole
2016-01-01
The vast majority of mass spectrometry (MS)-based protein analysis methods involve an enzymatic digestion step prior to detection, typically with trypsin. This step is necessary for the generation of small molecular weight peptides, generally with MW < 3,000-4,000 Da, that fall within the effective scan range of mass spectrometry instrumentation. Conventional protocols involve overnight (O/N) enzymatic digestion at 37 °C. Recent advances have led to the development of a variety of strategies, typically involving the use of a microreactor with immobilized enzymes or of a range of complementary physical processes that reduce the time necessary for proteolytic digestion to a few minutes (e.g., microwave or high-pressure). In this work, we describe a simple and cost-effective approach that can be implemented in any laboratory for achieving fast enzymatic digestion of a protein. The protein (or protein mixture) is adsorbed on C18-bonded reversed-phase high performance liquid chromatography (HPLC) silica particles preloaded in a capillary column, and trypsin in aqueous buffer is infused over the particles for a short period of time. To enable on-line MS detection, the tryptic peptides are eluted with a solvent system with increased organic content directly in the MS ion source. This approach avoids the use of high-priced immobilized-enzyme particles and requires no additional aids to complete the process. Protein digestion and complete sample analysis can be accomplished in less than ~3 min and ~30 min, respectively. PMID:27078683
Radically New Adsorption Cycles for Carbon Dioxide Sequestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
James A. Ritter; Armin D. Ebner; James A. McIntyre
2005-10-11
In Parts I and II of this project, a rigorous pressure swing adsorption (PSA) process simulator was used to study new, high-temperature PSA cycles, based on the use of a K-promoted HTlc adsorbent and 4- and 5-step (bed) vacuum swing PSA cycles, which were designed to process a typical stack gas effluent at 575 K containing (in vol%) 15% CO2, 75% N2 and 10% H2O into a light product stream depleted of CO2 and a heavy product stream enriched in CO2. Literally thousands (2,850) of simulations were carried out to the periodic state to study the effects of the light product purge-to-feed ratio (γ), cycle step time (ts) or cycle time (tc), high-to-low pressure ratio (πT), and heavy product recycle ratio (RR) on the process performance, while changing the cycle configuration from 4- to 5-step (bed) designs utilizing combinations of light and heavy reflux steps, two different depressurization modes, and two sets of CO2-HTlc mass transfer coefficients. The process performance was judged in terms of the CO2 purity and recovery, and the feed throughput. The best process performance was obtained from a 5-step (bed) stripping PSA cycle with a light reflux step and a heavy reflux step (with the heavy reflux gas obtained from the low-pressure purge step), with a CO2 purity of 78.9%, a CO2 recovery of 57.4%, and a throughput of 11.5 L STP/hr/kg. This performance improved substantially when the CO2-HTlc adsorption and desorption mass transfer coefficients (uncertain quantities at this time) were increased by factors of five, with a CO2 purity of 90.3%, a CO2 recovery of 73.6%, and a throughput of 34.6 L STP/hr/kg. Overall, this preliminary study disclosed the importance of cycle configuration through the heavy and dual reflux concepts, and the importance of well-defined mass transfer coefficients to the performance of a high-temperature PSA process for CO2 capture and concentration from flue and stack gases using an HTlc adsorbent. This study is continuing.
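A driver for this kind of cycle-configuration and parameter sweep can be sketched in a few lines. Here run_psa_cycle is a placeholder for the rigorous PSA simulator used in the project, and the parameter grids are illustrative values, not those of the 2,850 reported runs.

```python
from itertools import product

def run_psa_cycle(gamma, t_step, pressure_ratio, recycle_ratio, config):
    # Placeholder: a real simulator would integrate the chosen cycle to its
    # periodic state and return (CO2 purity %, CO2 recovery %, throughput).
    return 0.0, 0.0, 0.0

grids = {
    "gamma": [0.05, 0.10, 0.20],         # light-product purge-to-feed ratio
    "t_step": [30.0, 60.0, 120.0],       # cycle step time, s
    "pressure_ratio": [3.0, 5.0, 10.0],  # high-to-low pressure ratio (piT)
    "recycle_ratio": [0.0, 0.5, 1.0],    # heavy-product recycle ratio (RR)
}

best = None
for config in ("4-step", "5-step stripping with light and heavy reflux"):
    for g, t, p, r in product(*grids.values()):
        purity, recovery, throughput = run_psa_cycle(g, t, p, r, config)
        score = (purity, recovery, throughput)
        if best is None or score > best[0]:
            best = (score, config, (g, t, p, r))
print(best)
```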
Multitarget detection algorithm for automotive FMCW radar
NASA Astrophysics Data System (ADS)
Hyun, Eugin; Oh, Woo-Jin; Lee, Jong-Hun
2012-06-01
Today, 77 GHz FMCW (frequency-modulated continuous wave) radar has strong advantages for range and velocity detection in automotive applications. However, FMCW radar produces ghost targets and missed targets in multi-target situations. In this paper, in order to overcome these limitations, we propose an effective pairing algorithm, which consists of two steps. In the proposed method, a waveform with different slopes in two periods is used. In the first pairing step, all candidate combinations of range and velocity are obtained in each of the two waveform periods. In the second pairing step, the results of the first step are used to detect the fine range and velocity. Here we also propose a range-velocity windowing technique to compensate for the non-ideal beat-frequency characteristic that arises from the non-linearity of the RF module. Experimental results show that the performance of the proposed algorithm is improved compared with that of the typical method.
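To make the two-period geometry concrete, the sketch below solves the beat-frequency equations for a pair of chirp slopes and gates the solutions with a range-velocity window, in the spirit of the proposed method; the slope values, window limits, and sign convention (positive velocity for approaching targets) are assumptions for illustration rather than the authors' parameters.

```python
from itertools import product

C = 3.0e8          # speed of light, m/s
FC = 77.0e9        # carrier frequency, Hz
LAM = C / FC       # wavelength, m

def solve_range_velocity(f1, f2, s1, s2):
    # Invert the two beat-frequency equations (one per chirp slope):
    #   f1 = (2*s1/C)*r + 2*v/LAM
    #   f2 = (2*s2/C)*r + 2*v/LAM
    r = C * (f1 - f2) / (2.0 * (s1 - s2))
    v = (f1 - 2.0 * s1 * r / C) * LAM / 2.0
    return r, v

def pair_targets(beats1, beats2, s1, s2, r_max=200.0, v_max=60.0):
    # First pairing step: form every (beat1, beat2) combination and solve it.
    # Second step: keep only solutions inside a plausible range-velocity
    # window, which suppresses ghosts produced by wrong pairings.
    matches = []
    for f1, f2 in product(beats1, beats2):
        r, v = solve_range_velocity(f1, f2, s1, s2)
        if 0.0 < r < r_max and abs(v) < v_max:
            matches.append((r, v))
    return matches

# One target at roughly 50 m closing at ~26 m/s (invented numbers), seen
# with chirp slopes of 50 and 30 MHz/us:
print(pair_targets([16.68e6], [10.01e6], s1=50.0e12, s2=30.0e12))
```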
Pretreatment methods for bioethanol production.
Xu, Zhaoyang; Huang, Fang
2014-09-01
Lignocellulosic biomass, such as wood, grass, and agricultural and forest residues, is a potential resource for the production of bioethanol. The current biochemical process of converting biomass to bioethanol typically consists of three main steps: pretreatment, enzymatic hydrolysis, and fermentation. In this process, pretreatment is probably the most crucial step, since it has a large impact on the efficiency of the overall bioconversion. The aim of pretreatment is to disrupt recalcitrant structures of cellulosic biomass to make cellulose more accessible to the enzymes that convert carbohydrate polymers into fermentable sugars. This paper reviews several leading acidic, neutral, and alkaline pretreatment technologies. Different pretreatment methods, including dilute acid pretreatment (DAP), steam explosion pretreatment (SEP), organosolv, liquid hot water (LHW), ammonia fiber expansion (AFEX), soaking in aqueous ammonia (SAA), sodium hydroxide/lime pretreatments, and ozonolysis are introduced and discussed in detail. The key points of this minireview are the structural changes, primarily in cellulose, hemicellulose, and lignin, during the above leading pretreatment technologies.
Xu, Yan; Wu, Qian; Shimatani, Yuji; Yamaguchi, Koji
2015-10-07
Due to the lack of regeneration methods, the reusability of nanofluidic chips is a significant technical challenge impeding the efficient and economic promotion of both fundamental research and practical applications on nanofluidics. Herein, a simple method for the total regeneration of glass nanofluidic chips was described. The method consists of sequential thermal treatment with six well-designed steps, which correspond to four sequential thermal and thermochemical decomposition processes, namely, dehydration, high-temperature redox chemical reaction, high-temperature gasification, and cooling. The method enabled the total regeneration of typical 'dead' glass nanofluidic chips by eliminating physically clogged nanoparticles in the nanochannels, removing chemically reacted organic matter on the glass surface and regenerating permanent functional surfaces of dissimilar materials localized in the nanochannels. The method provides a technical solution to significantly improve the reusability of glass nanofluidic chips and will be useful for the promotion and acceleration of research and applications on nanofluidics.
The Overgrid Interface for Computational Simulations on Overset Grids
NASA Technical Reports Server (NTRS)
Chan, William M.; Kwak, Dochan (Technical Monitor)
2002-01-01
Computational simulations using overset grids typically involve multiple steps and a variety of software modules. A graphical interface called OVERGRID has been specially designed for such purposes. Data required and created by the different steps include geometry, grids, domain connectivity information and flow solver input parameters. The interface provides a unified environment for the visualization, processing, generation and diagnosis of such data. General modules are available for the manipulation of structured grids and unstructured surface triangulations. Modules more specific to the overset approach include surface curve generators, hyperbolic and algebraic surface grid generators, a hyperbolic volume grid generator, Cartesian box grid generators, and domain connectivity pre-processing tools. An interface provides automatic selection and viewing of flow solver boundary conditions, and various other flow solver inputs. For problems involving multiple components in relative motion, a module is available to build the component/grid relationships and to prescribe and animate the dynamics of the different components.
Pretreatment of Cellulose By Electron Beam Irradiation Method
NASA Astrophysics Data System (ADS)
Jusri, N. A. A.; Azizan, A.; Ibrahim, N.; Salleh, R. Mohd; Rahman, M. F. Abd
2018-05-01
Pretreatment of lignocellulosic biomass (LCB) to produce biofuel has been conducted using various methods, including physical, chemical, physicochemical, and biological approaches. The bioethanol conversion process typically involves several steps, consisting of pretreatment, hydrolysis, fermentation and separation. In this project, microcrystalline cellulose (MCC) was used in place of LCB, since cellulose is the largest constituent of LCB, for the purpose of investigating the effectiveness of a new pretreatment method using radiation technology. Irradiation at different doses (100 kGy to 1000 kGy) was conducted using electron beam accelerator equipment at Agensi Nuklear Malaysia. Fourier Transform Infrared Spectroscopy (FTIR) and X-Ray Diffraction (XRD) analyses were performed to further understand the effect of the suggested pretreatment step on the composition of MCC. Through this method, namely IRR-LCB, ideal and optimal pretreatment conditions prior to the production of biofuel from LCB may be introduced.
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates to variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.
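As a rough illustration of sampling appearances across registration settings, the sketch below varies a single registration parameter, concatenates the resulting appearances into one feature vector, and trains a linear classifier on toy data; the blur-based "registration" stand-in and all parameter values are assumptions, not the paper's actual nonlinear registration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import LinearSVC

def register(image, smoothness):
    # Stand-in for nonlinear registration to template space; a Gaussian
    # blur plays the role of a registration "smoothness" parameter here.
    return gaussian_filter(image, sigma=smoothness)

def appearance_features(image, params=(0.5, 1.0, 2.0, 4.0)):
    # Sample the appearance manifold by varying the registration parameter
    # and concatenating the resulting appearances into one feature vector.
    return np.concatenate([register(image, p).ravel() for p in params])

rng = np.random.default_rng(0)
images = rng.normal(size=(20, 16, 16))   # toy "raw" images
labels = rng.integers(0, 2, size=20)     # toy diagnostic labels
X = np.stack([appearance_features(im) for im in images])
clf = LinearSVC().fit(X, labels)
```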
A two-step initial mass function: consequences of clustered star formation for binary properties
NASA Astrophysics Data System (ADS)
Durisen, R. H.; Sterzik, M. F.; Pickett, B. K.
2001-06-01
If stars originate in transient bound clusters of moderate size, these clusters will decay due to dynamic interactions in which a hard binary forms and ejects most or all of the other stars. When the cluster members are chosen at random from a reasonable initial mass function (IMF), the resulting binary characteristics do not match current observations. We find a significant improvement in the trends of binary properties from this scenario when an additional constraint is taken into account, namely that there is a distribution of total cluster masses set by the masses of the cloud cores from which the clusters form. Two distinct steps then determine final stellar masses - the choice of a cluster mass and the formation of the individual stars. We refer to this as a "two-step" IMF. Simple statistical arguments are used in this paper to show that a two-step IMF, combined with typical results from dynamic few-body system decay, tends to give better agreement between computed binary characteristics and observations than a one-step mass selection process.
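The two-step selection is easy to emulate with a small Monte Carlo experiment: draw a cluster mass from a core-mass distribution, draw member masses beneath it, and take the two most massive members as the hard binary left after decay. The power-law exponents, mass limits, and five-star cluster size below are illustrative assumptions, not the paper's adopted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_power_law(alpha, lo, hi, size=None):
    # Inverse-transform sampling of p(m) ~ m^-alpha on [lo, hi].
    u = rng.uniform(size=size)
    a = 1.0 - alpha
    return (lo**a + u * (hi**a - lo**a)) ** (1.0 / a)

def decay_cluster(n_stars=5):
    m_cluster = sample_power_law(1.7, 1.0, 50.0)          # step 1: cluster mass
    m = sample_power_law(2.35, 0.08, 10.0, size=n_stars)  # step 2: member masses
    m *= m_cluster / m.sum()                              # members share the core mass
    second, first = np.sort(m)[-2:]                       # hard binary components
    return second / first                                 # binary mass ratio q <= 1

q_values = [decay_cluster() for _ in range(10_000)]
print(np.mean(q_values))  # typical mass ratio of the surviving binaries
```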
Biophysical Interactions within Step-Pool Mountain Streams Following Wildfire
NASA Astrophysics Data System (ADS)
Parker, A.; Chin, A.; O'Dowd, A. P.
2014-12-01
Recovery of riverine ecosystems following disturbance is driven by a variety of interacting processes. Wildfires pose increasing disturbances to riverine landscapes, with rising frequencies and magnitudes owing to warming climates and increased fuel loads. The effects of wildfire include loss of vegetation, elevated runoff and flash floods, erosion and deposition, and changing biological habitats and communities. Understanding process interactions in post-fire landscapes is increasingly urgent for successful management and restoration of affected ecosystems. In steep channels, steps and pools provide prominent habitats for organisms and structural integrity in high energy environments. Step-pools are typically stable, responding to extreme events with recurrence intervals often exceeding 50 years. Once wildfire occurs, however, intensification of post-fire flood events can potentially overpower the inherent stability of these systems, with significant consequences for aquatic life and human well-being downstream. This study examined the short-term response of step-pool streams following the 2012 Waldo Canyon Fire in Colorado. We explored interacting feedbacks among geomorphology, hydrology, and ecology in the post-fire environment. At selected sites with varying burn severity, we established baseline conditions immediately after the fire with channel surveys, biological assessment using benthic macroinvertebrates, sediment analysis including pebble counts, and precipitation gauging. Repeat measurements after major storm events over several years enabled analysis of the interacting feedbacks among post-fire processes. We found that channels able to retain the step-pool structure changed less and facilitated recovery more readily. Step habitats maintained higher percentages of sensitive macroinvertebrate taxa compared to pools through post-fire floods. Sites burned with high severity experienced greater reduction in the percentage of sensitive taxa. The decimation of macroinvertebrates closely coincides with the physical destruction of the step-pool morphology. The role that step-pools play in enhancing the ecological quality of fluvial systems, therefore, provides a key focus for effective management and restoration of aquatic resources following wildfires.
Literacy learning in users of AAC: A neurocognitive perspective.
Van Balkom, Hans; Verhoeven, Ludo
2010-09-01
The understanding of written or printed text or discourse - depicted either in orthographical, graphic-visual or tactile symbols - calls upon both bottom-up word recognition processes and top-down comprehension processes. Different architectures have been proposed to account for literacy processes. Research has shown that the first steps in perceiving, processing and deriving conceptual meaning from words, graphic symbols, manual signs, and co-speech gestures or tactile manual signing and tangible symbols can be seen as identical and collectively (sub)activated. Results from recent brain research and neurolinguistics have revealed new insights into the reading process of typical and atypical readers and may provide verifiable evidence for improved literacy assessment and the validation of early intervention programs for AAC users.
Lin, Johnson; Sharma, Vikas; Milase, Ridwaan; Mbhense, Ntuthuko
2016-06-01
Phenol degradation enhancement of Acinetobacter strain V2 by a step-wise continuous acclimation process was investigated. At the end of 8 months, three stable adapted strains, designated as R, G, and Y, were developed, with sub-lethal phenol concentrations of 800, 1100, and 1400 mg/L, respectively, from the 400 mg/L V2 parent strain. All strains degraded phenol at their sub-lethal level within 24 h; their growth rate increased as the acclimation process continued, and they retained their degradation properties even after storage at -80 °C for more than 3 years. All adapted strains appeared coccoid with an ungranulated surface under the electron microscope, compared to the typically rod-shaped parental strain V2. The adapted Y strain also possessed superior degradation ability against aniline, benzoate, and toluene. This study demonstrated the use of a long-term acclimation process to develop efficient and better pollutant-degrading bacterial strains with potential in industrial and environmental bioremediation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
How smart is your BEOL? productivity improvement through intelligent automation
NASA Astrophysics Data System (ADS)
Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Garetto, Anthony
2017-07-01
The back end of line (BEOL) workflow in the mask shop still has crucial issues throughout all standard steps, which are inspection, disposition, photomask repair and verification of repair success. All involved tools are typically run by highly trained operators or engineers who set up jobs and recipes, execute tasks, analyze data and make decisions based on the results. No matter how experienced the operators are and how well the systems perform, one aspect always limits the productivity and effectiveness of the operation: the human aspect. Human errors can range from seemingly harmless slip-ups to mistakes with serious and direct economic impact, including mask rejects, customer returns and line stops in the wafer fab. Even with the introduction of quality control mechanisms that help to reduce these critical but unavoidable faults, they can never be completely eliminated. Therefore the mask shop BEOL cannot run in the most efficient manner, as unnecessary time and money are spent on processes that remain labor intensive. The best way to address this issue is to automate critical segments of the workflow that are prone to human error. In fact, manufacturing errors can occur at each BEOL step where operators intervene. These processes comprise image evaluation, setting up tool recipes, data handling and all other tedious but required steps. With the help of smart solutions, operators can work more efficiently and dedicate their time to less mundane tasks. Smart solutions connect tools, taking over the data handling and analysis typically performed by operators and engineers. These solutions not only eliminate the human error factor in the manufacturing process but can provide benefits in terms of shorter cycle times, reduced bottlenecks and prediction of an optimized workflow. In addition, such software solutions consist of building blocks that seamlessly integrate applications and allow customers to use tailored solutions. To accommodate the variability and complexity in mask shops today, individual workflows can be supported according to the needs of any particular manufacturing line with respect to necessary measurement and production steps. At the same time, the efficiency of assets is increased by avoiding unneeded cycle time and wasted resources, keeping only the process steps that are crucial for a given technology. In this paper we present details of which areas of the BEOL can benefit most from intelligent automation, what solutions exist, and a quantification of the benefits to a mask shop of full automation, using a back end of line model.
Adaptive color demosaicing and false color removal
NASA Astrophysics Data System (ADS)
Guarnera, Mirko; Messina, Giuseppe; Tomaselli, Valeria
2010-04-01
Color interpolation solutions drastically influence the quality of the whole image generation pipeline, so they must guarantee the rendering of high quality pictures by avoiding typical artifacts such as blurring, zipper effects, and false colors. Moreover, demosaicing should avoid emphasizing typical artifacts of real sensor data, such as noise and the green imbalance effect, which would be further accentuated by the subsequent steps of the processing pipeline. We propose a new adaptive algorithm that decides which interpolation technique to apply to each pixel, according to an analysis of its neighborhood. Edges are effectively interpolated through a directional filtering approach that interpolates the missing colors, selecting the suitable filter depending on edge orientation. Regions close to edges are interpolated through a simpler demosaicing approach. Flat regions are identified and low-pass filtered to eliminate some residual noise and to minimize the annoying green imbalance effect. Finally, an effective false color removal algorithm is used as a postprocessing step to eliminate residual color errors. The experimental results show how sharp edges are preserved, whereas undesired zipper effects are reduced, improving the edge resolution itself and obtaining superior image quality.
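A minimal sketch of the per-pixel decision described above: classify each pixel's neighborhood, then interpolate the missing green sample along the weaker-gradient direction on edges, or with a plain average in flat regions. The threshold value is an illustrative assumption, not the authors' tuning.

```python
import numpy as np

def interpolate_green(bayer, y, x, flat_thresh=8.0):
    """Estimate the missing green sample at (y, x) of a Bayer mosaic."""
    # Horizontal and vertical gradients from the four green neighbours.
    gh = abs(float(bayer[y, x - 1]) - float(bayer[y, x + 1]))
    gv = abs(float(bayer[y - 1, x]) - float(bayer[y + 1, x]))
    if max(gh, gv) < flat_thresh:
        # Flat region: average all four neighbours (a low-pass choice that
        # also tempers noise and green-imbalance artifacts).
        return (float(bayer[y, x - 1]) + bayer[y, x + 1]
                + bayer[y - 1, x] + bayer[y + 1, x]) / 4.0
    # Edge region: interpolate along the direction of the smaller gradient
    # to avoid zipper effects across the edge.
    if gh < gv:
        return (float(bayer[y, x - 1]) + bayer[y, x + 1]) / 2.0
    return (float(bayer[y - 1, x]) + bayer[y + 1, x]) / 2.0

mosaic = np.arange(25, dtype=np.uint8).reshape(5, 5)  # toy 5x5 "Bayer" data
print(interpolate_green(mosaic, 2, 2))
```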
Mushroom-free selective epitaxial growth of Si, SiGe and SiGe:B raised sources and drains
NASA Astrophysics Data System (ADS)
Hartmann, J. M.; Benevent, V.; Barnes, J. P.; Veillerot, M.; Lafond, D.; Damlencourt, J. F.; Morvan, S.; Prévitali, B.; Andrieu, F.; Loubet, N.; Dutartre, D.
2013-05-01
We have evaluated various Cyclic Selective Epitaxial Growth/Etch (CSEGE) processes in order to grow "mushroom-free" Si and SiGe:B Raised Sources and Drains (RSDs) on each side of ultra-short gate length Extra-Thin Silicon-On-Insulator (ET-SOI) transistors. The 750 °C, 20 Torr Si CSEGE process we have developed (5 chlorinated growth steps with four HCl etch steps in-between) yielded excellent crystalline quality, typically 18 nm thick Si RSDs. Growth was conformal along the Si3N4 sidewall spacers, without any poly-Si mushrooms on top of unprotected gates. We have then evaluated on blanket 300 mm Si(001) wafers the feasibility of a 650 °C, 20 Torr SiGe:B CSEGE process (5 chlorinated growth steps with four HCl etch steps in-between, as for Si). As expected, the deposited thickness decreased as the total HCl etch time increased. This went hand in hand with an unforeseen (i) decrease of the mean Ge concentration (from 30% down to 26%) and (ii) increase of the substitutional B concentration (from 2 × 10^20 cm^-3 up to 3 × 10^20 cm^-3). These were due to fluctuations of the Ge concentration and of the atomic B concentration [B] in such layers (a drop of the Ge% and an increase of [B] at etch step locations). Such blanket layers were a bit rougher than layers grown using a single epitaxy step, but nevertheless of excellent crystalline quality. Transposition of our CSEGE process onto patterned ET-SOI wafers did not yield the expected results. HCl etch steps indeed helped in partly or totally removing the poly-SiGe:B mushrooms on top of the gates. This was however at the expense of the crystalline quality and 2D nature of the ~45 nm thick Si0.7Ge0.3:B recessed sources and drains selectively grown on each side of the imperfectly protected poly-Si gates. The only solution we have so far identified that yields fewer mushrooms while preserving the quality of the S/D is to increase the HCl flow during the growth steps.
Plant genome and transcriptome annotations: from misconceptions to simple solutions
Bolger, Marie E; Arsova, Borjana; Usadel, Björn
2018-01-01
Next-generation sequencing has triggered an explosion of available genomic and transcriptomic resources in the plant sciences. Although genome and transcriptome sequencing has become orders of magnitude cheaper and more efficient, the functional annotation process often lags behind. This may be hampered by the lack of a comprehensive enumeration of simple-to-use tools available to the plant researcher. In this comprehensive review, we present (i) typical ontologies to be used in the plant sciences, (ii) useful databases and resources used for functional annotation, (iii) what to expect from an annotated plant genome, (iv) an automated annotation pipeline and (v) a recipe and reference chart outlining typical steps used to annotate plant genomes/transcriptomes using publicly available resources. PMID:28062412
Dizon-Maspat, Jemelle; Bourret, Justin; D'Agostini, Anna; Li, Feng
2012-04-01
As the therapeutic monoclonal antibody (mAb) market continues to grow, optimizing production processes is becoming more critical to improving efficiencies and reducing cost-of-goods in large-scale production. With the recent trend of increasing cell culture titers from upstream process improvements, downstream capacity has become the bottleneck in many existing manufacturing facilities. Single Pass Tangential Flow Filtration (SPTFF) is an emerging technology that is potentially useful for debottlenecking downstream capacity, especially when the pool tank size is a limiting factor. It can be integrated as part of an existing purification process, after a column chromatography step or a filtration step, without introducing a new unit operation. In this study, SPTFF technology was systematically evaluated for reducing process intermediate volumes by factors of 2× to 10× with multiple mAbs, and the impact of SPTFF on product quality and process yield was analyzed. Finally, its potential fit into the typical 3-column industry platform antibody purification process and its implementation in a commercial-scale manufacturing facility were also evaluated. Our data indicate that using SPTFF to concentrate protein pools is a simple, flexible, and robust operation, which can be implemented at various scales to improve antibody purification process capacity. Copyright © 2011 Wiley Periodicals, Inc.
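For intuition on what a 2× to 10× in-line volume reduction means for a pool tank, a one-line mass balance suffices. The function below is a back-of-the-envelope sketch; the 99% step yield is an assumed figure, not a result from this study.

```python
def sptff_pool(volume_l, conc_g_per_l, cf, yield_frac=0.99):
    # cf: volumetric concentration factor of the single pass.
    # yield_frac models the small product loss typically seen across a
    # membrane step (assumed value, not from the study).
    out_volume = volume_l / cf
    out_conc = conc_g_per_l * cf * yield_frac
    return out_volume, out_conc

# e.g. a 2,000 L pool at 2 g/L concentrated 10x -> 200 L at ~19.8 g/L
print(sptff_pool(2000.0, 2.0, 10.0))
```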
Physical modeling of stepped spillways
USDA-ARS?s Scientific Manuscript database
Stepped spillways applied to embankment dams are becoming popular for addressing the rehabilitation of aging watershed dams, especially those situated in the urban landscape. Stepped spillways are typically placed over the existing embankment, which provides for minimal disturbance to the original ...
NASA Astrophysics Data System (ADS)
Yoo, Seung Hwa; Joh, Han-Ik; Lee, Sungho
2017-04-01
Porous carbon nanofibers (PCNFs) with CNF branches (PCNF/bCNF) were synthesized by a simple heat treatment method. Conventional methods to synthesize this unique structure usually follow a typical route consisting of CNF preparation, catalyst deposition, and secondary CNF growth. In contrast, our method utilized a one-step carbonization process of polymer nanofibers, which were electrospun from a one-pot solution consisting of polyacrylonitrile, polystyrene (PS), and iron acetylacetonate. Various structures of PCNF/bCNF were synthesized by changing the solution composition and the molecular weight of PS. It was verified that the content and molecular weight of PS were critical for the growth of catalyst particles and the subsequent growth of CNF branches. The morphology, catalyst phase, and carbon structure of PCNF/bCNF were analyzed at different temperature steps during carbonization. It was found that pores were generated by the evaporation of PS and that catalyst particles formed on the surface of the PCNF at 700 °C. The gases originating from the evaporation of PS acted as a carbon source for the growth of CNF branches, which started at 900 °C. Finally, when the carbonization process was finished at 1200 °C, uniform and abundant CNF branches had formed on the surface of the PCNF.
Ultrafast Digital Printing toward 4D Shape Changing Materials.
Huang, Limei; Jiang, Ruiqi; Wu, Jingjun; Song, Jizhou; Bai, Hao; Li, Bogeng; Zhao, Qian; Xie, Tao
2017-02-01
Ultrafast 4D printing (<30 s) of responsive polymers is reported. Visible-light-triggered polymerization of commercial monomers digitally defines the stress distribution in a 2D polymer film. Releasing the stress after the printing converts the structure into 3D. An additional dimension can be incorporated by choosing the printing precursors. The process overcomes the speed-limiting steps of typical 3D (4D) printing. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ADVANCED SULFUR CONTROL CONCEPTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apostolos A. Nikolopoulos; Santosh K. Gangwal; William J. McMichael
Conventional sulfur removal in integrated gasification combined cycle (IGCC) power plants involves numerous steps: COS (carbonyl sulfide) hydrolysis, amine scrubbing/regeneration, Claus process, and tail-gas treatment. Advanced sulfur removal in IGCC systems typically involves the use of zinc oxide-based sorbents. The sulfided sorbent is regenerated using dilute air to produce a dilute SO2 (sulfur dioxide) tail gas. Under previous contracts the highly effective first generation Direct Sulfur Recovery Process (DSRP) for catalytic reduction of this SO2 tail gas to elemental sulfur was developed. This process is currently undergoing field-testing. In this project, advanced concepts were evaluated to reduce the number of unit operations in sulfur removal and recovery. Substantial effort was directed towards developing sorbents that could be directly regenerated to elemental sulfur in an Advanced Hot Gas Process (AHGP). Development of this process has been described in detail in Appendices A-F. RTI began the development of the Single-step Sulfur Recovery Process (SSRP) to eliminate the use of sorbents and multiple reactors in sulfur removal and recovery. This process showed promising preliminary results and thus further process development of AHGP was abandoned in favor of SSRP. The SSRP is a direct Claus process that consists of injecting SO2 directly into the quenched coal gas from a coal gasifier, and reacting the H2S-SO2 mixture over a selective catalyst to both remove and recover sulfur in a single step. The process is conducted at gasifier pressure and 125 to 160 °C. The proposed commercial embodiment of the SSRP involves a liquid phase of molten sulfur with dispersed catalyst in a slurry bubble-column reactor (SBCR).
ERIC Educational Resources Information Center
Srisinghasongkram, Pornchada; Pruksananonda, Chandhita; Chonchaiya, Weerasak
2016-01-01
This study aimed to validate the use of two-step Modified Checklist for Autism in Toddlers (M-CHAT) screening adapted for a Thai population. Our participants included both high-risk children with language delay (N = 109) and low-risk children with typical development (N = 732). Compared with the critical scoring criteria, the total scoring method…
One-Step Synthesis of Monodisperse In-Doped ZnO Nanocrystals
NASA Astrophysics Data System (ADS)
Wang, Qing Ling; Yang, Ye Feng; He, Hai Ping; Chen, Dong Dong; Ye, Zhi Zhen; Jin, Yi Zheng
2010-05-01
A method for the synthesis of high-quality indium-doped zinc oxide (In-doped ZnO) nanocrystals was developed using a one-step ester elimination reaction based on alcoholysis of metal carboxylate salts. The resulting nearly monodisperse nanocrystals are well crystallized, with a crystal structure typically identical to that of wurtzite ZnO. Structural, optical, and elemental analyses of the products indicate the incorporation of indium into the host ZnO lattice. Individual nanocrystals with cubic structures were observed in the 5% In-ZnO reaction, due to the relatively high reactivity of the indium precursors. Our study provides further insights into the growth of doped oxide nanocrystals and deepens the understanding of the doping process in colloidal nanocrystal syntheses.
Kallscheuer, Nicolai; Polen, Tino; Bott, Michael; Marienhagen, Jan
2017-07-01
β-Oxidation is the ubiquitous metabolic strategy to break down fatty acids. In the course of this four-step process, two carbon atoms are liberated per cycle from the fatty acid chain in the form of acetyl-CoA. However, typical β-oxidative strategies are not restricted to monocarboxylic (fatty) acid degradation only, but can also be involved in the utilization of aromatic compounds, amino acids and dicarboxylic acids. Each enzymatic step of a typical β-oxidation cycle is reversible, offering the possibility to also take advantage of reversed metabolic pathways for applied purposes. In such cases, 3-oxoacyl-CoA thiolases, which catalyze the final chain-shortening step in the catabolic direction, mediate the condensation of an acyl-CoA starter molecule with acetyl-CoA in the anabolic direction. Subsequently, the carbonyl-group at C3 is stepwise reduced and dehydrated yielding a chain-elongated product. In the last years, several β-oxidation pathways have been studied in detail and reversal of these pathways already proved to be a promising strategy for the production of chemicals and polymer building blocks in several industrially relevant microorganisms. This review covers recent advancements in this field and discusses constraints and bottlenecks of this metabolic strategy in comparison to alternative production pathways. Copyright © 2017 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Winters, Andrew C.
Careful observational work has demonstrated that the tropopause is typically characterized by a three-step pole-to-equator structure, with each break between steps in the tropopause height associated with a jet stream. While the two jet streams, the polar and subtropical jets, typically occupy different latitude bands, their separation can occasionally vanish, resulting in a vertical superposition of the two jets. A cursory examination of a number of historical and recent high-impact weather events over North America and the North Atlantic indicates that superposed jets can be an important component of their evolution. Consequently, this dissertation examines two recent jet superposition cases, the 18--20 December 2009 Mid-Atlantic Blizzard and the 1--3 May 2010 Nashville Flood, in an effort (1) to determine the specific influence that a superposed jet can have on the development of a high-impact weather event and (2) to illuminate the processes that facilitated the production of a superposition in each case. An examination of these cases from a basic-state variable and PV inversion perspective demonstrates that elements of both the remote and local synoptic environment are important to consider while diagnosing the development of a jet superposition. Specifically, the process of jet superposition begins with the remote production of a cyclonic (anticyclonic) tropopause disturbance at high (low) latitudes. The cyclonic circulation typically originates at polar latitudes, while organized tropical convection can encourage the development of an anticyclonic circulation anomaly within the tropical upper-troposphere. The concurrent advection of both anomalies towards middle latitudes subsequently allows their individual circulations to laterally displace the location of the individual tropopause breaks. Once the two circulation anomalies position the polar and subtropical tropopause breaks in close proximity to one another, elements within the local environment, such as proximate convection or transverse vertical circulations, can work to further deform the tropopause and to aid in the production of the two-step tropopause structure characteristic of a superposed jet. The analysis also demonstrates that the intensified transverse vertical circulation that accompanies a superposed jet serves as the primary mechanism through which it can influence the evolution of a high-impact weather event.
Performance of Adsorption-Based CO2 Acquisition Hardware for Mars ISRU
NASA Technical Reports Server (NTRS)
Finn, John E.; Mulloth, Lila M.; Borchers, Bruce A.; Luna, Bernadette (Technical Monitor)
2000-01-01
Chemical processing of the dusty, low-pressure Martian atmosphere typically requires conditioning and compression of the gases as first steps. A temperature-swing adsorption process can perform these tasks using nearly solid-state hardware and with relatively low power consumption compared to alternative processes. In addition, the process can separate the atmospheric constituents, producing both pressurized CO2 and a buffer gas mixture of nitrogen and argon. To date we have developed and tested adsorption compressors at scales appropriate for the near-term robotic missions that will lead the way to ISRU-based human exploration missions. In this talk we describe the characteristics, testing, and performance of these devices. We also discuss scale-up issues associated with meeting the processing demands of sample return and human missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biagi, C J; Uman, M A
2011-12-13
There are relatively few reports in the literature focusing on negative laboratory leaders. Most of the reports focus exclusively on the simpler positive laboratory leader that is more commonly encountered in high voltage engineering [Gorin et al., 1976; Les Renardieres Group, 1977; Gallimberti, 1979; Domens et al., 1994; Bazelyan and Raizer 1998]. The physics of the long, negative leader and its positive counterpart are similar; the two differ primarily in their extension mechanisms [Bazelyan and Raizer, 1998]. Long negative sparks extend primarily by an intermittent process termed a 'step' that requires the development of secondary leader channels separated in space from the primary leader channel. Long positive sparks typically extend continuously, although, under proper conditions, their extension can be temporarily halted and begun again, and this is sometimes viewed as a stepping process. However, it is emphasized that the nature of positive leader stepping is not like that of negative leader stepping. There are several key observational studies of the propagation of long, negative-polarity laboratory sparks in air that have aided in the understanding of the stepping mechanisms exhibited by such sparks [e.g., Gorin et al., 1976; Les Renardieres Group, 1981; Ortega et al., 1994; Reess et al., 1995; Bazelyan and Raizer, 1998; Gallimberti et al., 2002]. These reports are reviewed below in Section 2, with emphasis placed on the stepping mechanism (the space stem, pilot, and space leader). Then, in Section 3, reports pertaining to modeling of long negative leaders are summarized.
Milojevich, H; Lukowski, A
2016-01-01
Whereas research has indicated that children with Down syndrome (DS) imitate demonstrated actions over short delays, it is presently unknown whether children with DS recall information over lengthy delays at levels comparable with typically developing (TD) children matched on developmental age. In the present research, 10 children with DS and 10 TD children participated in a two-session study to examine basic processes associated with hippocampus-dependent recall memory. At the first session, the researcher demonstrated how to complete a three-step action sequence with novel stimuli; immediate imitation was permitted as an index of encoding. At the second session, recall memory was assessed for previously modelled sequences; children were also presented with two novel three-step control sequences. The results indicated that group differences were not apparent in the encoding of the events or the forgetting of information over time. Group differences were also not observed when considering the recall of individual target actions at the 1-month delay, although TD children produced more target actions overall at the second session relative to children with DS. Group differences were found when considering memory for temporal order information, such that TD children evidenced recall relative to novel control sequences, whereas children with DS did not. These findings suggest that children with DS may have difficulty with mnemonic processes associated with consolidation/storage and/or retrieval processes relative to TD children. © 2015 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Peterson, Candida C.; Wellman, Henry M.; Slaughter, Virginia
2013-01-01
Children aged 3 to 12 years (n=184) with typical development, deafness, autism or Asperger Syndrome took a series of theory-of-mind (ToM) tasks to confirm and extend previous developmental scaling evidence. A new sarcasm task, in the format of Wellman and Liu’s (2004) 5-step ToM scale, added a statistically reliable sixth step to the scale for all diagnostic groups. A key previous finding, divergence in task sequencing for children with autism, was confirmed. Comparisons among diagnostic groups, controlling age and language ability, showed that typical developers mastered the six ToM steps ahead of each of the three disabled groups, with implications for ToM theories. The final (sarcasm) task challenged even nondisabled 9-year-olds, demonstrating the new scale’s sensitivity to post-preschool ToM growth. PMID:22304467
One-Step Solvent Evaporation-Assisted 3D Printing of Piezoelectric PVDF Nanocomposite Structures.
Bodkhe, Sampada; Turcot, Gabrielle; Gosselin, Frederick P; Therriault, Daniel
2017-06-21
Development of a 3D-printable material system possessing inherent piezoelectric properties, to fabricate integrable sensors in a single-step printing process without poling, is of importance to the creation of a wide variety of smart structures. Here, we study the effect of the addition of barium titanate nanoparticles in nucleating the piezoelectric β-polymorph in 3D-printable polyvinylidene fluoride (PVDF), and the fabrication of layer-by-layer and self-supporting piezoelectric structures on a micro- to millimeter scale by solvent evaporation-assisted 3D printing at room temperature. The nanocomposite formulation obtained after a comprehensive investigation of composition and processing techniques possesses a piezoelectric coefficient, d31, of 18 pC N^-1, which is comparable to that of typical poled and stretched commercial PVDF film sensors. A 3D contact sensor that generates up to 4 V upon gentle finger taps demonstrates the efficacy of the fabrication technique. Our one-step 3D printing of piezoelectric nanocomposites can form ready-to-use, complex-shaped, flexible, and lightweight piezoelectric devices. When combined with other 3D-printable materials, they could serve as stand-alone or embedded sensors in aerospace, biomedicine, and robotic applications.
Detailed study of scratch drive actuator characteristics using high-speed imaging
NASA Astrophysics Data System (ADS)
Li, Lijie; Brown, James G.; Uttamchandani, Deepak G.
2001-10-01
Microactuators are one of the key components in MEMS and microsystems technology, and various designs have been realized through different fabrication processes. One type of microactuator commonly used is the scratch drive actuator (SDA), which is frequently fabricated by surface micromachining processes. An experimental investigation has been conducted on the characteristics of SDAs fabricated using the Cronos Microsystems MUMPs process. The motivation is to compare the response of SDAs located on the same die and SDAs located on different dies from the same fabrication batch. A high-speed imaging camera has been used to precisely determine important SDA characteristics such as step size, velocity, maximum velocity, and acceleration over long travel distances. These measurements are important from a repeatability point of view, and in order to fully exploit the potential of the SDA as a precise positioning mechanism. 2- and 3-stage SDAs have been designed and fabricated for these experiments. Typical step sizes varying from 7 nm at a driving voltage of 60 V to 23 nm at 290 V have been obtained.
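Step size and drive frequency directly set the average travel speed, which is one reason nanometer-scale step repeatability matters for precise positioning. The conversion below uses the reported step sizes with an assumed 1 kHz drive frequency for illustration; the frequency is not taken from the study.

```python
def sda_velocity_um_per_s(step_nm, drive_hz):
    # nm per step * steps per second -> micrometers per second
    return step_nm * 1e-3 * drive_hz

for step_nm, volts in ((7, 60), (23, 290)):
    # At an assumed 1 kHz drive: 7 um/s at 60 V and 23 um/s at 290 V.
    print(volts, "V:", sda_velocity_um_per_s(step_nm, 1000.0), "um/s")
```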
NASA Astrophysics Data System (ADS)
Ngo, Chi-Vinh; Chun, Doo-Man
2017-07-01
Recently, the fabrication of superhydrophobic metallic surfaces by means of pulsed laser texturing has been developed. After laser texturing, samples are typically chemically coated or aged in ambient air for a relatively long time of several weeks to achieve superhydrophobicity. To accelerate the wettability transition from hydrophilicity to superhydrophobicity without the use of additional chemical treatment, a simple annealing post process has been developed. In the present work, grid patterns were first fabricated on stainless steel by a nanosecond pulsed laser, then an additional low-temperature annealing post process at 100 °C was applied. The effect of 100-500 μm step size of the textured grid upon the wettability transition time was also investigated. The proposed post process reduced the transition time from a couple of months to within several hours. All samples showed superhydrophobicity with contact angles greater than 160° and sliding angles smaller than 10° except samples with 500 μm step size, and could be applied in several potential applications such as self-cleaning and control of water adhesion.
A new active solder for joining electronic components
DOE Office of Scientific and Technical Information (OSTI.GOV)
SMITH,RONALD W.; VIANCO,PAUL T.; HERNANDEZ,CYNTHIA L.
Electronic components and micro-sensors utilize ceramic substrates, copper and aluminum interconnects, and silicon. Joining these combinations requires pre-metallization so that solders with fluxes can wet such combinations of metals and ceramics. The paper will present a new solder alloy that can bond metals, ceramics and composites. The alloy directly wets and bonds in air without the use of flux or premetallized layers. The paper will present typical processing steps and joint microstructures in copper, aluminum, aluminum oxide, aluminum nitride, and silicon joints.
Lightning flashes triggered in altitude by the rocket and wire technique
NASA Technical Reports Server (NTRS)
Laroche, P.; Bondiou, A.; Berard, A. Eybert; Barret, L.; Berlandis, J. P.; Terrier, G.; Jafferis, W.
1989-01-01
Electrical measurements were conducted in 1987 and 1988 on streamer and leader discharges occurring during the first stages of a triggered flash. This paper describes the pulsing phenomenon observed at positive leader onset (typical pulse interval 25 μs), and it is shown that the same process occurs in the case of the ignition of a flash triggered in altitude; with a wire several hundred meters long, the positive leader propagates alone for several ms before the ignition of the downward negative stepped leader.
The attentive brain: insights from developmental cognitive neuroscience.
Amso, Dima; Scerif, Gaia
2015-10-01
Visual attention functions as a filter to select environmental information for learning and memory, making it the first step in the eventual cascade of thought and action systems. Here, we review studies of typical and atypical visual attention development and explain how they offer insights into the mechanisms of adult visual attention. We detail interactions between visual processing and visual attention, as well as the contribution of visual attention to memory. Finally, we discuss genetic mechanisms underlying attention disorders and how attention may be modified by training.
2016-06-08
forces. Plasmas in hypersonic and astrophysical flows are among the most typical examples of such conductive fluids. Though MHD models are a low... remain powerful tools in helping researchers to understand the complex physical processes in the geospace environment. For example, the ideal MHD... vertex level within each physical time step. For this reason, and because of the method's DG ingredient, the method was named the space-time discontinuous Galerkin
Applying Planning Algorithms to Argue in Cooperative Work
NASA Astrophysics Data System (ADS)
Monteserin, Ariel; Schiaffino, Silvia; Amandi, Analía
Negotiation is typically utilized in cooperative work scenarios for solving conflicts. Anticipating possible arguments in this negotiation step represents a key factor, since we can make decisions about our participation in the cooperation process. In this context, we present a novel application of planning algorithms to argument generation, where the actions of a plan represent the arguments that a person might use during the argumentation process. In this way, we can plan how to persuade the other participants in cooperative work into reaching an expected agreement in terms of our interests. This approach gives us an advantage, since we can test anticipated argumentative solutions in advance.
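A toy forward-search planner in this spirit: each "action" is an argument with preconditions (beliefs the counterpart must already hold) and effects (beliefs it instils), and a plan is the argument sequence that reaches the agreement goal. The domain facts and argument names are invented for illustration, not taken from the paper.

```python
from collections import deque

# argument name -> (preconditions, effects) over the counterpart's beliefs
ARGUMENTS = {
    "cite_deadline": ({"task_pending"}, {"urgency_accepted"}),
    "offer_help":    ({"urgency_accepted"}, {"goodwill"}),
    "propose_split": ({"goodwill"}, {"agreement"}),
}

def plan_arguments(initial, goal):
    # Breadth-first search over belief states; returns the argument list
    # that first achieves the goal, or None if no sequence works.
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, (pre, eff) in ARGUMENTS.items():
            if pre <= state:
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

print(plan_arguments({"task_pending"}, {"agreement"}))
```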
Reversible Silylene Insertion Reactions into Si-H and P-H σ-Bonds at Room Temperature.
Rodriguez, Ricardo; Contie, Yohan; Nougué, Raphael; Baceiredo, Antoine; Saffon-Merceron, Nathalie; Sotiropoulos, Jean-Marc; Kato, Tsuyoshi
2016-11-07
Phosphine-stabilized silylenes react with silanes and a phosphine by silylene insertion into E-H σ-bonds (E=Si,P) at room temperature to give the corresponding silanes. Of special interest, the process occurs reversibly at room temperature. These results demonstrate that both the oxidative addition (typical reaction for transient silylenes) and the reductive elimination processes can proceed at the silicon center under mild reaction conditions. DFT calculations provide insight into the importance of the coordination of the silicon center to achieve the reductive elimination step. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Peabody, Hume L.
2017-01-01
This presentation is meant to be an overview of the model building process. It is based on typical techniques (Monte Carlo ray tracing for radiation exchange; lumped-parameter, finite-difference methods for the thermal solution) used by the aerospace industry. This is not intended to be a "How to Use ThermalDesktop" course; it is intended to be a "How to Build Thermal Models" course, and the techniques will be demonstrated using the capabilities of ThermalDesktop (TD). Other codes may or may not have similar capabilities. The general model building process can be broken into four top-level steps: 1. Build Model; 2. Check Model; 3. Execute Model; 4. Verify Results.
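To ground the "Execute Model" step, here is a minimal sketch of the lumped-parameter, finite-difference update such solvers march through time: explicit forward-Euler integration of a two-node network with invented capacitance, conductor, and load values. It illustrates the general technique only, not ThermalDesktop's actual solver.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def step(T, C, G, rad_area_eps, T_sink, Q, dt):
    # T: node temperatures (K); C: capacitances (J/K); G: symmetric
    # conductor matrix (W/K, zero diagonal); Q: applied heat loads (W).
    cond = G @ T - G.sum(axis=1) * T                  # conduction exchange
    rad = rad_area_eps * SIGMA * (T_sink**4 - T**4)   # radiation to sink
    return T + dt * (cond + rad + Q) / C

T = np.array([300.0, 280.0])            # two-node model, initial temps
C = np.array([500.0, 800.0])            # nodal capacitances
G = np.array([[0.0, 0.5], [0.5, 0.0]])  # one 0.5 W/K conductor
for _ in range(1000):                   # march 1000 explicit 1 s steps
    T = step(T, C, G, rad_area_eps=np.array([0.01, 0.02]),
             T_sink=4.0, Q=np.array([5.0, 0.0]), dt=1.0)
print(T)
```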
Growth and Characterization of Pyrite Thin Films for Photovoltaic Applications
NASA Astrophysics Data System (ADS)
Wertheim, Alex
A series of pyrite thin films was synthesized using a novel sequential evaporation technique to study the effects of substrate temperature on the deposition rate and microstructure of the deposited material. Pyrite was deposited in a monolayer-by-monolayer fashion using sequential evaporation of Fe under high vacuum, followed by sulfidation at high S pressures (typically > 1 mTorr to 1 Torr). Thin films were synthesized using two different growth processes: a one-step process in which a constant growth temperature is maintained throughout growth, and a three-step process in which an initial low-temperature seed layer is deposited, followed by a high-temperature layer, and then finished with a low-temperature capping layer. The methods used to analyze the properties of the films included Glancing Angle X-Ray Diffraction (GAXRD), Rutherford Back-scattering Spectroscopy (RBS), Transmission Electron Microscopy (TEM), Secondary Ion Mass Spectroscopy (SIMS), 2-point IV measurements, and Hall effect measurements. Our results show that the crystallinity of the pyrite thin film improves and the grain size increases with increasing substrate temperature. The sticking coefficient of Fe was found to increase with increasing growth temperature, indicating that Fe incorporation into the growing film is a thermally activated process.
Influence of phase inversion on the formation and stability of one-step multiple emulsions.
Morais, Jacqueline M; Rocha-Filho, Pedro A; Burgess, Diane J
2009-07-21
A novel method for the preparation of water-in-oil-in-micelle-containing water (W/O/W(m)) multiple emulsions using the one-step emulsification method is reported. These multiple emulsions were normal (not temporary) and stable over a 60 day test period. Previously reported multiple emulsions made by the one-step method were abnormal systems that formed at the inversion point of a simple emulsion (where there is an incompatibility between the Ostwald and Bancroft theories; typically these are O/W/O systems). Pseudoternary phase diagrams and bidimensional process-composition (phase inversion) maps were constructed to assist in process and composition optimization. The surfactants used were PEG40 hydrogenated castor oil and sorbitan oleate, and mineral and vegetable oils were investigated. Physicochemical characterization studies showed experimentally, for the first time, the significance of the ultralow surface tension point in multiple emulsion formation by one-step phase inversion processes. Although the significance of ultralow surface tension has been speculated upon previously, to the best of our knowledge this is the first experimental confirmation. The multiple emulsion system reported here was dependent not only upon the emulsification temperature, but also upon the component ratios; therefore both the emulsion phase inversion and the phase inversion temperature were considered to fully explain their formation. Accordingly, it is hypothesized that the formation of these normal multiple emulsions is not a result of a temporary incompatibility (at the inversion point) during simple emulsion preparation, as previously reported. Rather, these normal W/O/W(m) emulsions are a result of the simultaneous occurrence of catastrophic and transitional phase inversion processes. The formation of the primary emulsions (W/O) is in accordance with the Ostwald theory, and the formation of the multiple emulsions (W/O/W(m)) is in agreement with the Bancroft theory.
Recombinant Passenger Proteins Can Be Conveniently Purified by One-Step Affinity Chromatography
Wang, Hua-zhen; Chu, Zhi-zhan; Chen, Chang-chao; Cao, Ao-cheng; Tong, Xin; Ouyang, Can-bin; Yuan, Qi-hang; Wang, Mi-nan; Wu, Zhong-kun; Wang, Hai-hong; Wang, Sheng-bin
2015-01-01
Fusion tags are among the best available tools for enhancing the solubility or improving the expression level of recombinant proteins in Escherichia coli. Typically, two consecutive affinity purification steps are necessitated for the purification of passenger proteins. As a fusion tag, acyl carrier protein (ACP) could greatly increase the soluble expression level of Glucokinase (GlcK), α-Amylase (Amy) and GFP. When the fusion proteins ACP-G2-GlcK-Histag and ACP-G2-Amy-Histag, in which a protease TEV recognition site was inserted between the fusion tag and the passenger protein, were each coexpressed with protease TEV in E. coli, efficient intracellular processing of the fusion proteins was achieved. The resulting passenger proteins GlcK-Histag and Amy-Histag accumulated predominantly in a soluble form, and could be conveniently purified by one-step Ni-chelating chromatography. However, the fusion protein ACP-GFP-Histag was processed incompletely by the protease TEV coexpressed in vivo, and a large portion of the resulting target protein GFP-Histag aggregated in insoluble form, indicating that intracellular processing may affect the solubility of the cleaved passenger protein. In this context, the soluble fusion protein ACP-GFP-Histag, contained in the supernatant of the E. coli cell lysate, was directly subjected to cleavage in vitro by mixing it with the clarified cell lysate of E. coli overexpressing protease TEV. Consequently, the resulting target protein GFP-Histag accumulated predominantly in a soluble form, and could be purified conveniently by one-step Ni-chelating chromatography. The approaches presented here greatly simplify the purification process of passenger proteins, and eliminate the use of large amounts of pure site-specific proteases. PMID:26641240
ERIC Educational Resources Information Center
Peterson, Candida C.; Wellman, Henry M.; Slaughter, Virginia
2012-01-01
Children aged 3-12 years (n = 184) with typical development, deafness, autism, or Asperger syndrome took a series of theory-of-mind (ToM) tasks to confirm and extend previous developmental scaling evidence. A new sarcasm task, in the format of H. M. Wellman and D. Liu's (2004) 5-step ToM Scale, added a statistically reliable 6th step to the scale…
Applying Machine Learning to Star Cluster Classification
NASA Astrophysics Data System (ADS)
Fedorenko, Kristina; Grasha, Kathryn; Calzetti, Daniela; Mahadevan, Sridhar
2016-01-01
Catalogs describing populations of star clusters are essential in investigating a range of important issues, from star formation to galaxy evolution. Star cluster catalogs are typically created in a two-step process: in the first step, a catalog of sources is automatically produced; in the second step, each of the extracted sources is visually inspected by 3-to-5 human classifiers and assigned a category. Classification by humans is labor-intensive and time consuming; it thus creates a bottleneck and substantially slows down progress in star cluster research. We seek to automate the process of labeling star clusters (the second step) by applying supervised machine learning techniques. This will provide a fast, objective, and reproducible classification. Our data are HST (WFC3 and ACS) images of galaxies in the distance range of 3.5-12 Mpc, with a few thousand star clusters already classified by humans as part of the LEGUS (Legacy ExtraGalactic UV Survey) project. The classification is based on 4 labels (Class 1 - symmetric, compact cluster; Class 2 - concentrated object with some degree of asymmetry; Class 3 - multiple peak system, diffuse; and Class 4 - spurious detection). We start by looking at basic machine learning methods such as decision trees. We then proceed to evaluate the performance of more advanced techniques, focusing on convolutional neural networks and other Deep Learning methods. We analyze the results and suggest several directions for further improvement.
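As a minimal illustration of the supervised baseline described above, the sketch below trains a decision tree on per-source feature vectors with four LEGUS-style class labels. The feature set, sizes, and data here are placeholder assumptions, not the actual LEGUS catalog or pipeline.

```python
# Baseline classification step: decision tree on per-source features.
# Features (e.g., concentration index, magnitudes, pixel statistics) and
# labels are random placeholders standing in for the human-classified catalog.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))          # placeholder per-source feature vectors
y = rng.integers(1, 5, size=2000)       # placeholder labels: classes 1-4

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

A convolutional network, as evaluated in the abstract, would replace the hand-built feature vectors with the image cutouts themselves.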
NASA Astrophysics Data System (ADS)
Haag, Justin M.; Van Gorp, Byron E.; Mouroulis, Pantazis; Thompson, David R.
2017-09-01
The airborne Portable Remote Imaging Spectrometer (PRISM) instrument is based on a fast (F/1.8) Dyson spectrometer operating at 350-1050 nm and a two-mirror telescope combined with a Teledyne HyViSI 6604A detector array. Raw PRISM data contain electronic and optical artifacts that must be removed prior to radiometric calibration. We provide an overview of the process transforming raw digital numbers to calibrated radiance values. Electronic panel artifacts are first corrected using empirical relationships developed from laboratory data. The instrument spectral response functions (SRF) are reconstructed using a measurement-based optimization technique. Removal of SRF effects from the data improves retrieval of true spectra, particularly in the typically low-signal near-ultraviolet and near-infrared regions. As a final step, radiometric calibration is performed using corrected measurements of an object of known radiance. Implementation of the complete calibration procedure maximizes data quality in preparation for subsequent processing steps, such as atmospheric removal and spectral signature classification.
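As an illustration of the final radiometric calibration step described above, the sketch below converts corrected digital numbers to radiance using a dark frame and a gain map derived from a known-radiance target. The array names and values are hypothetical; the PRISM-specific panel-artifact and SRF corrections are not reproduced here.

```python
# Illustrative DN-to-radiance conversion under assumed calibration products.
import numpy as np

def calibrate(raw_dn, dark_dn, gain):
    """Convert corrected digital numbers to radiance.

    raw_dn, dark_dn : (rows, cols) arrays of digital numbers
    gain            : (rows, cols) radiance per DN, from a known-radiance target
    """
    return (raw_dn - dark_dn) * gain

frame = np.full((480, 640), 1200.0)   # placeholder corrected frame
dark = np.full((480, 640), 100.0)     # placeholder dark frame
gain = np.full((480, 640), 0.05)      # placeholder radiance per DN
radiance = calibrate(frame, dark, gain)
```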
NASA Technical Reports Server (NTRS)
Morris, Robert A.
1990-01-01
The emphasis is on defining a set of communicating processes for intelligent spacecraft secondary power distribution and control. The computer hardware and software implementation platform for this work is that of the ADEPTS project at the Johnson Space Center (JSC). The electrical power system design which was used as the basis for this research is that of Space Station Freedom, although the functionality of the processes defined here generalizes to any permanent manned space power control application. First, the Space Station Electrical Power Subsystem (EPS) hardware to be monitored is described, followed by a set of scenarios describing typical monitor and control activity. Then, the parallel distributed problem solving approach to knowledge engineering is introduced. There follows a two-step presentation of the intelligent software design for secondary power control. The first step decomposes the problem of monitoring and control into three primary functions. Each of the primary functions is described in detail. Suggestions for refinements and embellishments in design specifications are given.
Faba, Laura; Díaz, Eva; Ordóñez, Salvador
2014-10-01
Integrating reaction steps is of key interest in the development of processes for transforming lignocellulosic materials into drop-in fuels. We propose a procedure for performing the aldol condensation (the reaction between furfural and acetone is taken as a model reaction) and the total hydrodeoxygenation of the resulting condensation adducts in one step, yielding n-alkanes. Different combinations of catalysts (bifunctional catalysts or mechanical mixtures), reaction conditions, and solvents (aqueous and organic) have been tested for performing these reactions in an isothermal batch reactor. The results suggest that the use of bifunctional catalysts and an aqueous phase leads to an effective integration of both reactions. Selectivities to n-alkanes higher than 50% were thus obtained using this catalyst at typical hydrogenation conditions (T=493 K, P=4.5 MPa, 24 h reaction time). The use of organic solvents, carbonaceous supports, or mechanical mixtures of monofunctional catalysts leads to poorer results owing to side effects, mainly hydrogenation of reactants and adsorption processes. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Effects of walking speed on the step-by-step control of step width.
Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C
2018-02-08
Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center of mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step width variance predicted by the mechanical state of the pelvis throughout a step (calculated as the R^2 magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments of step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, which may encourage slower walking among clinical populations with limited sensorimotor control. Published by Elsevier Ltd.
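A minimal sketch of the stabilization metric described above: step width is regressed on the mechanical state of the pelvis at a given phase of the step, and the R^2 of the fit gives the proportion of step width variance explained. The simulated data and coefficients are placeholders, not the study's measurements.

```python
# Step-width variance explained by pelvis state, via multiple linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_steps = 300
pelvis_state = rng.normal(size=(n_steps, 2))   # e.g. ML pelvis position, velocity
true_coef = np.array([0.4, 0.8])               # placeholder relationship
step_width = pelvis_state @ true_coef + 0.3 * rng.normal(size=n_steps)

model = LinearRegression().fit(pelvis_state, step_width)
r2 = model.score(pelvis_state, step_width)     # variance explained at this phase
print(f"R^2 = {r2:.2f}")
```

Repeating the fit with pelvis states sampled at successive phases of the step would reproduce the within-step rise in predictive power reported in the abstract.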
Texture Feature Extraction and Classification for Iris Diagnosis
NASA Astrophysics Data System (ADS)
Ma, Lin; Li, Naimin
Applying computer-aided techniques in iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis, and disease classification. For the pre-processing, a 2-step iris localization approach is proposed; for pathological feature extraction, a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed; and finally, support vector machines are constructed to recognize 2 typical diseases, alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is quite effective and promising for medical diagnosis and health surveillance for both hospital and public use.
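A rough sketch of the texture-analysis stage in this kind of pipeline: mean 2-D Gabor filter energies over a bank of frequencies and orientations feed a support vector machine. The images and labels are placeholders, and the iris localization and fractal-dimension steps are omitted.

```python
# Gabor-energy texture features feeding an SVM classifier.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(image, frequencies=(0.1, 0.2, 0.3)):
    feats = []
    for f in frequencies:
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(image, frequency=f, theta=theta)
            feats.append(np.sqrt(real**2 + imag**2).mean())  # mean filter energy
    return np.array(feats)

rng = np.random.default_rng(2)
images = rng.random((40, 64, 64))         # placeholder localized iris patches
labels = rng.integers(0, 2, size=40)      # e.g. disease present/absent
X = np.stack([gabor_features(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
```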
NASA Astrophysics Data System (ADS)
Borah, Utpal; Aashranth, B.; Samantaray, Dipti; Kumar, Santosh; Davinci, M. Arvinth; Albert, Shaju K.; Bhaduri, A. K.
2017-10-01
Work hardening, dynamic recovery and dynamic recrystallization (DRX) occurring during hot working of austenitic steel have been extensively studied. Various empirical models describe the nature and effects of these phenomena in a typical framework. However, the typical model is sometimes violated following atypical transitions in the deformation mechanisms of the material. To ascertain the nature of these atypical transitions, researchers have intentionally introduced discontinuities in the deformation process, such as interrupting the deformation as in multi-step rolling and abruptly changing the rate of deformation. In this work, we demonstrate that atypical transitions are possible even in conventional single-step, constant strain rate deformation of austenitic steel. Towards this aim, isothermal, constant true strain rate deformation of austenitic steel has been carried out in a temperature range of 1173-1473 K and a strain rate range of 0.01-100 s^-1. The microstructural response corresponding to each deformation condition is thoroughly investigated. The conventional power-law variation of deformation grain size (D) with peak stress (σp) during DRX is taken as the typical model and the experimental data are tested against it. It is shown that the σp-D relations exhibit an atypical two-slope linear behaviour rather than a continuous power-law relation. Similarly, the reduction in σp with temperature (T) is found to consist of two discrete linear segments. In practical terms, the two linear segments denote two distinct microstructural responses to deformation. As a consequence of this distinction, the typical model breaks down and is unable to completely relate microstructural evolution to flow behaviour. The present work highlights the microstructural mechanisms responsible for this atypical behaviour and suggests strategies to incorporate the two-slope behaviour in the DRX model.
Metallic superhydrophobic surfaces via thermal sensitization
NASA Astrophysics Data System (ADS)
Vahabi, Hamed; Wang, Wei; Popat, Ketul C.; Kwon, Gibum; Holland, Troy B.; Kota, Arun K.
2017-06-01
Superhydrophobic surfaces (i.e., surfaces extremely repellent to water) allow water droplets to bead up and easily roll off from the surface. While a few methods have been developed to fabricate metallic superhydrophobic surfaces, these methods typically involve expensive equipment, environmental hazards, or multi-step processes. In this work, we developed a universal, scalable, solvent-free, one-step methodology based on thermal sensitization to create appropriate surface texture and fabricate metallic superhydrophobic surfaces. To demonstrate the feasibility of our methodology and elucidate the underlying mechanism, we fabricated superhydrophobic surfaces using ferritic (430) and austenitic (316) stainless steels (representative alloys) with roll off angles as low as 4° and 7°, respectively. We envision that our approach will enable the fabrication of superhydrophobic metal alloys for a wide range of civilian and military applications.
Designing divertor targets for uniform power load
NASA Astrophysics Data System (ADS)
Dekeyser, W.; Reiter, D.; Baelmans, M.
2015-08-01
Divertor design for next-step fusion reactors relies heavily on 2D edge plasma modeling with codes such as B2-EIRENE. While these codes are typically used in a design-by-analysis approach, in previous work we have shown that divertor design can alternatively be posed as a mathematical optimization problem and solved very efficiently using adjoint methods adapted from computational aerodynamics. This approach has been applied successfully to divertor target shape design for a more uniform power load. In this paper, the concept is further extended to include all contributions to the target power load, with particular focus on radiation. In a simplified test problem, we show the potential benefits of fully including the radiation load in the design cycle as compared to only assessing this load in a post-processing step.
Liu, Chia-Nan; Chen, Rong-Her; Chen, Kuo-Sheng
2006-02-01
The understanding of long-term landfill settlement is important for landfill design and rehabilitation. However, suitable models that can consider both the mechanical and biodecomposition mechanisms in predicting long-term landfill settlement are generally not available. In this paper, a model based on unsaturated consolidation theory and incorporating the biodegradation process is introduced to simulate landfill settlement behaviour. The details of the problem formulation and the derivation of the solution for the formulated differential equation of gas pressure are presented. A step-by-step analytical procedure employing this approach for estimating settlement is proposed. The proposed model generally captures the typical features of short-term and long-term settlement behaviour, and yields results that are comparable with field measurements.
Mayer, Gerhard; Kulbe, Klaus D; Nidetzky, Bernd
2002-01-01
The production of xylitol from D-glucose occurs through a three-step process in which D-arabitol and D-xylulose are formed as the first and second intermediate product, respectively, and both are obtained via microbial bioconversion reactions. Catalytic hydrogenation of D-xylulose yields xylitol; however, it is contaminated with D-arabitol. The aim of this study was to increase the stereoselectivity of the D-xylulose reduction step by using enzymatic catalysis. Recombinant xylitol dehydrogenase from the yeast Galactocandida mastotermitis was employed to catalyze xylitol formation from D-xylulose in an NADH-dependent reaction, and coenzyme regeneration was achieved by means of formate dehydrogenase-catalyzed oxidation of formate into carbon dioxide. The xylitol yield from D-xylulose was close to 100%. Optimal productivity was found for initial coenzyme concentrations of between 0.5 and 0.75 mM. In the presence of 0.30 M (45 g/L) D-xylulose and 2000 U/L of both dehydrogenases, exhaustive substrate turnover was achieved typically in a 4-h reaction time. The enzymes were recovered after the reaction in yields of approx 90% by means of ultrafiltration and could be reused for up to six cycles of D-xylulose reduction. The advantages of incorporating the enzyme-catalyzed step in a process for producing xylitol from D-glucose are discussed, and strategies for downstream processing are proposed by which the observed coenzyme turnover number of approx 600 could be increased significantly.
Consequences of atomic layer etching on wafer scale uniformity in inductively coupled plasmas
NASA Astrophysics Data System (ADS)
Huard, Chad M.; Lanham, Steven J.; Kushner, Mark J.
2018-04-01
Atomic layer etching (ALE) typically divides the etching process into two self-limited reactions. One reaction passivates a single layer of material while the second preferentially removes the passivated layer. As such, under ideal conditions the wafer scale uniformity of ALE should be independent of the uniformity of the reactant fluxes onto the wafers, provided all surface reactions are saturated: the passivation and etch steps should each asymptotically saturate after a characteristic fluence of reactants has been delivered to each site. In this paper, results from a computational investigation are discussed regarding the uniformity of ALE of Si in Cl2-containing inductively coupled plasmas when the reactant fluxes are both non-uniform and non-ideal. In the parameter space investigated for inductively coupled plasmas, the local etch rate for continuous processing was proportional to the ion flux. When operated with saturated conditions (that is, both ALE steps are allowed to self-terminate), the ALE process is less sensitive to non-uniformities in the incoming ion flux than continuous etching. Operating ALE in a sub-saturation regime resulted in less uniform etching. It was also found that ALE processing with saturated steps requires a larger total ion fluence than continuous etching to achieve the same etch depth. This condition may result in increased resist erosion and/or damage to stopping layers when using ALE. While these results demonstrate that ALE provides increased etch depth uniformity, they do not show an improved critical dimension uniformity in all cases. These possible limitations of ALE processing, as well as the increased processing time, must be weighed against the benefits of atomic resolution and improved uniformity during process optimization.
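The self-limiting behaviour at the heart of this argument can be caricatured with a toy saturation model: if per-cycle surface coverage approaches unity exponentially with delivered fluence, the etch per cycle becomes insensitive to flux non-uniformity once both steps saturate. The functional form and numbers below are illustrative assumptions, not the paper's plasma model.

```python
# Toy self-limited step: coverage saturates with delivered fluence, so etch
# per cycle becomes flux-insensitive for sufficiently long step times.
import numpy as np

def coverage(flux, t_step, fluence_sat):
    """Fraction of sites reacted after one self-limited step (assumed form)."""
    return 1.0 - np.exp(-flux * t_step / fluence_sat)

flux = np.array([0.8, 1.0, 1.2])      # relative reactant/ion flux across wafer
for t_step in (0.5, 2.0, 8.0):        # step time in units of saturation time
    epc = coverage(flux, t_step, 1.0) # etch per cycle ~ coverage (monolayers)
    print(t_step, epc.max() - epc.min())  # spread shrinks as the step saturates
```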
One Step at a Time: Using Task Analyses to Teach Skills
ERIC Educational Resources Information Center
Snodgrass, Melinda R.; Meadan, Hedda; Ostrosky, Michaelene M.; Cheung, W. Catherine
2017-01-01
Task analyses are useful when teaching children how to complete tasks by breaking the tasks into small steps, particularly when children struggle to learn a skill during typical classroom instruction. We describe how to create a task analysis by identifying the steps a child needs to independently perform the task, how to assess what steps a child…
Orbital construction support equipment
NASA Technical Reports Server (NTRS)
1977-01-01
Approximately 200 separate construction steps were defined for the three solar power satellite (SPS) concepts. Detailed construction scenarios were developed which describe the specific tasks to be accomplished, and identify general equipment requirements. The scenarios were used to perform a functional analysis, which resulted in the definition of 100 distinct SPS elements. These elements are the components, parts, subsystems, or assemblies upon which construction activities take place. The major SPS elements for each configuration are shown. For those elements, 300 functional requirements were identified in seven generic processes. Cumulatively, these processes encompass all functions required during SPS construction/assembly. Individually each process is defined such that it includes a specific type of activity. Each SPS element may involve activities relating to any or all of the generic processes. The processes are listed, and examples of the requirements defined for a typical element are given.
High resolution hybrid optical and acoustic sea floor maps (Invited)
NASA Astrophysics Data System (ADS)
Roman, C.; Inglis, G.
2013-12-01
This abstract presents a method for creating hybrid optical and acoustic sea floor reconstructions at centimeter-scale grid resolutions with robotic vehicles. Multibeam sonar and stereo vision are two common sensing modalities with complementary strengths that are well suited for data fusion. We have recently developed an automated two-stage pipeline to create such maps; the stages can be broken down as navigation refinement and map construction. During navigation refinement, a graph-based optimization algorithm is used to align 3D point clouds created with both the multibeam sonar and the stereo cameras. The process combats the typical growth in navigation error that has a detrimental effect on map fidelity and typically introduces artifacts at small grid sizes. During this process we are able to automatically register local point clouds created by each sensor to themselves and to each other where they overlap in a survey pattern. The process also estimates the sensor offsets, such as heading, pitch and roll, that describe how each sensor is mounted to the vehicle. The end result of the navigation step is a refined vehicle trajectory that ensures the point clouds from each sensor are consistently aligned, together with the individual sensor offsets. In the mapping step, grid cells in the map are selectively populated by choosing data points from each sensor in an automated manner. The selection process is designed to pick points that preserve the best characteristics of each sensor and honor specific map quality criteria to reduce outliers and ghosting. In general, the algorithm selects dense 3D stereo points in areas of high texture and point density. In areas where the stereo vision is poor, such as in a scene with low contrast or texture, multibeam sonar points are inserted into the map. This process is automated and results in a hybrid map populated with data from both sensors. Additional cross-modality checks are made to reject outliers in a robust manner. The final hybrid map retains the strengths of both sensors and shows improvement over the single-modality maps and over a naively assembled multi-modal map in which all the data points are included and averaged. Results will be presented from marine geological and archaeological applications using a 1350 kHz BlueView multibeam sonar and 1.3 megapixel digital still cameras.
High Yield Chemical Vapor Deposition Growth of High Quality Large-Area AB Stacked Bilayer Graphene
Liu, Lixin; Zhou, Hailong; Cheng, Rui; Yu, Woo Jong; Liu, Yuan; Chen, Yu; Shaw, Jonathan; Zhong, Xing; Huang, Yu; Duan, Xiangfeng
2012-01-01
Bernal stacked (AB stacked) bilayer graphene is of significant interest for functional electronic and photonic devices due to the feasibility to continuously tune its band gap with a vertical electrical field. Mechanical exfoliation can be used to produce AB stacked bilayer graphene flakes but typically with the sizes limited to a few micrometers. Chemical vapor deposition (CVD) has been recently explored for the synthesis of bilayer graphene but usually with limited coverage and a mixture of AB and randomly stacked structures. Herein we report a rational approach to produce large-area high quality AB stacked bilayer graphene. We show that the self-limiting effect of graphene growth on Cu foil can be broken by using a high H2/CH4 ratio in a low pressure CVD process to enable the continued growth of bilayer graphene. A high temperature and low pressure nucleation step is found to be critical for the formation of bilayer graphene nuclei with high AB stacking ratio. A rational design of a two-step CVD process is developed for the growth of bilayer graphene with high AB stacking ratio (up to 90 %) and high coverage (up to 99 %). The electrical transport studies demonstrated that devices made of the as-grown bilayer graphene exhibit typical characteristics of AB stacked bilayer graphene with the highest carrier mobility exceeding 4,000 cm2/V·s at room temperature, comparable to that of the exfoliated bilayer graphene. PMID:22906199
Cain, Jeffrey D; Shi, Fengyuan; Wu, Jinsong; Dravid, Vinayak P
2016-05-24
Due to their unique optoelectronic properties and potential for next generation devices, monolayer transition metal dichalcogenides (TMDs) have attracted a great deal of interest since the first observation of monolayer MoS2 a few years ago. While initially isolated in monolayer form by mechanical exfoliation, the field has evolved to more sophisticated methods capable of direct growth of large-area monolayer TMDs. Chemical vapor deposition (CVD) is the technique used most prominently throughout the literature and is based on the sulfurization of transition metal oxide precursors. CVD-grown monolayers exhibit excellent quality, and this process is widely used in studies ranging from the fundamental to the applied. However, little is known about the specifics of the nucleation and growth mechanisms occurring during the CVD process. In this study, we have investigated the nucleation centers or "seeds" from which monolayer TMDs typically grow. This was accomplished using aberration-corrected scanning transmission electron microscopy to analyze the structure and composition of the nuclei present in CVD-grown MoS2-MoSe2 alloys. We find that monolayer growth proceeds from nominally oxi-chalcogenide nanoparticles which act as heterogeneous nucleation sites for monolayer growth. The oxi-chalcogenide nanoparticles are typically encased in a fullerene-like shell made of the TMD. Using this information, we propose a step-by-step nucleation and growth mechanism for monolayer TMDs. Understanding this mechanism may pave the way for precise control over the synthesis of 2D materials, heterostructures, and related complexes.
MolIDE: a homology modeling framework you can click with.
Canutescu, Adrian A; Dunbrack, Roland L
2005-06-15
Molecular Integrated Development Environment (MolIDE) is an integrated application designed to provide homology modeling tools and protocols under a uniform, user-friendly graphical interface. Its main purpose is to combine the most frequent modeling steps in a semi-automatic, interactive way, guiding the user from the target protein sequence to the final three-dimensional protein structure. The typical basic homology modeling process is composed of building sequence profiles of the target sequence family, secondary structure prediction, sequence alignment with PDB structures, assisted alignment editing, side-chain prediction and loop building. All of these steps are available through a graphical user interface. MolIDE's user-friendly and streamlined interactive modeling protocol allows the user to focus on the important modeling questions, hiding from the user the raw data generation and conversion steps. MolIDE was designed from the ground up as an open-source, cross-platform, extensible framework. This allows developers to integrate additional third-party programs into MolIDE. Availability: http://dunbrack.fccc.edu/molide/molide.php. Contact: rl_dunbrack@fccc.edu.
NASA Astrophysics Data System (ADS)
Neher, Peter F.; Stieltjes, Bram; Reisert, Marco; Reicht, Ignaz; Meinzer, Hans-Peter; Fritzsche, Klaus H.
2012-02-01
Fiber tracking algorithms yield valuable information for neurosurgery as well as for automated diagnostic approaches. However, they have not yet arrived in daily clinical practice. In this paper we present an open source integration of the global tractography algorithm proposed by Reisert et al. [1] into the open source Medical Imaging Interaction Toolkit (MITK) developed and maintained by the Division of Medical and Biological Informatics at the German Cancer Research Center (DKFZ). The integration of this algorithm into a standardized and open development environment like MITK enriches the accessibility of tractography algorithms for the science community and is an important step towards bringing neuronal tractography closer to clinical application. The MITK diffusion imaging application, downloadable from www.mitk.org, combines all the steps necessary for a successful tractography: preprocessing, reconstruction of the images, the actual tracking, live monitoring of intermediate results, postprocessing and visualization of the final tracking results. This paper presents typical tracking results and demonstrates the steps for pre- and post-processing of the images.
Laser etching of polymer masked leadframes
NASA Astrophysics Data System (ADS)
Ho, C. K.; Man, H. C.; Yue, T. M.; Yuen, C. W.
1997-02-01
A typical electroplating production line for the deposition of silver patterns on copper leadframes in the semiconductor industry involves twenty to twenty-five steps of cleaning, pickling, plating, stripping, etc. This complex production process occupies a large floor space and also has a number of problems, such as difficulty in the production of rubber masks and their alignment, generation of toxic fumes, high cost of water consumption and, at times, uncertainty about the cleanliness of the surfaces to be plated. A novel laser patterning process is proposed in this paper which can replace many steps in the existing electroplating line. The proposed process involves the application of high speed laser etching techniques on leadframes protected with a polymer coating. The desired pattern for silver electroplating is produced by laser ablation of the polymer coating. An excimer laser was found to be most effective for this process, as it can expose a pattern of clean copper substrate which can be silver plated successfully. Previous work on Nd:YAG laser ablation showed that 1.06 μm radiation was not suitable for this etching process because a thin, transparent organic film remained on the laser-etched region. The effect of excimer pulse frequency and energy density upon the removal rate of the polymer coating was studied.
Cinematic modeling of local morphostructures evolution
NASA Astrophysics Data System (ADS)
Bronguleev, Vadim
2013-04-01
With the use of a simple 3-dimensional cinematic model of slope development, some characteristic features of morphostructure evolution are shown. We assume that the velocity of slope degradation along the normal vector to the surface is determined by three morphological parameters: the slope angle, its profile curvature and its plan curvature. This leads to an equation of parabolic type,

∂h/∂t = −A·|∇h| + B·Kpr + C·Kpl + f(x, y, t),

where h = h(x,y,t) is the altitude of the slope surface, Kpr(x,y,t) is the profile curvature of the slope, Kpl(x,y,t) is the plan curvature, f(x,y,t) is the velocity of tectonic deformation (or base level movement), and A, B, and C are coefficients which may depend on coordinates and time. The first term on the right-hand side describes parallel slope retreat, typical of arid environments; the second term describes vertical slope grading due to viscous flow, typical of humid conditions; and the third term is responsible for plan grading of the slope due to such processes as desquamation, frost weathering, etc. This simple model describes a wide range of local morphostructure evolution: stepped slopes and piedmont benchlands, lithogenic forms (terraces and passages), flattened summits and rounded hills. Using different types of the function f (block rise, swell, tilt), we obtained interesting transformations of the initial tectonic landforms during the concurrent action of denudation processes. The result of such action differs from that of the successive action of tectonic movements and denudation. The ratio of the rates of the endogenous and exogenous processes strongly affects the formation of local morphostructures. Preservation of initial slope features such as steps or bends, as well as their formation due to tectonics or lithology, is possible if the coefficients B and C are small in comparison to A.
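A toy explicit finite-difference integration of a simplified variant of the equation above is sketched below, with the profile and plan curvature terms lumped into a single Laplacian diffusion term for brevity; coefficients, grid, and initial uplift are illustrative placeholders.

```python
# Toy slope-evolution step: parallel retreat (|grad h|) plus lumped diffusion.
import numpy as np

A, B, dt, dx = 0.1, 0.5, 0.01, 1.0
h = np.zeros((50, 50))
h[20:30, 20:30] = 10.0                  # initial tectonic block uplift

for _ in range(500):
    gy, gx = np.gradient(h, dx)
    slope = np.hypot(gx, gy)            # |grad h| -> parallel retreat term
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4 * h) / dx**2
    f = 0.0                             # no ongoing tectonic movement
    h += dt * (-A * slope + B * lap + f)
```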
Peterson, Candida C; Wellman, Henry M; Slaughter, Virginia
2012-01-01
Children aged 3-12 years (n = 184) with typical development, deafness, autism, or Asperger syndrome took a series of theory-of-mind (ToM) tasks to confirm and extend previous developmental scaling evidence. A new sarcasm task, in the format of H. M. Wellman and D. Liu's (2004) 5-step ToM Scale, added a statistically reliable 6th step to the scale for all diagnostic groups. A key previous finding, divergence in task sequencing for children with autism, was confirmed. Comparisons among diagnostic groups, controlling for age and language ability, showed that typical developers mastered the 6 ToM steps ahead of each of the 3 disabled groups, with implications for ToM theories. The final (sarcasm) task challenged even nondisabled 9-year-olds, demonstrating the new scale's sensitivity to post-preschool ToM growth. © 2012 The Authors. Child Development © 2012 Society for Research in Child Development, Inc.
NASA Astrophysics Data System (ADS)
Kandel, D. D.; Western, A. W.; Grayson, R. B.
2004-12-01
Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
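The core of the cdf idea can be illustrated with a small numerical sketch: representing within-day rainfall intensity by a distribution (here an assumed exponential) and passing it through a nonlinear infiltration-excess function yields runoff that a daily-mean model misses entirely.

```python
# Sub-daily intensity distribution vs. daily mean in an infiltration-excess model.
import numpy as np

def daily_runoff(mean_intensity, ksat, n=10000):
    """Runoff from an exponential intensity distribution vs. the daily mean."""
    rng = np.random.default_rng(3)
    intensities = rng.exponential(mean_intensity, size=n)   # sub-daily variability
    runoff_cdf = np.maximum(intensities - ksat, 0.0).mean() # excess over infiltration
    runoff_mean = max(mean_intensity - ksat, 0.0)           # daily-average model
    return runoff_cdf, runoff_mean

# Mean intensity below Ksat: the daily-mean model predicts zero runoff, while
# the distribution-based model captures runoff from short high-intensity bursts.
print(daily_runoff(mean_intensity=2.0, ksat=5.0))
```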
Palermo, Romina; O’Connor, Kirsty B.; Davis, Joshua M.; Irons, Jessica; McKone, Elinor
2013-01-01
Although good tests are available for diagnosing clinical impairments in face expression processing, there is a lack of strong tests for assessing “individual differences” – that is, differences in ability between individuals within the typical, nonclinical, range. Here, we develop two new tests, one for expression perception (an odd-man-out matching task in which participants select which one of three faces displays a different expression) and one additionally requiring explicit identification of the emotion (a labelling task in which participants select one of six verbal labels). We demonstrate validity (careful check of individual items, large inversion effects, independence from nonverbal IQ, convergent validity with a previous labelling task), reliability (Cronbach’s alphas of .77 and .76, respectively), and wide individual differences across the typical population. We then demonstrate the usefulness of the tests by addressing theoretical questions regarding the structure of face processing, specifically the extent to which the following processes are common or distinct: (a) perceptual matching and explicit labelling of expression (modest correlation between matching and labelling supported partial independence); (b) judgement of expressions from faces and voices (results argued labelling tasks tap into a multi-modal system, while matching tasks tap distinct perceptual processes); and (c) expression and identity processing (results argued for a common first step of perceptual processing for expression and identity). PMID:23840821
Engel, E; Nicklaus, S; Septier, C; Salles, C; Le Quéré, J L
2001-06-01
The objective of this study was to characterize the effect of ripening on the taste of a typically bitter Camembert cheese. The first step was to select a typically bitter cheese among several products obtained by different processes supposed to enhance this taste defect. Second, the evolution of cheese taste during ripening was characterized from a sensory point of view. Finally, the relative impact of fat, proteins, and water-soluble molecules on cheese taste was determined by using omission tests performed on a reconstituted cheese. These omission tests showed that cheese taste resulted mainly from the gustatory properties of water-soluble molecules but was modulated by a matrix effect due to fat, proteins, and cheese structure. The evolution of this matrix effect during ripening was discussed for each taste characteristic.
Program Risk Planning with Risk as a Resource
NASA Technical Reports Server (NTRS)
Ray, Paul S.
1998-01-01
The current focus of NASA on cost-effective ways of achieving mission objectives has created a demand for a change in the risk management process of a program. At present, there are no guidelines as to when risk taking is justified, given the high cost of a marginal improvement in risk. As a remedial step, Dr. Greenfield of NASA developed a concept of risk management with risk as a resource. In this report, the following topics are addressed: (1) the risk management approach; (2) planning risk and the program life cycle; (3) key components of a typical program; (4) the risk trading methodology; (5) the review and decision process; (6) merits of the proposed risk planning approach; and (7) recommendations.
Making High-Pass Filters For Submillimeter Waves
NASA Technical Reports Server (NTRS)
Siegel, Peter H.; Lichtenberger, John A.
1991-01-01
Micromachining-and-electroforming process makes rigid metal meshes with cells ranging in size from 0.002 in. to 0.05 in. square. Series of steps involving cutting, grinding, vapor deposition, and electroforming creates self-supporting, electrically thick mesh. Width of holes typically 1.2 times cutoff wavelength of dominant waveguide mode in hole. To obtain sharp frequency-cutoff characteristic, thickness of mesh made greater than one-half of guide wavelength of mode in hole. Meshes used as high-pass filters (dichroic plates) for submillimeter electromagnetic waves. Process not limited to square silicon wafers. Round wafers also used, with slightly more complication in grinding periphery. Grid in any pattern produced in electroforming mandrel. Any platable metal or alloy used for mesh.
Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen
2002-12-10
Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations are also presented.
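A minimal sketch of the region-of-interest preprocessing idea follows, with Richardson-Lucy deconvolution standing in for the superresolution iterations; restricting the iterations to a bright ROI reduces the number of pixels passing through the expensive step. The threshold and PSF are illustrative assumptions, not the paper's specific algorithms.

```python
# ROI extraction before iterative restoration: iterate only on the target region.
import numpy as np
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(4)
image = rng.random((512, 512)) * 0.1
image[200:240, 300:360] += 1.0                      # bright target region
psf = np.ones((5, 5)) / 25.0                        # assumed blur kernel

ys, xs = np.where(image > 0.5)                      # region-of-interest extraction
y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
roi = image[y0:y1, x0:x1]
restored_roi = richardson_lucy(roi, psf, num_iter=10)  # iterate on ROI only
```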
Assay Development Process | Office of Cancer Clinical Proteomics Research
Typical steps involved in the development of a mass spectrometry-based targeted assay include: (1) selection of surrogate or signature peptides corresponding to the targeted protein or modification of interest; (2) iterative optimization of instrument and method parameters for optimal detection of the selected peptide; (3) method development for protein extraction from biological matrices such as tissue, whole cell lysates, or blood plasma/serum and proteolytic digestion of proteins (usually with trypsin); (4) evaluation of the assay in the intended biological matrix to determine if e
Liu, Richard Y; Bae, Minwoo; Buchwald, Stephen L
2018-02-07
Metal-catalyzed silylative dehydration of primary amides is an economical approach to the synthesis of nitriles. We report a copper hydride (CuH)-catalyzed process that avoids a typically challenging 1,2-siloxane elimination step, thereby dramatically increasing the rate of the overall transformation relative to alternative metal-catalyzed systems. This new reaction proceeds at ambient temperature, tolerates a variety of metal-, acid-, or base-sensitive functional groups, and can be performed using a simple ligand, inexpensive siloxanes, and low catalyst loading.
NASA Astrophysics Data System (ADS)
Jenuwine, Natalia M.; Mahesh, Sunny N.; Furst, Jacob D.; Raicu, Daniela S.
2018-02-01
Early detection of lung nodules from CT scans is key to improving lung cancer treatment, but poses a significant challenge for radiologists due to the high throughput required of them. Computer-Aided Detection (CADe) systems aim to automatically detect these nodules with computer algorithms, thus improving diagnosis. These systems typically use a candidate selection step, which identifies all objects that resemble nodules, followed by a machine learning classifier which separates true nodules from false positives. We create a CADe system that uses a 3D convolutional neural network (CNN) to detect nodules in CT scans without a candidate selection step. Using data from the LIDC database, we train a 3D CNN to analyze subvolumes from anywhere within a CT scan and output the probability that each subvolume contains a nodule. Once trained, we apply our CNN to detect nodules from entire scans, by systematically dividing the scan into overlapping subvolumes which we input into the CNN to obtain the corresponding probabilities. By enabling our network to process an entire scan, we expect to streamline the detection process while maintaining its effectiveness. Our results imply that with continued training using an iterative training scheme, the one-step approach has the potential to be highly effective.
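A minimal sketch of the scan-wide inference scheme described above: a small 3-D convolutional network scores overlapping subvolumes of a CT volume for nodule probability. The architecture, subvolume size, and stride are illustrative placeholders rather than the trained network of the paper.

```python
# Sliding-subvolume 3-D CNN inference over a full CT scan.
import torch
import torch.nn as nn

class TinyNoduleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):                      # x: (N, 1, D, H, W)
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

net = TinyNoduleNet().eval()
scan = torch.rand(1, 1, 64, 128, 128)          # placeholder CT volume
size, stride = 32, 16                          # overlapping subvolumes
with torch.no_grad():
    for z in range(0, 64 - size + 1, stride):
        for y in range(0, 128 - size + 1, stride):
            for x in range(0, 128 - size + 1, stride):
                sub = scan[:, :, z:z+size, y:y+size, x:x+size]
                p = net(sub).item()            # nodule probability, this subvolume
```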
Melendez, Johan H.; Santaus, Tonya M.; Brinsley, Gregory; Kiang, Daniel; Mali, Buddha; Hardick, Justin; Gaydos, Charlotte A.; Geddes, Chris D.
2016-01-01
Nucleic acid-based detection of gonorrhea infections typically requires a two-step process involving isolation of the nucleic acid, followed by detection of the genomic target, often involving PCR-based approaches. In an effort to improve on current detection approaches, we have developed a unique two-step microwave-accelerated approach for rapid extraction and detection of Neisseria gonorrhoeae (GC) DNA. Our approach is based on the use of highly focused microwave radiation to rapidly lyse bacterial cells, release, and subsequently fragment microbial DNA. The DNA target is then detected by a process known as microwave-accelerated metal-enhanced fluorescence (MAMEF), an ultra-sensitive direct DNA detection analytical technique. In the present study, we show that highly focused microwaves at 2.45 GHz, using 12.3 mm gold film equilateral triangles, are able to rapidly lyse bacterial cells and fragment DNA in a time- and microwave power-dependent manner. Detection of the extracted DNA can be performed by MAMEF, without the need for DNA amplification, in less than 10 minutes total time, or by other PCR-based approaches. Collectively, the use of a microwave-accelerated method for the release and detection of DNA represents a significant step forward towards the development of a point-of-care (POC) platform for the detection of gonorrhea infections. PMID:27325503
Pinto, Nuno D S; Uplekar, Shaunak D; Moreira, Antonio R; Rao, Govind; Frey, Douglas D
2017-01-01
Purification processes for monoclonal Immunoglobulin G (IgG) typically employ protein A chromatography as a capture step to remove most of the impurities. One major concern of post-protein A chromatography processes is the co-elution of some of the host cell proteins (HCPs) with IgG in the capture step. In this work, a novel method for IgG elution in protein A chromatography that reduces the co-elution of HCPs is presented, in which a two-step pH gradient is self-formed inside a protein A chromatography column. The complexities involved in using an internally produced pH gradient in a protein A chromatography column employing adsorbed buffering species are discussed through equation-based modeling. Under the conditions employed, ELISA assays show a 60% reduction in the HCPs co-eluting with the IgG fraction when using this method as compared to conventional protein A elution, without affecting the IgG yield. Evidence is also obtained which indicates that the amount of leached protein A present in free solution in the purified product is reduced by the new method. Biotechnol. Bioeng. 2017;114:154-162. © 2016 Wiley Periodicals, Inc.
Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physico-chemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only correctly fits the experimental spectra, but whose elements have a mechanistic physical meaning. In order to obtain the aforementioned electric equivalent circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A 2-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
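Fitting a candidate equivalent circuit to an EIS spectrum reduces to nonlinear least squares on the complex impedance. The sketch below fits a generic Randles-type circuit (R_s in series with a parallel R_ct-C_dl branch) to a synthetic spectrum; it is an illustrative stand-in, not one of the 12 mechanistic candidates of this work.

```python
# Equivalent-circuit fit: stack real and imaginary residuals for least squares.
import numpy as np
from scipy.optimize import least_squares

def z_model(params, omega):
    rs, rct, cdl = params
    return rs + rct / (1 + 1j * omega * rct * cdl)

def residuals(params, omega, z_meas):
    z = z_model(params, omega)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

omega = np.logspace(-1, 4, 60)
z_true = z_model([0.02, 0.15, 0.5], omega)          # synthetic "measured" spectrum
fit = least_squares(residuals, x0=[0.01, 0.1, 1.0],
                    args=(omega, z_true), bounds=(0, np.inf))
print(fit.x)   # recovered R_s, R_ct, C_dl
```

Selecting among candidate circuits would then compare such fits on determination coefficient and parameter uncertainty, as the two-step process above describes.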
Aerodynamic Impact of an Aft-Facing Slat-Step on High Re Airfoils
NASA Astrophysics Data System (ADS)
Kibble, Geoffrey; Petrin, Chris; Jacob, Jamey; Elbing, Brian; Ireland, Peter; Black, Buddy
2016-11-01
Typically, the initial aerodynamic design and subsequent testing and simulation of an aircraft wing assume an ideal wing surface without imperfections. In reality, however, the surface of an in-service aircraft wing rarely matches the surface characteristics of the test wings used during the conceptual design phase and certification process. This disconnect is usually deemed negligible or overlooked entirely. Specifically, many aircraft incorporate a leading edge slat; however, the mating between the slat and the top surface of the wing is not perfectly flush and creates a small aft-facing step behind the slat. In some cases, the slat can create a step as large as one millimeter tall, which is entirely submerged within the boundary layer. This abrupt change in geometry creates a span-wise vortex behind the step and, in transonic flow, causes a shock to form near the leading edge. This study investigates, both experimentally and computationally, the implications of an aft-facing slat-step on an aircraft wing compared to the ideal wing surface for subsonic and transonic flow conditions. The results of this study are useful for the design of flow control modifications for aircraft currently in service and important for improving the next generation of aircraft wings.
Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...
2016-08-09
Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.
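For context, the explicit-scheme constraint that the proposed solver removes can be stated in two lines: a standard explicit wave solver requires the time step to satisfy dt ≤ dx/c in 1-D, whereas an unconditionally stable solver permits a step chosen by the physics alone. The numbers below are illustrative placeholders.

```python
# The CFL bound an explicit wave solver imposes, vs. a physics-chosen step.
c = 3.0e8                # wave speed (m/s)
dx = 1.0e-3              # spatial step (m)
dt_cfl = dx / c          # explicit 1-D stability bound: dt <= dx / c
dt_wanted = 50 * dt_cfl  # a step resolving the plasma physics of interest
print(f"CFL-limited dt = {dt_cfl:.2e} s; desired dt = {dt_wanted:.2e} s "
      f"({dt_wanted / dt_cfl:.0f}x over the explicit limit)")
```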
Willinger, Ulrike; Deckert, Matthias; Schmöger, Michaela; Schaunig-Busch, Ines; Formann, Anton K; Auff, Eduard
2017-12-01
Metaphor is a specific type of figurative language that is used in various important fields such as in the work with children in clinical or teaching contexts. The aim of the study was to investigate the developmental course, developmental steps, and possible cognitive predictors regarding metaphor processing in childhood and early adolescence. One hundred sixty-four typically developing children (7-year-olds, 9-year-olds) and early adolescents (11-year-olds) were tested for metaphor identification, comprehension, comprehension quality, and preference by the Metaphoric Triads Task as well as for analogical reasoning, information processing speed, cognitive flexibility under time pressure, and cognitive flexibility without time pressure. Metaphor identification and comprehension consecutively increased with age. Eleven-year-olds showed significantly higher metaphor comprehension quality and preference scores than seven- and nine-year-olds, whilst these younger age groups did not differ. Age, cognitive flexibility under time pressure, information processing speed, analogical reasoning, and cognitive flexibility without time pressure significantly predicted metaphor comprehension. Metaphorical language ability shows an ongoing development and seemingly changes qualitatively at the beginning of early adolescence. These results can possibly be explained by a greater synaptic reorganization in early adolescents. Furthermore, cognitive flexibility under time pressure and information processing speed possibly facilitate the ability to adapt metaphor processing strategies in a flexible, quick, and appropriate way.
Machine learning for fab automated diagnostics
NASA Astrophysics Data System (ADS)
Giollo, Manuel; Lam, Auguste; Gkorou, Dimitra; Liu, Xing Lan; van Haren, Richard
2017-06-01
Process optimization depends largely on field engineers' knowledge and expertise. However, this practice is becoming less sustainable as fab complexity continuously increases to support the extreme miniaturization of Integrated Circuits. On the one hand, process optimization and root-cause analysis of tools is necessary for smooth fab operation. On the other hand, the growth in the number of wafer processing steps adds a considerable new source of noise that can have a significant impact at the nanometer scale. This paper explores the ability of historical process data and Machine Learning to support field engineers in production analysis and monitoring. We implement an automated workflow to analyze a large volume of information and build a predictive model of overlay variation. The proposed workflow addresses significant problems that are typical of fab production, such as missing measurements, small numbers of samples, confounding effects due to heterogeneity of data, and subpopulation effects. We evaluate the proposed workflow on a real use case and show that it is able to predict overlay excursions observed in Integrated Circuits manufacturing. The chosen design focuses on linear and interpretable models of the wafer history, which highlight the process steps that are causing defective products. This is a fundamental feature for diagnostics, as it supports process engineers in the continuous improvement of the production line.
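As a hedged sketch of what a "linear and interpretable model of the wafer history" can look like, and not the paper's actual pipeline, the Python snippet below fits a sparse (Lasso) regression from per-step tool-usage indicators to an overlay metric and reads the surviving coefficients as suspect process steps. The feature encoding and data are invented.

    import numpy as np
    from sklearn.linear_model import LassoCV

    # Hypothetical wafer-history matrix: one row per wafer, one column per
    # (process step, tool) indicator, e.g. "litho_scanner_A", "etch_chamber_3".
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(400, 50)).astype(float)   # tool-usage indicators
    true_w = np.zeros(50); true_w[[3, 17]] = [1.5, -0.8]   # two "culprit" steps
    y = X @ true_w + rng.normal(0, 0.2, 400)               # overlay metric (nm)

    model = LassoCV(cv=5).fit(X, y)
    # Sparse, signed coefficients point to the process steps most associated
    # with overlay excursions -- the interpretability property the paper exploits.
    culprits = np.nonzero(np.abs(model.coef_) > 0.1)[0]
    print("suspect step/tool columns:", culprits)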
Inflammasome Priming in Sterile Inflammatory Disease.
Patel, Meghana N; Carroll, Richard G; Galván-Peña, Silvia; Mills, Evanna L; Olden, Robin; Triantafilou, Martha; Wolf, Amaya I; Bryant, Clare E; Triantafilou, Kathy; Masters, Seth L
2017-02-01
The inflammasome is a cytoplasmic protein complex that processes interleukins (IL)-1β and IL-18, and drives a form of cell death known as pyroptosis. Oligomerization of this complex is actually the second step of activation, and a priming step must occur first. This involves transcriptional upregulation of pro-IL-1β, inflammasome sensor NLRP3, or the non-canonical inflammasome sensor caspase-11. An additional aspect of priming is the post-translational modification of particular inflammasome constituents. Priming is typically accomplished in vitro using a microbial Toll-like receptor (TLR) ligand. However, it is now clear that inflammasomes are activated during the progression of sterile inflammatory diseases such as atherosclerosis, metabolic disease, and neuroinflammatory disorders. Therefore, it is time to consider the endogenous factors and mechanisms that may prime the inflammasome in these conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
A two-step approach for mining patient treatment pathways in administrative healthcare databases.
Najjar, Ahmed; Reinharz, Daniel; Girouard, Catherine; Gagné, Christian
2018-05-01
Clustering electronic medical records allows the discovery of information on healthcare practices. Entries in such medical records are usually composed of a succession of diagnostic or therapeutic steps. The corresponding processes are complex and heterogeneous, since they depend on medical knowledge integrating clinical guidelines, the physician's individual experience, and patient data and conditions. To analyze such data, we first propose to cluster medical visits, consultations, and hospital stays into homogeneous groups, and then to construct higher-level patient treatment pathways over these groups. These pathways are in turn clustered to distill typical pathways, enabling interpretation of the clusters by experts. This approach is evaluated on a real-world administrative database of elderly people in Québec suffering from heart failure. Copyright © 2018 Elsevier B.V. All rights reserved.
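A minimal sketch of the two-step idea follows, with invented feature encodings and a deliberately crude sequence representation (the paper itself uses richer sequence comparisons):

    import numpy as np
    from sklearn.cluster import KMeans

    # Step 1: cluster individual medical events (visits, consultations, stays)
    # into homogeneous groups. The feature encoding is hypothetical.
    events = np.random.rand(5000, 8)            # e.g. cost, duration, acts, diagnoses
    event_labels = KMeans(n_clusters=10, n_init=10).fit_predict(events)

    # Step 2: re-express each patient as a sequence of event-cluster labels and
    # cluster those sequences into typical treatment pathways. A simple proxy
    # for sequence dissimilarity is a histogram of event-cluster labels.
    patients = np.split(event_labels, 500)      # toy split: 10 events per patient
    histograms = np.array([np.bincount(p, minlength=10) for p in patients])
    pathway_labels = KMeans(n_clusters=5, n_init=10).fit_predict(histograms)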
Pipolo, Marco; Martins, Rui C; Quinta-Ferreira, Rosa M; Costa, Raquel
2017-03-01
The discharge of poorly decontaminated winery wastewater remains a serious environmental problem in many regions, and the industry is welcoming improved treatment methods. Here, an innovative decontamination approach integrating Fenton's process with biofiltration by Asian clams is proposed. The potential of this approach was assessed at the pilot scale using real effluent and by taking an actual industrial treatment system as a benchmark. Fenton peroxidation was observed to remove 84% of the effluent's chemical oxygen demand (COD), reducing it to 205 mg L-1. Subsequent biofiltration decreased the effluent's COD to approximately zero, well below the legal discharge limit of 150 mg L-1, in just 3 d. The reduction of the effluent's organic load through Fenton's process did not decrease its toxicity, but the effluent was much less harmful after biofiltration. The performance of the proposed treatment exceeded that of the integrated Fenton's process-sequencing batch reactor design implemented in winery practice, where a residence time of around 10 d in the biological step typically results in 80 to 90% COD removal. The method proposed is effective, compatible with typical winery budgets, and potentially contributes to the management of a nuisance species. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Quantum state conversion in opto-electro-mechanical systems via shortcut to adiabaticity
NASA Astrophysics Data System (ADS)
Zhou, Xiao; Liu, Bao-Jie; Shao, L.-B.; Zhang, Xin-Ding; Xue, Zheng-Yuan
2017-09-01
Adiabatic processes have found many important applications in modern physics, their distinct merit being that accurate control over process timing is not required. However, such processes are slow, which limits their application in quantum computation due to the limited coherence times of typical quantum systems. Here, we propose a scheme to implement quantum state conversion in opto-electro-mechanical systems via a shortcut to adiabaticity, where the process can be greatly sped up while precise timing control is still not necessary. In our scheme, by modifying only the coupling strength, we can achieve fast quantum state conversion with high fidelity, without needing to satisfy the adiabatic condition. In addition, the population of the unwanted intermediate state can be further suppressed. Therefore, our protocol presents an important step towards practical state conversion between optical and microwave photons, and thus may find many important applications in hybrid quantum information processing.
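As a generic illustration of the shortcut-to-adiabaticity idea, and not the paper's opto-electro-mechanical scheme, the Python sketch below adds the standard two-level counterdiabatic term H_cd = (th_dot/2)*sigma_y to a fast Landau-Zener-type sweep; with the correction, the fast sweep reaches the adiabatic target with near-unit fidelity. All parameters are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    # H(t) = (dlt(t)*sz + om(t)*sx)/2 with mixing angle th = atan2(om, dlt).
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    T = 1.0                                    # far too fast for plain adiabaticity
    dlt  = lambda t: 10.0 * (2 * t / T - 1)    # detuning sweep, -10 -> +10
    ddlt = lambda t: 20.0 / T
    om   = lambda t: 2.0                       # constant coupling
    dom  = lambda t: 0.0

    def rhs(t, psi, with_cd):
        H = 0.5 * (dlt(t) * sz + om(t) * sx)
        if with_cd:                            # counterdiabatic term (th_dot/2)*sy
            th_dot = (dom(t) * dlt(t) - om(t) * ddlt(t)) / (dlt(t)**2 + om(t)**2)
            H = H + 0.5 * th_dot * sy
        return -1j * (H @ psi)

    def ground(t):                             # instantaneous ground state
        th = np.arctan2(om(t), dlt(t))
        return np.array([-np.sin(th / 2), np.cos(th / 2)], dtype=complex)

    for with_cd in (False, True):
        sol = solve_ivp(rhs, [0, T], ground(0), args=(with_cd,), rtol=1e-9)
        fid = abs(np.vdot(ground(T), sol.y[:, -1]))**2
        print(("with" if with_cd else "without"), "shortcut: fidelity =", round(fid, 4))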
Case study: Lockheed-Georgia Company integrated design process
NASA Technical Reports Server (NTRS)
Waldrop, C. T.
1980-01-01
A case study of the development of an Integrated Design Process is presented. The approach taken in preparing for the development of an integrated design process includes some of the IPAD approaches such as developing a Design Process Model, cataloging Technical Program Elements (TPE's), and examining data characteristics and interfaces between contiguous TPE's. The implementation plan is based on an incremental development of capabilities over a period of time with each step directed toward, and consistent with, the final architecture of a total integrated system. Because of time schedules and different computer hardware, this system will not be the same as the final IPAD release; however, many IPAD concepts will no doubt prove applicable as the best approach. Full advantage will be taken of the IPAD development experience. A scenario that could be typical for many companies, even outside the aerospace industry, in developing an integrated design process for an IPAD-type environment is represented.
Using supercritical fluids to refine hydrocarbons
Yarbro, Stephen Lee
2014-11-25
This is a method to reactively refine hydrocarbons, such as heavy oils with API gravities of less than 20° and bitumen-like hydrocarbons with viscosities greater than 1000 cP at standard temperature and pressure, using a selected fluid at supercritical conditions. The reaction portion of the method delivers lighter-weight, more volatile hydrocarbons to an attached contacting device that operates in mixed subcritical or supercritical modes. This separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques. The method produces valuable products with fewer processing steps and at lower cost, increases worker safety through reduced processing and handling, allows greater opportunity for new oil field development with subsequent positive economic impact, and reduces the carbon dioxide emissions and wastes typical of conventional refineries.
Evolution of Volatile Compounds during the Distillation of Cognac Spirit.
Awad, Pierre; Athès, Violaine; Decloux, Martine Esteban; Ferrari, Gérald; Snakkers, Guillaume; Raguenaud, Patrick; Giampaoli, Pierre
2017-09-06
Cognac wine spirit has a complex composition of volatile compounds, which contributes to its organoleptic profile. This work focused on the batch distillation process and, in particular, on volatile compounds specifically produced by chemical reactions during the distillation of Cognac wine spirit, traditionally conducted in two steps with charentais pot stills. The aim of this study was to characterize the volatile compounds formed during distillation. Sampling was performed on the distillates and inside the boiler during a typical Cognac distillation. The analysis of these samples allowed us to perform a mass balance and to identify several types of volatile compounds whose quantities strongly increased during the distillation process. These compounds were distinguished by their chemical family. The first distillation step was found to be decisive for the formation of volatile compounds. Moreover, 2 esters, 3 aldehydes, 12 norisoprenoids, and 3 terpenes were shown to be generated during the process. These results suggest that some volatile compounds found in Cognac spirit are formed during distillation by chemical reactions induced by high temperature. These findings give important indications to professional distillers seeking to enhance the product's quality.
Cervera-Padrell, Albert E; Skovby, Tommy; Kiil, Søren; Gani, Rafiqul; Gernaey, Krist V
2012-10-01
A systematic framework is proposed for the design of continuous pharmaceutical manufacturing processes. Specifically, the design framework focuses on organic chemistry based, active pharmaceutical ingredient (API) synthetic processes, but could potentially be extended to biocatalytic and fermentation-based products. The method exploits the synergistic combination of continuous flow technologies (e.g., microfluidic techniques) and process systems engineering (PSE) methods and tools for faster process design and increased process understanding throughout the whole drug product and process development cycle. The design framework structures the many different and challenging design problems (e.g., solvent selection, reactor design, and design of separation and purification operations), driving the user from the initial drug discovery steps--where process knowledge is very limited--toward the detailed design and analysis. Examples from the literature of PSE methods and tools applied to pharmaceutical process design and novel pharmaceutical production technologies are provided throughout the text, assisting in the accumulation and interpretation of process knowledge. Different criteria are suggested for the selection of batch and continuous processes so that the whole design results in low capital and operational costs as well as a low environmental footprint. The design framework has been applied to the retrofit of an existing batch-wise process used by H. Lundbeck A/S to produce an API: zuclopenthixol. Some of its batch operations were successfully converted into continuous mode, obtaining higher yields that allowed a significant simplification of the whole process. The material and environmental footprint of the process--evaluated through the process mass intensity index, that is, kg of material used per kg of product--was reduced to half of its initial value, with potential for further reduction. The case study includes reaction steps typically used by the pharmaceutical industry featuring different characteristic reaction times, as well as L-L separation and distillation-based solvent exchange steps, and thus constitutes a good example of how the design framework can be used to efficiently design novel or already existing API manufacturing processes by taking advantage of continuous processes. Copyright © 2012 Elsevier B.V. All rights reserved.
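The process mass intensity figure above has a one-line definition; a minimal sketch, with invented masses rather than Lundbeck's data:

    def process_mass_intensity(input_masses_kg, product_kg):
        """PMI = kg of all raw materials, reagents, and solvents per kg of product."""
        return sum(input_masses_kg) / product_kg

    # Illustrative only: halving material use at constant output halves PMI.
    batch = process_mass_intensity([120.0, 45.0, 30.0], 1.6)       # ~122 kg/kg
    continuous = process_mass_intensity([60.0, 22.5, 15.0], 1.6)   # ~61 kg/kg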
Method and apparatus for monitoring plasma processing operations
Smith, Jr., Michael Lane; Stevenson, Joel O'Don; Ward, Pamela Peardon Denise
2001-01-01
The invention generally relates to various aspects of a plasma process, and more specifically the monitoring of such plasma processes. One aspect relates in at least some manner to calibrating or initializing a plasma monitoring assembly. This type of calibration may be used to address wavelength shifts, intensity shifts, or both associated with optical emissions data obtained on a plasma process. A calibration light may be directed at a window through which optical emissions data is being obtained to determine the effect, if any, that the inner surface of the window is having on the optical emissions data being obtained therethrough, the operation of the optical emissions data gathering device, or both. Another aspect relates in at least some manner to various types of evaluations which may be undertaken of a plasma process which was run, and more typically one which is currently being run, within the processing chamber. Plasma health evaluations and process identification through optical emissions analysis are included in this aspect. Yet another aspect associated with the present invention relates in at least some manner to the endpoint of a plasma process (e.g., plasma recipe, plasma clean, conditioning wafer operation) or discrete/discernible portion thereof (e.g., a plasma step of a multiple step plasma recipe). A final aspect associated with the present invention relates to how one or more of the above-noted aspects may be implemented into a semiconductor fabrication facility, such as the distribution of wafers to a wafer production system.
Method and apparatus for monitoring plasma processing operations
Smith, Jr., Michael Lane; Stevenson, Joel O'Don; Ward, Pamela Peardon Denise
2000-01-01
The invention generally relates to various aspects of a plasma process, and more specifically the monitoring of such plasma processes. One aspect relates in at least some manner to calibrating or initializing a plasma monitoring assembly. This type of calibration may be used to address wavelength shifts, intensity shifts, or both associated with optical emissions data obtained on a plasma process. A calibration light may be directed at a window through which optical emissions data is being obtained to determine the effect, if any, that the inner surface of the window is having on the optical emissions data being obtained therethrough, the operation of the optical emissions data gathering device, or both. Another aspect relates in at least some manner to various types of evaluations which may be undertaken of a plasma process which was run, and more typically one which is currently being run, within the processing chamber. Plasma health evaluations and process identification through optical emissions analysis are included in this aspect. Yet another aspect associated with the present invention relates in at least some manner to the endpoint of a plasma process (e.g., plasma recipe, plasma clean, conditioning wafer operation) or discrete/discernible portion thereof (e.g., a plasma step of a multiple step plasma recipe). A final aspect associated with the present invention relates to how one or more of the above-noted aspects may be implemented into a semiconductor fabrication facility, such as the distribution of wafers to a wafer production system.
Method and apparatus for monitoring plasma processing operations
Smith, Jr., Michael Lane; Stevenson, Joel O'Don; Ward, Pamela Peardon Denise
2002-07-16
The invention generally relates to various aspects of a plasma process, and more specifically the monitoring of such plasma processes. One aspect relates in at least some manner to calibrating or initializing a plasma monitoring assembly. This type of calibration may be used to address wavelength shifts, intensity shifts, or both associated with optical emissions data obtained on a plasma process. A calibration light may be directed at a window through which optical emissions data is being obtained to determine the effect, if any, that the inner surface of the window is having on the optical emissions data being obtained therethrough, the operation of the optical emissions data gathering device, or both. Another aspect relates in at least some manner to various types of evaluations which may be undertaken of a plasma process which was run, and more typically one which is currently being run, within the processing chamber. Plasma health evaluations and process identification through optical emissions analysis are included in this aspect. Yet another aspect associated with the present invention relates in at least some manner to the endpoint of a plasma process (e.g., plasma recipe, plasma clean, conditioning wafer operation) or discrete/discernible portion thereof (e.g., a plasma step of a multiple step plasma recipe). A final aspect associated with the present invention relates to how one or more of the above-noted aspects may be implemented into a semiconductor fabrication facility, such as the distribution of wafers to a wafer production system.
The lignol approach to biorefining of woody biomass to produce ethanol and chemicals.
Arato, Claudio; Pye, E Kendall; Gjennestad, Gordon
2005-01-01
Processes that produce only ethanol from lignocellulosics display poor economics. This is generally overcome by constructing large facilities with satisfactory economies of scale, which makes financing onerous and hinders the development of suitable technologies. Lignol Innovations has developed a biorefining technology that employs an ethanol-based organosolv step to separate lignin, hemicellulose components, and extractives from the cellulosic fraction of woody biomass. The resultant cellulosic fraction is highly susceptible to enzymatic hydrolysis, generating very high yields of glucose (>90% in 12-24 h) with typical enzyme loadings of 10-20 FPU (filter paper units)/g. This glucose is readily converted to ethanol, or possibly other sugar-platform chemicals, by either sequential or simultaneous saccharification and fermentation. The liquor from the organosolv step is processed by well-established unit operations to recover lignin, furfural, xylose, acetic acid, and a lipophilic extractives fraction. The process ethanol is recovered and recycled back to the process. The resulting recycled process water is of very high quality, has a low BOD5, and is suitable for overall process closure. Significant benefits can be attained in greenhouse gas (GHG) emission reductions, as per the Kyoto Protocol. Revenues from the multiple products, particularly the lignin, ethanol, and xylose fractions, ensure excellent economics for the process even in plants as small as 100 mtpd (metric tonnes per day) of dry woody biomass input, a scale suitable for processing the wood residues produced by a single large sawmill.
How many steps/day are enough? For adults.
Tudor-Locke, Catrine; Craig, Cora L; Brown, Wendy J; Clemes, Stacy A; De Cocker, Katrien; Giles-Corti, Billie; Hatano, Yoshiro; Inoue, Shigeru; Matsudo, Sandra M; Mutrie, Nanette; Oppert, Jean-Michel; Rowe, David A; Schmidt, Michael D; Schofield, Grant M; Spence, John C; Teixeira, Pedro J; Tully, Mark A; Blair, Steven N
2011-07-28
Physical activity guidelines from around the world are typically expressed in terms of frequency, duration, and intensity parameters. Objective monitoring using pedometers and accelerometers offers a new opportunity to measure and communicate physical activity in terms of steps/day. Various step-based versions or translations of physical activity guidelines are emerging, reflecting public interest in such guidance. However, there appears to be a wide discrepancy in the exact values that are being communicated. It makes sense that step-based recommendations should be harmonious with existing evidence-based public health guidelines that recognize that "some physical activity is better than none" while maintaining a focus on time spent in moderate-to-vigorous physical activity (MVPA). Thus, the purpose of this review was to update our existing knowledge of "How many steps/day are enough?", and to inform step-based recommendations consistent with current physical activity guidelines. Normative data indicate that healthy adults typically take between 4,000 and 18,000 steps/day, and that 10,000 steps/day is reasonable for this population, although there are notable "low active populations." Interventions demonstrate incremental increases on the order of 2,000-2,500 steps/day. The results of seven different controlled studies demonstrate that there is a strong relationship between cadence and intensity. Further, despite some inter-individual variation, 100 steps/minute represents a reasonable floor value indicative of moderate intensity walking. Multiplying this cadence by 30 minutes (i.e., typical of a daily recommendation) produces a minimum of 3,000 steps that is best used as a heuristic (i.e., guiding) value, but these steps must be taken over and above habitual activity levels to be a true expression of free-living steps/day that also includes recommendations for minimal amounts of time in MVPA. Computed steps/day translations of time in MVPA that also include estimates of habitual activity levels equate to 7,100 to 11,000 steps/day. A direct estimate of minimal amounts of MVPA accumulated in the course of objectively monitored free-living behaviour is 7,000-8,000 steps/day. A scale that spans a wide range of incremental increases in steps/day and is congruent with public health recognition that "some physical activity is better than none," yet still incorporates step-based translations of recommended amounts of time in MVPA may be useful in research and practice. The full range of users (researchers to practitioners to the general public) of objective monitoring instruments that provide step-based outputs require good reference data and evidence-based recommendations to be able to design effective health messages congruent with public health physical activity guidelines, guide behaviour change, and ultimately measure, track, and interpret steps/day.
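The review's step-translation arithmetic is simple enough to verify directly; the sketch below (Python, with illustrative habitual-activity values) reproduces the 3,000-step MVPA heuristic and the roughly 7,000-11,000 steps/day totals quoted above:

    CADENCE_MODERATE = 100   # steps/min, floor value for moderate intensity
    MVPA_MINUTES = 30        # typical daily recommendation

    mvpa_steps = CADENCE_MODERATE * MVPA_MINUTES   # the 3,000-step heuristic
    for habitual in range(4000, 8001, 1000):       # assumed background activity
        print(f"{habitual} habitual + {mvpa_steps} MVPA = {habitual + mvpa_steps} steps/day")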
Image simulation for automatic license plate recognition
NASA Astrophysics Data System (ADS)
Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José
2012-01-01
Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
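The two simulation steps (plate synthesis, then distortion modeling) can be prototyped in a few lines of OpenCV. The sketch below is a hedged stand-in for the paper's framework: the plate rendering is a trivial placeholder, and all distortion parameters are invented rather than estimated from real plate images as the paper does.

    import cv2
    import numpy as np

    # Stand-in for a synthesized plate image (the paper renders true layouts).
    plate = np.full((120, 360, 3), 255, np.uint8)
    cv2.putText(plate, "ABC 1234", (20, 80), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 0), 5)

    h, w = plate.shape[:2]
    # Perspective distortion, as seen from a roadside camera (corners invented)
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[15, 10], [w - 5, 0], [w, h - 8], [5, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(plate, M, (w, h))

    # Optical blur and sensor noise, with parameters that would in practice be
    # estimated from measurements of real plate images
    blurred = cv2.GaussianBlur(warped, (5, 5), 1.2)
    noise = np.random.normal(0, 8, blurred.shape)
    simulated = np.clip(blurred.astype(float) + noise, 0, 255).astype(np.uint8)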
Overview of the production of sintered SiC optics and optical sub-assemblies
NASA Astrophysics Data System (ADS)
Williams, S.; Deny, P.
2005-08-01
The following is an overview of sintered silicon carbide (SSiC) material properties and the processing requirements for manufacturing components for advanced technology optical systems. The overview compares SSiC material properties to those of typical materials used for optics and optical structures. In addition, it reviews, step by step, the manufacturing processes required to produce optical components. The process overview illustrates the current manufacturing process and concepts for expanding the process size capability, and includes information on the substantial capital equipment employed in the manufacturing of SSiC. This paper also reviews common in-process inspection methodology and design rules. The design rules are used to improve production yield, minimize cost, and maximize the inherent benefits of SSiC for optical systems. Optimizing optical system designs for an SSiC manufacturing process will allow systems designers to utilize SSiC as a low-risk, cost-competitive, and fast-cycle-time technology for next-generation optical systems.
One-step growth of thin film SnS with large grains using MOCVD.
Clayton, Andrew J; Charbonneau, Cecile M E; Tsoi, Wing C; Siderfin, Peter J; Irvine, Stuart J C
2018-01-01
Thin film tin sulphide (SnS) films were produced with grain sizes greater than 1 μm using a one-step metal organic chemical vapour deposition process. Tin-doped indium oxide (ITO) was used as the substrate; it has a work function similar to that of molybdenum, which is typically used as the back contact, but its transparency offers the potential for bifacial illumination. Tetraethyltin and ditertiarybutylsulphide were used as precursors, with process temperatures of 430-470 °C to promote film growth with large grains. The film stoichiometry was controlled by varying the precursor partial pressure ratios and characterised with energy dispersive X-ray spectroscopy to optimise the SnS composition. X-ray diffraction and Raman spectroscopy were used to determine the phases present in the film and revealed that small amounts of ottemannite Sn2S3 were present when SnS was deposited onto the ITO using optimised growth parameters. Interaction at the SnS/ITO interface to form Sn2S3 was deduced to have occurred under all growth conditions.
Jiang, Shuzhen; Guo, Zhongning; Liu, Guixian; Gyimah, Glenn Kwabena; Li, Xiaoying; Dong, Hanshan
2017-10-25
Inspired by typical plants such as lotus leaves, superhydrophobic surfaces are commonly prepared by a combination of low-surface-energy materials and hierarchical micro/nano structures. In this work, superhydrophobic surfaces on copper substrates were prepared by a rapid, facile, one-step pulse electrodepositing process, with different duty ratios, in an electrolyte containing lanthanum chloride (LaCl₃·6H₂O), myristic acid (CH₃(CH₂)₁₂COOH), and ethanol. The equivalent electrolytic time was only 10 min. The surface morphology, chemical composition, and superhydrophobic properties of the pulse-electrodeposited surfaces were fully investigated with SEM, EDX, XRD, contact angle measurements, and time-lapse photographs of bouncing water droplets. The results show that the as-prepared surfaces have micro/nano dual-scale structures mainly consisting of La[CH₃(CH₂)₁₂COO]₃ crystals. The maximum water contact angle (WCA) is about 160.9°, and the corresponding sliding angle is about 5°. This method is time-saving and can easily be extended to other conductive materials, giving it great potential for future applications.
EVA Development and Verification Testing at NASA's Neutral Buoyancy Laboratory
NASA Technical Reports Server (NTRS)
Jairala, Juniper; Durkin, Robert
2012-01-01
As an early step in preparing for future EVAs, astronauts perform neutral buoyancy testing to develop and verify EVA hardware and operations. To date, neutral buoyancy demonstrations at NASA JSC’s Sonny Carter Training Facility have primarily evaluated assembly and maintenance tasks associated with several elements of the ISS. With the retirement of the Space Shuttle, completion of ISS assembly, and introduction of commercial participants for human transportation into space, evaluations at the NBL will take on a new focus. In this session, Juniper Jairala briefly discussed the design of the NBL and, in more detail, described the requirements and process for performing a neutral buoyancy test, including typical hardware and support equipment requirements, personnel and administrative resource requirements, examples of ISS systems and operations that are evaluated, and typical operational objectives that are evaluated. Robert Durkin discussed the new and potential types of uses for the NBL, including those by non-NASA external customers.
Processing Satellite Images on Tertiary Storage: A Study of the Impact of Tile Size on Performance
NASA Technical Reports Server (NTRS)
Yu, JieBing; DeWitt, David J.
1996-01-01
Before raw data from a satellite can be used by an Earth scientist, it must first undergo a number of processing steps including basic processing, cleansing, and geo-registration. Processing actually expands the volume of data collected by a factor of 2 or 3, and the original data is never deleted. Thus, processing and storage requirements can exceed 2 terabytes/day. Once processed data is ready for analysis, a series of algorithms (typically developed by the Earth scientists) is applied to a large number of images in a data set. The focus of this paper is how best to handle such images stored on tape under the following assumptions: (1) all images of interest to a scientist are stored on a single tape, (2) images are accessed and processed in the order that they are stored on tape, and (3) the analysis requires access to only a portion of each image, not the entire image.
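The tile-size trade-off under study reduces to simple arithmetic once a region of interest (ROI) is fixed. The sketch below (Python, with illustrative sizes only, not the paper's cost model) shows how larger tiles cut the number of tape reads while inflating the amount of unwanted data transferred:

    # Back-of-the-envelope trade-off: fewer reads vs. more excess data.
    image_side, roi_side = 8192, 1500          # pixels; values invented
    for tile in (256, 512, 1024, 2048):
        tiles_per_row = -(-roi_side // tile)   # ceil: tiles overlapping the ROI
        n_reads = tiles_per_row ** 2           # tape accesses issued
        data_read = n_reads * tile * tile      # pixels actually transferred
        overhead = data_read / roi_side**2 - 1
        print(f"tile={tile:5d}  reads={n_reads:4d}  excess data={overhead:.0%}")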
NASA Astrophysics Data System (ADS)
Cullen, Andrew T.; Price, Aaron D.
2017-04-01
Electropolymerization of pyrrole is commonly employed to fabricate intrinsically conductive polymer films that exhibit desirable electromechanical properties. Due to their monolithic nature, electroactive polypyrrole films produced via this process are typically limited to simple linear or bending actuation modes, which has hindered their application in complex actuation tasks. This initiative aims to develop the specialized fabrication methods and polymer formulations required to realize three-dimensional conductive polymer structures capable of more elaborate actuation modes. Our group has previously reported the application of the digital light processing additive manufacturing process for the fabrication of three-dimensional conductive polymer structures using ultraviolet radiation. In this investigation, we further expand upon this initial work and present an improved polymer formulation designed for digital light processing additive manufacturing using visible light. This technology enables the design of novel electroactive polymer sensors and actuators with enhanced capabilities and brings us one step closer to realizing more advanced electroactive polymer enabled devices.
Kinahan, David J; Kearney, Sinéad M; Dimov, Nikolay; Glynn, Macdara T; Ducrée, Jens
2014-07-07
The centrifugal "lab-on-a-disc" concept has proven to have great potential for process integration of bioanalytical assays, in particular where ease-of-use, ruggedness, portability, fast turn-around time and cost efficiency are of paramount importance. Yet, as all liquids residing on the disc are exposed to the same centrifugal field, an inherent challenge of these systems remains the automation of multi-step, multi-liquid sample processing and subsequent detection. In order to orchestrate the underlying bioanalytical protocols, an ample palette of rotationally and externally actuated valving schemes has been developed. While excelling with the level of flow control, externally actuated valves require interaction with peripheral instrumentation, thus compromising the conceptual simplicity of the centrifugal platform. In turn, for rotationally controlled schemes, such as common capillary burst valves, typical manufacturing tolerances tend to limit the number of consecutive laboratory unit operations (LUOs) that can be automated on a single disc. In this paper, a major advancement on recently established dissolvable film (DF) valving is presented; for the very first time, a liquid handling sequence can be controlled in response to completion of preceding liquid transfer event, i.e. completely independent of external stimulus or changes in speed of disc rotation. The basic, event-triggered valve configuration is further adapted to leverage conditional, large-scale process integration. First, we demonstrate a fluidic network on a disc encompassing 10 discrete valving steps including logical relationships such as an AND-conditional as well as serial and parallel flow control. Then we present a disc which is capable of implementing common laboratory unit operations such as metering and selective routing of flows. Finally, as a pilot study, these functions are integrated on a single disc to automate a common, multi-step lab protocol for the extraction of total RNA from mammalian cell homogenate.
Frederix, Marijke; Mingardon, Florence; Hu, Matthew; ...
2016-04-11
Biological production of chemicals and fuels using microbial transformation of sustainable carbon sources, such as pretreated and saccharified plant biomass, is a multi-step process. Typically, each segment of the workflow is optimized separately, often generating conditions that may not be suitable for integration or consolidation with the upstream or downstream steps. While significant effort has gone into developing solutions to incompatibilities at discrete steps, very few studies report the consolidation of the multi-step workflow into a single-pot reactor system. Here we demonstrate a one-pot biofuel production process that uses the ionic liquid 1-ethyl-3-methylimidazolium acetate ([C2C1Im][OAc]) for pretreatment of switchgrass biomass. [C2C1Im][OAc] is highly effective in deconstructing lignocellulose, but nonetheless leaves behind residual reagents that are toxic to standard saccharification enzymes and the microbial production host. We report the discovery of an [C2C1Im]-tolerant E. coli strain, where [C2C1Im] tolerance is bestowed by a P7Q mutation in the transcriptional regulator encoded by rcdA. We establish that the causal impact of this mutation is the derepression of a hitherto uncharacterized major facilitator family transporter, YbjJ. To develop the strain for a one-pot process we engineered this [C2C1Im]-tolerant strain to express a recently reported d-limonene production pathway. We also screened previously reported [C2C1Im]-tolerant cellulases to select one that would function across the range of E. coli cultivation conditions and expressed it in the [C2C1Im]-tolerant E. coli strain so as to secrete this [C2C1Im]-tolerant cellulase. The final strain digests pretreated biomass and uses the liberated sugars to produce the bio-jet fuel candidate precursor d-limonene in a one-pot process.
Kopský, Vojtech
2006-03-01
This article is a roadmap to a systematic calculation and tabulation of tensorial covariants for the point groups of material physics. The following are the essential steps in the described approach to tensor calculus. (i) An exact specification of the considered point groups by their embellished Hermann-Mauguin and Schoenflies symbols. (ii) Introduction of oriented Laue classes of magnetic point groups. (iii) An exact specification of matrix ireps (irreducible representations). (iv) Introduction of so-called typical (standard) bases and variables -- typical invariants, relative invariants or components of the typical covariants. (v) Introduction of Clebsch-Gordan products of the typical variables. (vi) Calculation of tensorial covariants of ascending ranks with consecutive use of tables of Clebsch-Gordan products. (vii) Opechowski's magic relations between tensorial decompositions. These steps are illustrated for groups of the tetragonal oriented Laue class D(4z) -- 4(z)2(x)2(xy) of magnetic point groups and for tensors up to fourth rank.
A method for real-time generation of augmented reality work instructions via expert movements
NASA Astrophysics Data System (ADS)
Bhattacharya, Bhaskar; Winer, Eliot
2015-03-01
Augmented Reality (AR) offers tremendous potential for a wide range of fields including entertainment, medicine, and engineering. AR allows digital models to be integrated with a real scene (typically viewed through a video camera) to provide useful information in a variety of contexts. The difficulty in authoring and modifying scenes is one of the biggest obstacles to widespread adoption of AR: 3D models must be created, textured, oriented, and positioned to create the complex overlays viewed by a user, which often requires using multiple software packages in addition to performing model format conversions. In this paper, a new authoring tool is presented which uses a novel method to capture product assembly steps performed by a user with a depth+RGB camera. Through a combination of computer vision and image processing techniques, each individual step is decomposed into objects and actions. The objects are matched to those in a predetermined geometry library and the actions are turned into animated assembly steps. The subsequent instruction set is then generated with minimal user input. A proof of concept is presented to establish the method's viability.
NASA Technical Reports Server (NTRS)
Tuttle, M. E.; Brinson, H. F.
1986-01-01
The impact of slight errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power-law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical of T300/5208 graphite-epoxy at 149 C. The selection process is described, and its individual steps are itemized.
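The dominance of the exponent n at long times is easy to see with a power-law creep compliance D(t) = D0 + D1*t^n of the kind used in such models. The Python sketch below uses invented parameter values (not T300/5208 data) to show the relative prediction error from a small error dn in the exponent growing with time:

    import numpy as np

    D0, D1, n = 0.5, 0.05, 0.2        # illustrative values only
    dn = 0.02                          # hypothetical measurement error in n

    t = np.logspace(0, 6, 7)           # 1 s up to ~11.6 days
    D_true = D0 + D1 * t**n
    D_err = D0 + D1 * t**(n + dn)      # prediction with the erroneous exponent
    rel_err = (D_err - D_true) / D_true
    for ti, e in zip(t, rel_err):      # error grows steadily with log(time)
        print(f"t = {ti:>9.0f} s  relative error = {e:6.2%}")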
NASA Astrophysics Data System (ADS)
Thongrom, Sukrit; Tirawanichakul, Yutthana; Munsit, Nantakan; Deangngam, Chalongrat
2018-02-01
We demonstrate a rapid and environmentally friendly fabrication technique that produces optically clear superhydrophobic surfaces using poly(dimethylsiloxane) (PDMS) as the sole coating material. The inert PDMS chain is transformed into a 3-D irregular solid network through a microwave plasma enhanced chemical vapor deposition (MW-PECVD) process. Thanks to the high electron density in the microwave-activated plasma, coating can be done in a single step at a rapid deposition rate, typically in well under 10 s. Deposited layers show excellent superhydrophobic properties, with water contact angles of ∼170° and roll-off angles as small as ∼3°. The plasma-deposited films can be ultrathin, with thicknesses under 400 nm, greatly diminishing optical loss. Moreover, with appropriate coating conditions, the coating layer can even enhance transmission over the entire visible spectrum due to a partial anti-reflection effect.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values at typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical properties and is evaluated in the same manner. In general, a small number of arithmetic operations, and hence a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was produced by both methods.
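Both ADI- and LOD-FDTD advance the fields implicitly one coordinate direction at a time, and each sub-step reduces to tridiagonal linear solves along grid lines. As a generic illustration (not the authors' formulation), the Thomas algorithm that forms that computational core is sketched below:

    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                  # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):         # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Demo: one implicit sub-sweep along a grid line (coefficients invented)
    n = 8
    x = thomas(np.full(n, -0.5), np.full(n, 2.0), np.full(n, -0.5), np.ones(n))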
Heat damaged forages: effects on forage energy content
USDA-ARS?s Scientific Manuscript database
Traditionally, educational materials describing the effects of heat damage within baled hays have focused on reduced bioavailability of crude protein as a result of Maillard reactions. These reactions are not simple, but actually occur in complex, multi-step pathways. Typically, the initial step inv...
Challenging evidence-based decision-making: a hypothetical case study about return to work.
Aas, Randi W; Alexanderson, Kristina
2012-03-01
A hypothetical case study about return to work was used to explore the process of translating research into practice. The method involved constructing a case study derived from the characteristics of a typical sick-listed employee with non-specific low back pain in Norway. Next, the five-step evidence-based process, including the Patient, Intervention, Co-Interventions and Outcome (PICO) framework, was applied to the case study. An inductive analysis identified 10 technical and more fundamental challenges in incorporating research into intervention decisions for an individual with comorbidity. A more dynamic, interactive approach to the evidence-based practice process is proposed. It is recommended that this approach and the 10 challenges be validated with real-life cases, as the hypothetical case study may not be replicable. Copyright © 2011 John Wiley & Sons, Ltd.
Plastic optical fibre sensor for Madeira wine monitoring
NASA Astrophysics Data System (ADS)
Novo, C.; Bilro, L.; Alberto, N.; Antunes, P.; Nogueira, R.; Pinto, J. L.
2014-08-01
Madeira wine is a fortified wine produced on Madeira Island, Portugal. Its characteristics are strongly influenced by the winemaking method used, which includes a typical and unique step called estufagem. This process consists of heating the wine up to 55 °C for at least 3 months. In this paper, the characterization of the sensor for the pilot-scale estufagem facility installed at Madeira University is presented, the device being an optimization of a previous version. The response of the sensor was tested for colour and refractive index, showing good performance. Madeira wine with different estufagem times was also analysed.
Prototype wash water renovation system integration with government-furnished wash fixture
NASA Technical Reports Server (NTRS)
1984-01-01
The requirement of a significant number of proposed life sciences experiments in Shuttle payloads for wash water to support cleansing operations has provided the incentive to develop a technique for wash water renovation. A prototype wash water renovation system was investigated that has the capability to process the waste water and return it to a state adequate for reuse in a typical cleansing fixture designed to support life science experiments. The resulting technology is to support other development efforts pertaining to water reclamation by serving as a pretreatment step for subsequent reclamation procedures.
SEM evaluation of metallization on semiconductors. [Scanning Electron Microscope
NASA Technical Reports Server (NTRS)
Fresh, D. L.; Adolphsen, J. W.
1974-01-01
A test method for the evaluation of metallization on semiconductors is presented and discussed. The method has been prepared in MIL-STD format for submittal as a proposed addition to MIL-STD-883. It is applicable to discrete devices and to integrated circuits and specifically addresses batch-process oriented defects. Quantitative accept/reject criteria are given for contact windows, other oxide steps, and general interconnecting metallization. Figures are provided that illustrate typical types of defects. Apparatus specifications, sampling plans, and specimen preparation and examination requirements are described. Procedures for glassivated devices and for multi-metal interconnection systems are included.
A Case Study of a Combat Aircraft’s Single Hit Vulnerability
1986-09-01
The FMECA procedure is performed in two steps: (1) a Failure Mode and Effects Analysis (FMEA) and (2) a Damage Mode and Effects Analysis (DMEA).
Occupational asthma in the commercial fishing industry: a case series and review of the literature.
Lucas, David; Lucas, Raymond; Boniface, Keith; Jegaden, Dominique; Lodde, Brice; Dewitte, Jean-Ariel
2010-01-01
We present a case series of snow crab-induced occupational asthma (OA) from a fishing and processing vessel, followed by a review of OA in the commercial fishing industry. OA is typically caused by an IgE-mediated hypersensitivity reaction after respiratory exposure to aerosolized fish and shellfish proteins. It most commonly occurs due to crustaceans, but molluscs and fin fish are implicated as well. Standard medical therapy for asthma may be used acutely; however, steps to reduce atmospheric allergen concentrations in the workplace have proven to be preventive for this disease.
cisTEM, user-friendly software for single-particle image processing.
Grant, Timothy; Rohou, Alexis; Grigorieff, Nikolaus
2018-03-07
We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200k-300k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. © 2018, Grant et al.
Melendez, Johan H; Santaus, Tonya M; Brinsley, Gregory; Kiang, Daniel; Mali, Buddha; Hardick, Justin; Gaydos, Charlotte A; Geddes, Chris D
2016-10-01
Nucleic acid-based detection of gonorrhea infections typically requires a two-step process involving isolation of the nucleic acid, followed by detection of the genomic target, often using polymerase chain reaction (PCR)-based approaches. In an effort to improve on current detection approaches, we have developed a unique two-step microwave-accelerated approach for rapid extraction and detection of Neisseria gonorrhoeae (gonorrhea, GC) DNA. Our approach is based on the use of highly focused microwave radiation to rapidly lyse bacterial cells, release, and subsequently fragment microbial DNA. The DNA target is then detected by a process known as microwave-accelerated metal-enhanced fluorescence (MAMEF), an ultra-sensitive direct DNA detection analytical technique. In the current study, we show that highly focused microwaves at 2.45 GHz, using 12.3-mm gold film equilateral triangles, are able to rapidly lyse both bacterial cells and fragment DNA in a time- and microwave power-dependent manner. Detection of the extracted DNA can be performed by MAMEF, without the need for DNA amplification, in less than 10 min total time, or by other PCR-based approaches. Collectively, the use of a microwave-accelerated method for the release and detection of DNA represents a significant step forward toward the development of a point-of-care (POC) platform for detection of gonorrhea infections. Copyright © 2016 Elsevier Inc. All rights reserved.
Bowersock, Collin D; Willy, Richard W; DeVita, Paul; Willson, John D
2017-03-01
Anterior cruciate ligament reconstruction is associated with early-onset knee osteoarthritis. Running is a typical activity following this surgery, but elevated knee joint contact forces are thought to contribute to osteoarthritis degenerative processes. It is therefore clinically relevant to identify interventions that reduce contact forces during running among individuals after anterior cruciate ligament reconstruction. The primary purpose of this study was to evaluate the effect of reducing step length during running on patellofemoral and tibiofemoral joint contact forces among people with a history of anterior cruciate ligament reconstruction. Inter-limb knee joint contact force differences during running were also examined. Eighteen individuals, at an average of 54.8 months after unilateral anterior cruciate ligament reconstruction, ran in 3 step length conditions (preferred, -5%, -10%). Bilateral patellofemoral, tibiofemoral, and medial tibiofemoral compartment peak force, loading rate, impulse, and impulse per kilometer were evaluated between step length conditions and limbs using separate 2-factor analyses of variance. Reducing step length 5% decreased patellofemoral, tibiofemoral, and medial tibiofemoral compartment peak force, impulse, and impulse per kilometer bilaterally. A 10% step length reduction further decreased peak forces and force impulses, but did not further reduce force impulses per kilometer. Tibiofemoral joint impulse, impulse per kilometer, and patellofemoral joint loading rate were lower in the previously injured limb compared to the contralateral limb. Running with a shorter step length is a feasible clinical intervention to reduce knee joint contact forces during running among people with a history of anterior cruciate ligament reconstruction. Copyright © 2017 Elsevier Ltd. All rights reserved.
Top Ten Reasons for DEOX as a Front End to Pyroprocessing
DOE Office of Scientific and Technical Information (OSTI.GOV)
B.R. Westphal; K.J. Bateman; S.D. Herrmann
A front end step is being considered to augment chopping during the treatment of spent oxide fuel by pyroprocessing. The front end step, termed DEOX for its emphasis on decladding via oxidation, employs high temperatures to promote the oxidation of UO2 to U3O8 via an oxygen carrier gas. During oxidation, the spent fuel experiences a 30% increase in lattice structure volume, resulting in the separation of fuel from cladding with a reduced particle size. A potential added benefit of DEOX is the removal of fission products, either via direct release from the broken fuel structure or via oxidation and volatilization by the high temperature process. Fuel element chopping is the baseline operation to prepare spent oxide fuel for an electrolytic reduction step. Typical chopping lengths range from 1 to 5 mm for both individual elements and entire assemblies. During electrolytic reduction, uranium oxide is reduced to metallic uranium via a lithium molten salt. An electrorefining step is then performed to separate a majority of the fission products from the recoverable uranium. Although DEOX is based on a low temperature oxidation cycle near 500 °C, additional conditions have been tested to distinguish their effects on the process [1]. Both oxygen and air have been utilized during the oxidation portion, followed by vacuum conditions at temperatures as high as 1200 °C. In addition, the effects of cladding on fission product removal have also been investigated with released fuel at temperatures greater than 500 °C.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation, as sketched below. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, before the amount of precipitation is estimated separately on wet days. This process reproduces precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
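A minimal sketch of such a two-step scheme in Python (scikit-learn) on synthetic station data; the predictors, the log-scale amount model, and the way the two steps are combined into a single estimate are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)

# Synthetic predictors for n station-days: easting, northing, elevation.
n = 500
X = rng.uniform(0, 1, size=(n, 3))

# Synthetic precipitation: ~60% dry days; wet amounts loosely tied to elevation.
wet = rng.random(n) < 0.4
amount = np.where(wet, np.exp(1.0 + 2.0 * X[:, 2] + rng.normal(0, 0.3, n)), 0.0)

# Step 1: model precipitation occurrence (wet vs dry) with logistic regression.
occ_model = LogisticRegression().fit(X, wet)

# Step 2: model precipitation amount on wet days only (log scale here).
amt_model = LinearRegression().fit(X[wet], np.log(amount[wet]))

# Estimate at unobserved grid points: occurrence probability times
# conditional wet-day amount (one simple way to combine the two steps).
X_grid = rng.uniform(0, 1, size=(10, 3))
p_wet = occ_model.predict_proba(X_grid)[:, 1]
est = p_wet * np.exp(amt_model.predict(X_grid))
print(est)
```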
Planning bioinformatics workflows using an expert system.
Chen, Xiaoling; Chang, Jeffrey T
2017-04-15
Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system composed of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. jeffrey.t.chang@uth.tmc.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
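To make the backward-chaining idea concrete, the following is a toy sketch; the rule table, goal names, and planning function are invented stand-ins, not BETSY's actual knowledge base, data model, or inference engine.

```python
# Each goal maps to a list of alternative prerequisite sets (alternative rules).
RULES = {
    "aligned_reads": [["fastq", "reference_genome"]],
    "expression_matrix": [["aligned_reads", "gene_annotation"]],
    "signature_scores": [["expression_matrix"]],
}

def plan(goal, available, chain=None):
    """Return an ordered list of steps that derives `goal` from `available`."""
    chain = chain or []
    if goal in available:
        return chain                      # nothing to do: datum already exists
    for prereqs in RULES.get(goal, []):
        sub = chain
        ok = True
        for p in prereqs:                 # recursively satisfy each prerequisite
            sub = plan(p, available, sub)
            if sub is None:
                ok = False
                break
        if ok:
            return sub + [goal]           # all prerequisites met: apply this rule
    return None                           # goal cannot be derived from the inputs

print(plan("signature_scores", {"fastq", "reference_genome", "gene_annotation"}))
# -> ['aligned_reads', 'expression_matrix', 'signature_scores']
```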
A 32 x 32 capacitive micromachined ultrasonic transducer array manufactured in standard CMOS.
Lemmerhirt, David F; Cheng, Xiaoyang; White, Robert; Rich, Collin A; Zhang, Man; Fowlkes, J Brian; Kripfgans, Oliver D
2012-07-01
As ultrasound imagers become increasingly portable and lower cost, breakthroughs in transducer technology will be needed to provide high-resolution, real-time 3-D imaging while maintaining the affordability needed for portable systems. This paper presents a 32 x 32 ultrasound array prototype, manufactured using a CMUT-in-CMOS approach whereby ultrasonic transducer elements and readout circuits are integrated on a single chip using a standard integrated circuit manufacturing process in a commercial CMOS foundry. Only blanket wet-etch and sealing steps are added to complete the MEMS devices after the CMOS process. This process typically yields better than 99% working elements per array, with less than ±1.5 dB variation in receive sensitivity among the 1024 individually addressable elements. The CMUT pulse-echo frequency response is typically centered at 2.1 MHz with a -6 dB fractional bandwidth of 60%, and elements are arranged on a 250 μm hexagonal grid (less than half-wavelength pitch). Multiplexers and CMOS buffers within the array are used to make on-chip routing manageable, reduce the number of physical output leads, and drive the transducer cable. The array has been interfaced to a commercial imager as well as a set of custom transmit and receive electronics, and volumetric images of nylon fishing line targets have been produced.
Feasibility of Surfactant-Free Supported Emulsion Liquid Membrane Extraction
NASA Technical Reports Server (NTRS)
Hu, Shih-Yao B.; Li, Jin; Wiencek, John M.
2001-01-01
Supported emulsion liquid membrane (SELM) is an effective means to conduct liquid-liquid extraction. SELM extraction is particularly attractive for separation tasks in the microgravity environment where density difference between the solvent and the internal phase of the emulsion is inconsequential and a stable dispersion can be maintained without surfactant. In this research, dispersed two-phase flow in SELM extraction is modeled using the Lagrangian method. The results show that SELM extraction process in the microgravity environment can be simulated on earth by matching the density of the solvent and the stripping phase. Feasibility of surfactant-free SELM (SFSELM) extraction is assessed by studying the coalescence behavior of the internal phase in the absence of the surfactant. Although the contacting area between the solvent and the internal phase in SFSELM extraction is significantly less than the area provided by regular emulsion due to drop coalescence, it is comparable to the area provided by a typical hollow-fiber membrane. Thus, the stripping process is highly unlikely to become the rate-limiting step in SFSELM extraction. SFSELM remains an effective way to achieve simultaneous extraction and stripping and is able to eliminate the equilibrium limitation in the typical solvent extraction processes. The SFSELM design is similar to the supported liquid membrane design in some aspects.
NASA Astrophysics Data System (ADS)
Cerezo, J.; Vandendael, I.; Posner, R.; de Wit, J. H. W.; Mol, J. M. C.; Terryn, H.
2016-03-01
This study investigates the effect of different alkaline, acidic and thermal pre-conditioning treatments applied to different Al alloy surfaces. The obtained results are compared to the characteristics of Zr-based conversion coatings that were subsequently generated on top of these substrates. Focus is placed on typical elemental distributions on the sample surfaces, in particular on the amount of precipitated functional additives such as Cu species that are present in the substrate matrix as well as in the conversion bath solutions. To this aim, Field Emission Auger Electron spectra, depth profiles and surface maps with superior local resolution were acquired and compared to scanning electron microscopy images of the samples. The results show how de-alloying processes, which occur at and around intermetallic particles in the Al matrix during typical industrial alkaline or acidic cleaning procedures, provide a significant source of crystallization cores for any following coating processes. This is particularly the case for Cu species, as the resulting local Cu structures on the surface strongly affect the film formation and composition of state-of-the-art Zr-based films. The findings are highly relevant for industrial treatments of aluminium surfaces, especially for those that undergo corrosion protection and painting process steps prior to usage.
NASA Astrophysics Data System (ADS)
Korir, Peter C.; Dejene, Francis B.
2018-04-01
In this work, a two-step growth process was used to prepare Cu(In,Ga)Se2 thin films for solar cell applications. The first step involves deposition of Cu-In-Ga precursor films, followed by a selenization process under vacuum using elemental selenium vapor to form the Cu(In,Ga)Se2 film. The growth process was done at a fixed temperature of 515 °C for 45, 60 and 90 min to control film thickness and gallium incorporation into the absorber layer film. The X-ray diffraction (XRD) pattern confirms single-phase Cu(In,Ga)Se2 film for all three samples, and no secondary phases were observed. A shift in the diffraction peaks to higher 2θ values is observed for the thin films compared to that of pure CuInSe2. The surface morphology of the film grown for 60 min was characterized by the presence of uniform large-grain particles, which are typical for device-quality material. Photoluminescence spectra show the shifting of emission peaks to higher energies for longer durations of selenization, attributed to the incorporation of more gallium into the CuInSe2 crystal structure. Electron probe microanalysis (EPMA) revealed a uniform distribution of the elements across the surface of the film. The elemental ratios Cu/(In + Ga) and Se/(Cu + In + Ga) strongly depend on the selenization time. The Cu/(In + Ga) ratio for the 60 min film is 0.88, which is in the range of values (0.75-0.98) for the best solar cell device performances.
Laser-based gluing of diamond-tipped saw blades
NASA Astrophysics Data System (ADS)
Hennigs, Christian; Lahdo, Rabi; Springer, André; Kaierle, Stefan; Hustedt, Michael; Brand, Helmut; Wloka, Richard; Zobel, Frank; Dültgen, Peter
2016-03-01
To process natural stone such as marble or granite, saw blades equipped with wear-resistant diamond grinding segments are used, typically joined to the blade by brazing. In case of damage or wear, they must be exchanged. Due to the large energy input during thermal loosening and subsequent brazing, the repair causes extended heat-affected zones with serious microstructure changes, resulting in shape distortions and disadvantageous stress distributions. Consequently, axial run-out deviations and cutting losses increase. In this work, a new near-infrared laser-based process chain is presented to overcome the deficits of conventional brazing-based repair of diamond-tipped steel saw blades. Thus, additional tensioning and straightening steps can be avoided. The process chain starts with thermal debonding of the worn grinding segments, using a continuous-wave laser to heat the segments gently and to exceed the adhesive's decomposition temperature. Afterwards, short-pulsed laser radiation removes remaining adhesive from the blade in order to achieve clean joining surfaces. The third step is roughening and activation of the joining surfaces, again using short-pulsed laser radiation. Finally, the grinding segments are glued onto the blade with a defined adhesive layer, using continuous-wave laser radiation. Here, the adhesive is heated to its curing temperature by irradiating the respective grinding segment, ensuring minimal thermal influence on the blade. For demonstration, a prototype unit was constructed to perform the different steps of the process chain on-site at the saw-blade user's facilities. This unit was used to re-equip a saw blade with a complete set of grinding segments. This saw blade was used successfully to cut different materials, amongst others granite.
Mapping of Technological Opportunities-Labyrinth Seal Example
NASA Technical Reports Server (NTRS)
Clarke, Dana W., Sr.
2006-01-01
All technological systems evolve based on evolutionary sequences that have repeated throughout history and can be abstracted from the history of technology and patents. These evolutionary sequences represent objective patterns and provide considerable insights that can be used to proactively model future seal concepts. This presentation provides an overview of how to map seal technology into the future using a labyrinth seal example. The mapping process delivers functional descriptions of sequential changes in market/consumer demand, from today's current paradigm to the next major paradigm shift. The future paradigm is developed according to a simple formula: the future paradigm is free of all flaws associated with the current paradigm; it is as far into the future as we can see. Although revolutionary, the vision of the future paradigm is typically not immediately or completely realizable, nor is it normally seen as practical. Several reasons prevent immediate and complete practical application: 1) some of the required technological or business resources and knowledge are not available; 2) the availability of other technological or business resources is limited; and/or 3) some necessary knowledge has not been completely developed. These factors tend to drive the Total Cost of Ownership or Utilization out of an acceptable range; revealing the reasons for the high Total Cost of Ownership or Utilization provides a clear understanding of research opportunities essential for future developments and defines the current limits of the immediately achievable improvements. The typical roots of high Total Cost of Ownership or Utilization lie in the limited availability or even the absence of essential resources and knowledge necessary for its realization. In order to overcome this obstacle, step-by-step modification of the current paradigm is pursued to evolve from the current situation toward the ideal future, i.e., evolution rather than revolution. A key point is that evolutionary stages are mapped to show step-by-step evolution from the current paradigm to the next major paradigm.
Direct PCR amplification of forensic touch and other challenging DNA samples: A review.
Cavanaugh, Sarah E; Bathrick, Abigail S
2018-01-01
DNA evidence sample processing typically involves DNA extraction, quantification, and STR amplification; however, DNA loss can occur at both the DNA extraction and quantification steps, which is not ideal for forensic evidence containing low levels of DNA. Direct PCR amplification of forensic unknown samples has been suggested as a means to circumvent extraction and quantification, thereby retaining the DNA typically lost during those procedures. Direct PCR amplification is a method in which a sample is added directly to an amplification reaction without being subjected to prior DNA extraction, purification, or quantification. It allows for maximum quantities of DNA to be targeted, minimizes opportunities for error and contamination, and reduces the time and monetary resources required to process samples, although data analysis may take longer as the increased DNA detection sensitivity of direct PCR may lead to more instances of complex mixtures. ISO 17025 accredited laboratories have successfully implemented direct PCR for limited purposes (e.g., high-throughput databanking analysis), and recent studies indicate that direct PCR can be an effective method for processing low-yield evidence samples. Despite its benefits, direct PCR has yet to be widely implemented across laboratories for the processing of evidentiary items. While forensic DNA laboratories are always interested in new methods that will maximize the quantity and quality of genetic information obtained from evidentiary items, there is often a lag between the advent of useful methodologies and their integration into laboratories. Delayed implementation of direct PCR of evidentiary items can be attributed to a variety of factors, including regulatory guidelines that prevent laboratories from omitting the quantification step when processing forensic unknown samples, as is the case in the United States, and, more broadly, a reluctance to validate a technique that is not widely used for evidence samples. The advantages of direct PCR of forensic evidentiary samples justify a re-examination of the factors that have delayed widespread implementation of this method and of the evidence supporting its use. In this review, the current and potential future uses of direct PCR in forensic DNA laboratories are summarized. Copyright © 2017 Elsevier B.V. All rights reserved.
22. INTERIOR VIEW OF TYPICAL OIL TANK, PORT SIDE, LOOKING ...
22. INTERIOR VIEW OF TYPICAL OIL TANK, PORT SIDE, LOOKING FORWARD. NOTE SHIP'S FRAMING MEMBERS AND UNIVERSAL JOINT IN SHAFT FOR PUMPING VALVE AT BOTTOM OF TANK. THE ACCESS LADDER STEPS ARE MARKED WITH LEVEL INDICATIONS. - Ship "Falls of Clyde", Hawaii Maritime Center, Pier 7, Honolulu, Honolulu County, HI
Reduced Order Models for Reactions of Energetic Materials
NASA Astrophysics Data System (ADS)
Kober, Edward
The formulation of reduced order models for the reaction chemistry of energetic materials under high pressures is needed for the development of mesoscale models in the areas of initiation, deflagration and detonation. Phenomenologically, 4-8 step models have been formulated from cook-off data by analyzing the temperature rise of heated samples. Reactive molecular dynamics simulations have been used to simulate many of these processes, but reducing the results of those simulations to simple models has not been achieved. Typically, these efforts have focused on identifying molecular species and detailing specific chemical reactions. An alternative approach is presented here that is based on identifying the coordination geometry of each atom in the simulation and tracking classes of reactions by correlated changes in these geometries. Here, every atom and type of reaction is documented for every time step; no information is lost to unsuccessful molecular identification. Principal Component Analysis methods can then be used to map out the effective chemical reaction steps. For HMX and TATB decompositions simulated with ReaxFF, 90% of the data can be explained by 4-6 steps, generating models similar to those from the cook-off analysis. By performing these simulations at a variety of temperatures and pressures, both the activation and reaction energies and volumes can then be extracted.
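A hedged sketch of the dimensionality-reduction step described above: PCA applied to per-time-step counts of reaction classes. The count matrix here is synthetic; in an actual analysis each column would tally one class of correlated coordination-geometry changes from the ReaxFF trajectory.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_steps, n_classes = 2000, 40

# Fake cumulative counts: a few underlying "effective reactions" mixed
# into many observed reaction classes, plus measurement-like noise.
latent = rng.gamma(2.0, 1.0, size=(n_steps, 5)).cumsum(axis=0)
mixing = rng.random((5, n_classes))
counts = latent @ mixing + rng.normal(0, 0.5, (n_steps, n_classes))

# PCA reveals how many effective reaction steps explain most of the data.
pca = PCA(n_components=8).fit(counts)
explained = pca.explained_variance_ratio_.cumsum()
print("components needed for 90% variance:", int(np.searchsorted(explained, 0.90) + 1))
```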
Dynamics of aesthetic appreciation
NASA Astrophysics Data System (ADS)
Carbon, Claus-Christian
2012-03-01
Aesthetic appreciation is a complex cognitive process with inherent aspects of cold as well as hot cognition. Empirical research from the last decades has shown that evaluations of aesthetic appreciation are highly reliable. Most frequently, facial attractiveness was used as the corner case for investigating aesthetic appreciation. Evaluating facial attractiveness indeed shows high internal consistency and impressively high inter-rater reliabilities, even across cultures. Although this indicates general and stable mechanisms underlying aesthetic appreciation, it is also obvious that our taste for specific objects changes dynamically. Aesthetic appreciation of artificial object categories, such as fashion, design or art, is inherently very dynamic. Gaining insights into the cognitive mechanisms that trigger and enable corresponding changes of aesthetic appreciation is of particular interest for research, as this will provide possibilities for modeling aesthetic appreciation over longer durations and from a dynamic perspective. The present paper refers to a recent, dynamically self-adapting two-step model ("the dynamical two-step-model of aesthetic appreciation"), which accounts for typical dynamics of aesthetic appreciation found in different research areas such as art history, philosophy and psychology. The first step assumes singular creative sources creating and establishing innovative material towards which, in a second step, people adapt by integrating it into their visual habits. This inherently leads to dynamic changes of the beholders' aesthetic appreciation.
Descriptive Statistics of the Genome: Phylogenetic Classification of Viruses.
Hernandez, Troy; Yang, Jie
2016-10-01
The typical process for classifying and submitting a newly sequenced virus to the NCBI database involves two steps. First, a BLAST search is performed to determine likely family candidates. That is followed by checking the candidate families with the pairwise sequence alignment tool for similar species. The submitter's judgment is then used to determine the most likely species classification. The aim of this article is to show that this process can be automated into a fast, accurate, one-step process using the proposed alignment-free method and properly implemented machine learning techniques. We present a new family of alignment-free vectorizations of the genome, the generalized vector, that maintains the speed of existing alignment-free methods while outperforming all available methods. This new alignment-free vectorization uses the frequency of genomic words (k-mers), as is done in the composition vector, and incorporates descriptive statistics of those k-mers' positional information, as inspired by the natural vector. We analyze five different characterizations of genome similarity using k-nearest neighbor classification and evaluate these on two collections of viruses totaling over 10,000 viruses. We show that our proposed method performs better than, or as well as, other methods at every level of the phylogenetic hierarchy. The data and R code are available upon request.
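The following sketch illustrates the general flavor of the approach: an alignment-free k-mer vectorization augmented with a simple positional statistic, classified by k-nearest neighbors. It is written in Python for consistency with the other examples in this collection (the authors provide R code), and the toy "genomes" and the single mean-position statistic are simplifying assumptions, not the paper's full generalized vector.

```python
from collections import defaultdict
from itertools import product
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

K = 3
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
INDEX = {km: i for i, km in enumerate(KMERS)}

def vectorize(seq):
    """Frequency of each k-mer plus its mean normalized position."""
    freq = np.zeros(len(KMERS))
    pos = defaultdict(list)
    for i in range(len(seq) - K + 1):
        km = seq[i:i + K]
        if km in INDEX:
            freq[INDEX[km]] += 1
            pos[km].append(i / max(len(seq) - K, 1))
    freq /= max(freq.sum(), 1)
    meanpos = np.array([np.mean(pos[km]) if pos[km] else 0.0 for km in KMERS])
    return np.concatenate([freq, meanpos])

# Toy training set: two "families" with different composition biases.
rng = np.random.default_rng(2)
def fake_genome(bias):
    return "".join(rng.choice(list("ACGT"), p=bias, size=300))

X = [vectorize(fake_genome([0.4, 0.1, 0.1, 0.4])) for _ in range(20)] + \
    [vectorize(fake_genome([0.1, 0.4, 0.4, 0.1])) for _ in range(20)]
y = ["family_A"] * 20 + ["family_B"] * 20
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([vectorize(fake_genome([0.4, 0.1, 0.1, 0.4]))]))  # -> family_A
```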
Tran, Benjamin; Grosskopf, Vanessa; Wang, Xiangdan; Yang, Jihong; Walker, Don; Yu, Christopher; McDonald, Paul
2016-03-18
Purification processes for therapeutic antibodies typically exploit multiple and orthogonal chromatography steps in order to remove impurities, such as host-cell proteins. While the majority of host-cell proteins are cleared through purification processes, individual host-cell proteins such as Phospholipase B-like 2 (PLBL2) are more challenging to remove and can persist into the final purification pool even after multiple chromatography steps. With packed-bed chromatography runs using host-cell protein ELISAs and mass spectrometry analysis, we demonstrated that different therapeutic antibodies interact to varying degrees with host-cell proteins in general, and PLBL2 specifically. We then used a high-throughput Protein A chromatography method to further examine the interaction between our antibodies and PLBL2. Our results showed that the co-elution of PLBL2 during Protein A chromatography is highly dependent on the individual antibody and PLBL2 concentration in the chromatographic load. Process parameters such as antibody resin load density and pre-elution wash conditions also influence the levels of PLBL2 in the Protein A eluate. Furthermore, using surface plasmon resonance, we demonstrated that there is a preference for PLBL2 to interact with IgG4 subclass antibodies compared to IgG1 antibodies. Copyright © 2016 Elsevier B.V. All rights reserved.
Development of a method to analyze orthopaedic practice expenses.
Brinker, M R; Pierce, P; Siegel, G
2000-03-01
The purpose of the current investigation was to present a standard method by which an orthopaedic practice can analyze its practice expenses. To accomplish this, a five-step process was developed to analyze practice expenses using a modified version of activity-based costing. In this method, general ledger expenses were assigned to 17 activities that encompass all the tasks and processes typically performed in an orthopaedic practice. These 17 activities were identified in a practice expense study conducted for the American Academy of Orthopaedic Surgeons. To calculate the cost of each activity, financial data were used from a group of 19 orthopaedic surgeons in Houston, Texas. The activities that consumed the largest portion of the employee work force (person hours) were service patients in office (25.0% of all person hours), maintain medical records (13.6% of all person hours), and resolve collection disputes and rebill charges (12.3% of all person hours). The activities that comprised the largest portion of the total expenses were maintain facility (21.4%), service patients in office (16.0%), and sustain business by managing and coordinating practice (13.8%). The five-step process of analyzing practice expenses was relatively easy to perform and it may be used reliably by most orthopaedic practices.
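A minimal illustration of the allocation idea behind activity-based costing, with made-up ledger categories, activities, and allocation fractions; the study's 17 activities and its actual mappings are not reproduced here.

```python
# Hypothetical general-ledger totals for one practice year.
ledger = {"salaries": 600_000, "rent": 120_000, "supplies": 80_000}

# Fraction of each ledger line consumed by each activity (each row sums to 1).
allocation = {
    "salaries": {"service_patients": 0.25, "medical_records": 0.14, "other": 0.61},
    "rent":     {"maintain_facility": 1.00},
    "supplies": {"service_patients": 0.60, "other": 0.40},
}

# Assign ledger expenses to activities and total them per activity.
activity_cost = {}
for line, total in ledger.items():
    for activity, frac in allocation[line].items():
        activity_cost[activity] = activity_cost.get(activity, 0.0) + total * frac

for activity, cost in sorted(activity_cost.items(), key=lambda kv: -kv[1]):
    print(f"{activity:20s} ${cost:,.0f}")
```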
Etch challenges for DSA implementation in CMOS via patterning
NASA Astrophysics Data System (ADS)
Pimenta Barros, P.; Barnola, S.; Gharbi, A.; Argoud, M.; Servin, I.; Tiron, R.; Chevalier, X.; Navarro, C.; Nicolet, C.; Lapeyre, C.; Monget, C.; Martinez, E.
2014-03-01
This paper reports on the etch challenges to overcome for the implementation of directed self-assembly (DSA) of PS-b-PMMA block copolymers in CMOS via patterning. Our process is based on a graphoepitaxy approach, employing an industrial PS-b-PMMA block copolymer (BCP) from Arkema with a cylindrical morphology. The process consists of the following steps: a) DSA of block copolymers inside guiding patterns, b) PMMA removal, c) brush layer opening and finally d) PS pattern transfer into typical MEOL or BEOL stacks. All results presented here were obtained on Leti's 300 mm DSA pilot line. The first etch challenge to overcome for BCP transfer is removing all PMMA selectively to the PS block. In our process baseline, an acetic acid treatment is carried out to develop the PMMA domains. However, this wet development has shown some limitations in terms of resist compatibility and will not be appropriate for lamellar BCPs. That is why we also investigate the possibility of removing PMMA by dry etching only. In this work, the potential of dry PMMA removal using CO-based chemistries is shown and compared to wet development. The advantages and limitations of each approach are reported. The second crucial step is the etching of the brush layer (PS-r-PMMA) through a PS mask. We have optimized this step in order to preserve the PS patterns in terms of CD, hole features and film thickness. Several integration flows with complex stacks are explored for contact shrinking by DSA. A study of CD uniformity was performed to evaluate the capabilities of the DSA approach after graphoepitaxy and after etching.
Sensation-to-Cognition Cortical Streams in Attention-Deficit/Hyperactivity Disorder
Carmona, Susana; Hoekzema, Elseline; Castellanos, Francisco X.; García-García, David; Lage-Castellanos, Agustín; Van Dijk, Koene R. A.; Navas-Sánchez, Francisco J.; Martínez, Kenia; Desco, Manuel; Sepulcre, Jorge
2015-01-01
We sought to determine whether functional connectivity streams that link sensory, attentional, and higher-order cognitive circuits are atypical in attention-deficit/hyperactivity disorder (ADHD). We applied a graph-theory method to the resting-state functional magnetic resonance imaging data of 120 children with ADHD and 120 age-matched typically developing children (TDC). Starting in unimodal primary cortex—visual, auditory, and somatosensory—we used stepwise functional connectivity to calculate functional connectivity paths at discrete numbers of relay stations (or link-step distances). First, we characterized the functional connectivity streams that link sensory, attentional, and higher-order cognitive circuits in TDC and found that these systems do not reach the level of integration achieved by adults. Second, we searched for stepwise functional connectivity differences between children with ADHD and TDC. We found that, at the initial steps of sensory functional connectivity streams, patients display significant enhancements of connectivity degree within neighboring areas of primary cortex, while connectivity to attention-regulatory areas is reduced. Third, at subsequent link-step distances from primary sensory cortex, children with ADHD show decreased connectivity to executive processing areas and an increased degree of connections to default mode regions. Fourth, in examining medication histories in children with ADHD, we found that children medicated with psychostimulants present functional connectivity streams with a higher degree of connectivity to regions subserving attentional and executive processes compared to medication-naïve children. We conclude that a predominance of local sensory processing and lesser influx of information to attentional and executive regions may reduce the ability to organize and control the balance between external and internal sources of information in ADHD. PMID:25821110
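A schematic of the stepwise functional connectivity computation, under simplifying assumptions: a random region-by-region correlation matrix stands in for real resting-state data, and connectivity degree at link-step distance k is obtained by propagating seed weights through the thresholded, binarized graph k times.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
corr = np.corrcoef(rng.normal(size=(n, 200)))  # fake region-by-region matrix
A = (corr > 0.15).astype(float)                # threshold and binarize
np.fill_diagonal(A, 0)                         # no self-connections

seeds = np.zeros(n)
seeds[:5] = 1                                  # e.g. primary visual regions

# Stepwise connectivity degree at link-step distances 1..4: number of
# paths of length k from the seed regions into each node of the graph.
walk = seeds.copy()
for k in range(1, 5):
    walk = walk @ A
    print(f"step {k}: mean stepwise degree = {walk.mean():.1f}")
```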
Caredda, Marco; Addis, Margherita; Pes, Massimo; Fois, Nicola; Sanna, Gabriele; Piredda, Giovanni; Sanna, Gavino
2018-06-01
The aim of this work was to measure the physico-chemical and colorimetric parameters of ovaries from Mugil cephalus caught in the Tortolì lagoon (south-east coast of Sardinia) along the steps of the manufacturing process of Bottarga, together with the rheological parameters of the final product. A lowering of all CIELab coordinates (lightness, redness and yellowness) was observed during the manufacturing process. All CIELab parameters were used to build a Linear Discriminant Analysis (LDA) predictive model able to determine in real time whether the roes had been subjected to a freezing process, with a prediction success of 100%. This model could be used to identify the origin of the roes, since only the imported ones are frozen. The major changes of all the studied parameters (p < 0.05) were noted in the drying step rather than in the salting step. After processing, Bottarga was characterized by a pH value of 5.46 (CV = 2.8) and a moisture content of 25% (CV = 8), whereas the typical percentage amounts of proteins, fat and NaCl, calculated on the dried weight, were 56 (CV = 2), 34 (CV = 3) and 3.6 (CV = 17), respectively. The physico-chemical changes of the roes during the manufacturing process were most substantial for moisture, which decreased by 28%, whereas the protein and fat contents on the dried weight decreased by 3% and 2%, respectively. The NaCl content increased by 3.1%. Principal Component Analyses (PCA) were also performed on all data to establish trends and relationships among the parameters. Hardness and consistency of Bottarga were negatively correlated with the moisture content (r = -0.87 and r = -0.88, respectively), while its adhesiveness was negatively correlated with the fat content (r = -0.68). Copyright © 2018. Published by Elsevier Ltd.
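A small sketch of the discriminant idea, using synthetic CIELab readings in place of the study's measurements; the class means, spreads, and the test sample below are invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)

# Fake L*, a*, b* readings: frozen roes shifted toward lower (duller) values.
fresh  = rng.normal([55, 25, 30], [3, 2, 2], size=(40, 3))
frozen = rng.normal([48, 20, 24], [3, 2, 2], size=(40, 3))

X = np.vstack([fresh, frozen])
y = ["fresh"] * 40 + ["frozen"] * 40

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[49, 21, 25]]))  # -> ['frozen'] for a dull, dark sample
```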
Enhancing Health-Care Services with Mixed Reality Systems
NASA Astrophysics Data System (ADS)
Stantchev, Vladimir
This work presents a development approach for mixed reality systems in health care. Although health-care service costs account for 5-15% of GDP in developed countries, the sector has been remarkably resistant to the introduction of technology-supported optimizations. Digitalization of data storage and processing in the form of electronic patient records (EPR) and hospital information systems (HIS) is a first necessary step. Contrary to typical business functions (e.g., accounting or CRM), a health-care service is characterized by a knowledge-intensive decision process and the usage of specialized devices ranging from stethoscopes to complex surgical systems. Mixed reality systems can help fill the gap between highly patient-specific health-care services that need a variety of technical resources on the one side and the streamlined process flow that typical process-supporting information systems expect on the other side. To achieve this task, we present a development approach that includes an evaluation of existing tasks and processes within the health-care service and the information systems that currently support the service, as well as identification of decision paths and actions that can benefit from mixed reality systems. The result is a mixed reality system that allows a clinician to monitor the elements of the physical world and to blend them with virtual information provided by the systems. He or she can also plan and schedule treatments and operations in the digital world depending on status information from this mixed reality.
NASA Astrophysics Data System (ADS)
Bellanger, Véronique; Courcelle, Arnaud; Petit, Alain
2004-09-01
A program to compute the two-step excitation of sodium atoms (3S→3P→4D) using the density-matrix formalism is presented. The BEACON program calculates population evolution and the number of photons emitted by fluorescence from the 3P, 4D, 4P, 4S levels.
Program summary
Title of program: BEACON
Catalogue identifier: ADSX
Program Summary URL: http://cpc.cs.qub.ac.uk/cpc/summaries/ADSX
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Operating systems under which the program has been tested: Win; Unix
Programming language used: FORTRAN 77
Memory required to execute with typical data: 1 Mw
Number of bits in a word: 32
Number of processors used: 1 (a parallel version of this code is also available and can be obtained on request)
Number of lines in distributed program, including test data, etc.: 29 287
Number of bytes in distributed program, including test data, etc.: 830 331
Distribution format: tar.gz
CPC Program Library subprograms used: none
Nature of physical problem: Resolution of the Bloch equations in the case of the two-step laser excitation of sodium atoms.
Method of solution: The program BEACON calculates the evolution of level populations versus time using the density-matrix formalism. The number of photons emitted from the 3P, 4D and 4P levels is calculated using the branching ratios and the level lifetimes.
Restriction on the complexity of the problem: Since the backscatter emission is calculated after the excitation process, excitation with laser pulse durations longer than the 4D level lifetime cannot be rigorously treated. In particular, cw laser excitation cannot be calculated with this code.
Typical running time: 12 h
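For orientation, a deliberately simplified rate-equation sketch of the two-step excitation (3S→3P→4D); BEACON itself solves the full optical Bloch equations in the density-matrix formalism, which capture laser coherences that rate equations ignore, and it tracks the 4P and 4S cascade levels omitted here. All rates below are assumed illustrative values, not sodium's actual constants.

```python
import numpy as np
from scipy.integrate import solve_ivp

W1, W2 = 5e7, 3e7          # pump rates of the two laser steps (1/s), assumed
G_3P, G_4D = 6.3e7, 2.0e7  # decay rates of 3P and 4D (1/s), assumed

def rhs(t, n):
    # Populations of 3S, 3P, 4D; stimulated emission taken at the pump rate,
    # and all decay routed back down the ladder (cascade levels ignored).
    n3s, n3p, n4d = n
    return [
        -W1 * n3s + (W1 + G_3P) * n3p,
        W1 * n3s - (W1 + G_3P + W2) * n3p + (W2 + G_4D) * n4d,
        W2 * n3p - (W2 + G_4D) * n4d,
    ]

sol = solve_ivp(rhs, [0, 1e-7], [1.0, 0.0, 0.0])
n3s, n3p, n4d = sol.y[:, -1]
print(f"after 100 ns: 3S={n3s:.3f}  3P={n3p:.3f}  4D={n4d:.3f}")
```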
Modelling to very high strains
NASA Astrophysics Data System (ADS)
Bons, P. D.; Jessell, M. W.; Griera, A.; Evans, L. A.; Wilson, C. J. L.
2009-04-01
Ductile strains in shear zones often reach extreme values, resulting in typical structures such as winged porphyroclasts and several types of shear bands. The numerical simulation of the development of such structures has so far been inhibited by the low maximum strains that numerical models can normally achieve: typical numerical models collapse at shear strains on the order of one to three. We have implemented a number of new functionalities in the numerical platform "Elle" (Jessell et al. 2001), which significantly increase the amount of strain that can be achieved and simultaneously reduce boundary effects that become increasingly disturbing at higher strain. Constant remeshing, while maintaining the polygonal phase regions, is the first step to avoid collapse of the finite-element grid required by finite-element solvers such as Basil (Houseman et al. 2008). The second step is to apply a grain-growth routine to the boundaries of polygons that represent phase regions. This way, the development of sharp angles is avoided. A second advantage is that phase regions may merge or become separated (boudinage); such topological changes are normally not possible in finite-element deformation codes. The third step is the use of wrapping vertical model boundaries, with which optimal and unchanging model boundaries are maintained for the application of stress or velocity boundary conditions. The fourth step is to shift the model by a random amount in the vertical direction every time step. This way, the fixed horizontal boundary conditions are applied to different material points within the model at every time step. Disturbing boundary effects are thus averaged out over the whole model and not localised to, e.g., the top and bottom of the model. Reduction of boundary effects has the additional advantage that the model can be smaller and, therefore, numerically more efficient. Owing to the combination of these existing and new functionalities, it is now possible to simulate the development of very high-strain structures. Jessell, M.W., Bons, P.D., Evans, L., Barr, T., Stüwe, K. 2001. Elle: a micro-process approach to the simulation of microstructures. Computers & Geosciences 27, 17-30. Houseman, G., Barr, T., Evans, L. 2008. Basil: stress and deformation in a viscous material. In: P.D. Bons, D. Koehn & M.W. Jessell (Eds.) Microdynamics Simulation. Lecture Notes in Earth Sciences 106, Springer, Berlin, 405p.
The cervical vertebral maturation method: A user's guide.
McNamara, James A; Franchi, Lorenzo
2018-03-01
The cervical vertebral maturation (CVM) method is used to determine the craniofacial skeletal maturational stage of an individual at a specific time point during the growth process. This diagnostic approach uses data derived from the second (C2), third (C3), and fourth (C4) cervical vertebrae, as visualized in a two-dimensional lateral cephalogram. Six maturational stages of those three cervical vertebrae can be determined, based on the morphology of their bodies. The first step is to evaluate the inferior border of these vertebral bodies, determining whether it is flat or concave (i.e., presence of a visible notch). The second step in the analysis is to evaluate the shape of C3 and C4. These vertebral bodies change in shape in a typical sequence, progressing from trapezoidal to rectangular horizontal, to square, and to rectangular vertical. Typically, cervical stages (CS) 1 and 2 are considered prepubertal, CS 3 and CS 4 circumpubertal, and CS 5 and CS 6 postpubertal. Criticism has been raised as to the reproducibility of the CVM method; diminished reliability may be observed at least in part due to the lack of a definitive description of the staging procedure in the literature. Based on now nearly 20 years of experience in staging cervical vertebrae, this article was prepared as a "user's guide" that describes the CVM stages in detail in an attempt to help the reader use this approach in everyday clinical practice.
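The staging logic described above can be summarized as a toy decision rule; the inputs (concavity flags and C3/C4 shape labels) are simplifications of what is in practice a visual judgment on a cephalogram, and the rule below encodes only the broad pattern given in the text.

```python
def cvm_stage(c2_concave, c3_concave, c4_concave, c3_shape, c4_shape):
    """Shapes: 'trapezoidal', 'rect_horizontal', 'square', 'rect_vertical'."""
    if not c2_concave:
        return "CS1"          # all inferior borders still flat
    if not c3_concave:
        return "CS2"          # concavity at C2 only
    if not c4_concave:
        return "CS3"          # concavities at C2 and C3
    # C2-C4 all concave: distinguish CS4-CS6 by the C3/C4 body shapes.
    shapes = {c3_shape, c4_shape}
    if "rect_horizontal" in shapes:
        return "CS4"
    if "square" in shapes:
        return "CS5"
    return "CS6"              # rectangular vertical bodies

print(cvm_stage(True, True, False, "rect_horizontal", "trapezoidal"))  # -> CS3
```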
Parallel processing of embossing dies with ultrafast lasers
NASA Astrophysics Data System (ADS)
Jarczynski, Manfred; Mitra, Thomas; Brüning, Stephan; Du, Keming; Jenke, Gerald
2018-02-01
Functionalization of surfaces equips products and components with new features like hydrophilic behavior, adjustable gloss level, light management properties, etc. Small feature sizes demand diffraction-limited spots and fluence adapted to different materials. With the availability of fast-repeating high-power ultrashort pulsed lasers and efficient optical processing heads delivering diffraction-limited spot sizes of around 10 μm, it is feasible to achieve fluences higher than adequate patterning requires. Hence, parallel processing is becoming of interest to increase throughput and allow mass production of micro-machined surfaces. The first step on the roadmap of parallel processing for cylinder embossing dies was realized with an eight-spot processing head based on a ns fiber laser, with passive optical beam splitting, individual spot switching by acousto-optic modulation, and advanced imaging. Patterning of cylindrical embossing dies shows a high efficiency of nearly 80%, with diffraction-limited and equally spaced spots at pitches down to 25 μm achieved by compression using cascaded prism arrays. Due to the nanosecond laser pulses, the ablation shows the typical surrounding material deposition of a hot process. In the next step, the processing head was adapted to a picosecond laser source, and the 500 W fiber laser was replaced by an ultrashort pulsed laser with 300 W, 12 ps and a repetition frequency of up to 6 MHz. This paper presents details of the processing head design and the analysis of ablation rates and patterns on steel, copper and brass dies. Furthermore, it gives an outlook on scaling the parallel processing head from eight to 16 individually switched beamlets to increase processing throughput and optimize utilization of the available ultrashort pulsed laser energy.
An update on coating/manufacturing techniques of microneedles.
Tarbox, Tamara N; Watts, Alan B; Cui, Zhengrong; Williams, Robert O
2017-12-29
Recently, results have been published for the first successful phase I human clinical trial investigating the use of dissolving polymeric microneedles… Even so, further clinical development represents an important hurdle that remains in the translation of microneedle technology to approved products. Specifically, the potential for accumulation of polymer within the skin upon repeated application of dissolving and coated microneedles, combined with a lack of safety data in humans, creates a need for further clinical investigation. Polymers are an important consideration for microneedle technology, from both manufacturing and drug delivery perspectives. The use of polymers enables a tunable delivery strategy, but the scalability of conventional manufacturing techniques could arguably benefit from further optimization. Micromolding has been suggested in the literature as a commercially viable means of mass production of both dissolving and swellable microneedles. However, the reliance on master molds, which are commonly manufactured using resource-intensive microelectronics-industry-derived processes, imparts notable material and design limitations. Further, the inherently multi-step filling and handling processes associated with micromolding are typically batch processes, which can be challenging to scale up. Similarly, conventional microneedle coating processes often follow step-wise batch processing. Recent developments in microneedle coating and manufacturing techniques are highlighted, including micromilling, atomized spraying, inkjet printing, drawing lithography, droplet-born air blowing, electro-drawing, continuous liquid interface production, 3D printing, and polyelectrolyte multilayer coating. This review provides an analysis of papers reporting on potentially scalable production techniques for the coating and manufacturing of microneedles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaSalvia, Vincenzo; Jensen, Mallory Ann; Youssef, Amanda
2016-11-21
We investigate a high temperature, high cooling-rate anneal, Tabula Rasa (TR), and report its implications for n-type Czochralski-grown silicon (n-Cz Si) in photovoltaic fabrication. Tabula Rasa aims at dissolving and homogenizing oxygen precipitate nuclei that can grow during the cell process steps and degrade cell performance due to their high internal gettering and recombination activity. The Tabula Rasa thermal treatment is performed in a clean tube furnace with cooling rates >100 °C/s. We characterize the bulk lifetime by Sinton lifetime and photoluminescence mapping just after Tabula Rasa, and after the subsequent cell processing. After TR, the bulk lifetime surprisingly degrades to <0.1 ms, only to recover to values equal to or higher than those of the initial non-treated wafer (several ms) after typical high temperature cell process steps. Those include boron diffusion and oxidation; phosphorus diffusion/oxidation; ambient annealing at 850 °C; and crystallization annealing of tunneling-passivating contacts (doped polycrystalline silicon on 1.5 nm thermal oxide). The drastic lifetime improvement during high temperature cell processing is attributed to improved external gettering of metal impurities and annealing of intrinsic point defects. Time- and injection-dependent lifetime spectroscopy further reveals the mechanisms of lifetime improvement after Tabula Rasa treatment. Additionally, we report the efficacy of Tabula Rasa on n-type Cz-Si wafers and its dependence on oxygen concentration, correlated to position within the ingot.
The wiper model: avalanche dynamics in an exclusion process
NASA Astrophysics Data System (ADS)
Politi, Antonio; Romano, M. Carmen
2013-10-01
The exclusion-process model (Ciandrini et al 2010 Phys. Rev. E 81 051904) describing traffic of particles with internal stepping dynamics reveals the presence of strong correlations in realistic regimes. Here we study such a model in the limit of an infinitely fast translocation time, where the evolution can be interpreted as a ‘wiper’ that moves to dry neighbouring sites. We trace back the existence of long-range correlations to the existence of avalanches, where many sites are dried at once. At variance with self-organized criticality, in the wiper model avalanches have a typical size equal to the logarithm of the lattice size. In the thermodynamic limit, we find that the hydrodynamic behaviour is a mixture of stochastic (diffusive) fluctuations and increasingly coherent periodic oscillations that are reminiscent of a collective dynamics.
Rarefied-flow pitching moment coefficient measurements of the Shuttle Orbiter
NASA Technical Reports Server (NTRS)
Blanchard, R. C.; Hinson, E. W.
1988-01-01
An overview of the process for obtaining the Shuttle Orbiter rarefied-flow pitching moment from flight gyro data is presented. The extraction technique involves differentiation of the output of the pitch gyro after accounting for nonaerodynamic torques, such as those produced by gravity gradient and the Orbiter's auxiliary power unit and adjusting for drift biases. The overview of the extraction technique includes examples of results from each of the steps involved in the process, using the STS-32 mission as a typical sample case. The total pitching moment and moment coefficient (Cm) for that flight are calculated and compared with preflight predictions. The flight results show the anticipated decrease in Cm with increasing altitude. However, the total moment coefficient is less than predicted using preflight estimates.
Coherent diffractive imaging of time-evolving samples with improved temporal resolution
Ulvestad, A.; Tripathi, A.; Hruszkewycz, S. O.; ...
2016-05-19
Bragg coherent x-ray diffractive imaging is a powerful technique for investigating dynamic nanoscale processes in nanoparticles immersed in reactive, realistic environments. Its temporal resolution is limited, however, by the oversampling requirements of three-dimensional phase retrieval. Here, we show that incorporating the entire measurement time series, which is typically a continuous physical process, into phase retrieval allows the oversampling requirement at each time step to be reduced, leading to an improvement in the temporal resolution by a factor of 2-20. The increased time resolution will allow imaging of faster dynamics and of radiation-dose-sensitive samples. Furthermore, this approach, which we call "chrono CDI," may find use in improving the time resolution in other imaging techniques.
Multi-step EMG Classification Algorithm for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Ren, Peng; Barreto, Armando; Adjouadi, Malek
A three-electrode human-computer interaction system, based on digital processing of the electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles of the head, respectively. The signal processing algorithm translates the EMG signals recorded during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; and the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm was evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than previous approaches.
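A bare-bones sketch of the energy-comparison principle (the first of the three listed); the window length, channel ordering, gain threshold, and movement mapping are invented for illustration, and the spectral and cross-channel correlation checks of the full algorithm are omitted.

```python
import numpy as np

def channel_energies(window):
    """window: (n_samples, 3) EMG from frontalis, left and right temporalis."""
    return (window ** 2).sum(axis=0)

def classify(window, rest_energy, gain=3.0):
    e = channel_energies(window)
    active = e > gain * rest_energy          # channels well above resting level
    frontal, l_temp, r_temp = active
    if l_temp and r_temp:
        return "left_click"                  # simultaneous L&R jaw clench
    if l_temp:
        return "cursor_left"                 # left jaw clench
    if r_temp:
        return "cursor_right"                # right jaw clench
    if frontal:
        return "cursor_up_or_down"           # needs the spectral step to split
    return "no_action"

rng = np.random.default_rng(5)
rest = channel_energies(rng.normal(0, 1, (250, 3)))
burst = rng.normal(0, 1, (250, 3))
burst[:, 1] *= 4                             # strong left-temporalis activity
print(classify(burst, rest))                 # -> cursor_left
```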
Electrical property of macroscopic graphene composite fibers prepared by chemical vapor deposition
NASA Astrophysics Data System (ADS)
Sun, Haibin; Fu, Can; Gao, Yanli; Guo, Pengfei; Wang, Chunlei; Yang, Wenchao; Wang, Qishang; Zhang, Chongwu; Wang, Junya; Xu, Junqi
2018-07-01
Graphene fibers are promising candidates for portable and wearable electronics due to their tiny volume, flexibility and wearability. Here, we successfully synthesized macroscopic graphene composite fibers via a two-step process, i.e. first electrospinning and then chemical vapor deposition (CVD). Briefly, well-dispersed PAN nanofibers were sprayed onto a copper surface as an electrified thin liquid jet by electrospinning. Subsequently, the CVD growth process induced the formation of graphene films using the PAN solid carbon source and the copper catalyst. Finally, crumpled, macroscopic graphene composite fibers were obtained from the carbon nanofiber/graphene composite webs by a self-assembly process in deionized water. The temperature-dependent conduction behavior reveals that electron transport in the graphene composite fibers follows a hopping mechanism, and the typical electrical conductivity reaches 4.59 × 10³ S m⁻¹. These results demonstrate that graphene composite fibers are promising for next-generation flexible and wearable electronics.
Mathematical Modeling of Decarburization in Levitated Fe-Cr-C Droplets
NASA Astrophysics Data System (ADS)
Gao, Lei; Shi, Zhe; Yang, Yindong; Li, Donghui; Zhang, Guifang; McLean, Alexander; Chattopadhyay, Kinnor
2018-04-01
Using carbon dioxide to replace oxygen as an alternative oxidant gas has proven to be a viable solution in the decarburization process, with potential for industrial applications. In a recent study, the transport phenomena governing the carbon dioxide decarburization process were examined through the use of electromagnetic levitation (EML). CO2/CO mass transfer was found to be the principal rate-controlling step of the reaction; as a result, gas diffusion has gained significant attention. In the present study, gas diffusion during the decarburization process was investigated using computational fluid dynamics (CFD) modeling coupled with chemical reactions. The resulting model was verified against published experimental data and employed to provide insights into phenomena typically unobservable through experiments. Based on the results, a new correction of the Frössling equation was presented which better represents the mass transfer phenomena at the metal-gas interface within the range of this research.
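For reference, the classic Frössling (Ranz-Marshall-type) correlation for mass transfer around a sphere has the form below; the corrected coefficients reported by the study are not given in the abstract.

```latex
% Classic Frössling form: Sherwood number vs. Reynolds and Schmidt numbers,
% with k_c the mass-transfer coefficient, d the droplet diameter, D the
% gas-phase diffusivity.
Sh = \frac{k_c \, d}{D} = 2 + 0.552 \, Re^{1/2} Sc^{1/3}
```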
Confronting Therapeutic Failure: A Conversation Guide
2015-01-01
We reflect on the impact of bad news on both clinician and patient in the setting of cancer treatment failure. We review the classic six-step SPIKES (setting, perception, invitation for information, knowledge, empathy, summarize and strategize) protocol for giving bad news that has been widely adopted since it was first published in this journal in 2005. The goal of such a conversation guide is to describe both the process and the tasks that constitute vital steps for clinicians and to comment on the emotional impact of the conversation on the clinician. Confronting therapeutic failure is the hardest task for oncologists. We offer practical tips derived from a thorough review of the evidence and our clinical experience. Implications for Practice: Discussing the failure of anticancer therapy remains a very difficult conversation for oncologists and their patients. In this article, the process of confronting this failure is broken down into various components, and practical tips are provided for clinicians following a classic protocol for breaking bad news. Also addressed are the emotions of the oncologist and the reasons why these conversations are typically so hard. These insights are based on solid research intended to deepen the therapeutic connection between physician and patient. PMID:26099747
SVM Pixel Classification on Colour Image Segmentation
NASA Astrophysics Data System (ADS)
Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.
2018-04-01
The aim of image segmentation is to simplify the representation of an image, with the help of pixel clusters, into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, and more precisely to label every pixel so that each pixel has an independent identity. SVM pixel classification for colour image segmentation is the topic highlighted in this paper. It has useful applications in concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. First, the type of colour and texture used as input to the SVM classifier must be recognized. These inputs are extracted via a local spatial similarity measure model and a steerable (Gabor) filter. The classifier is then trained using FCM (Fuzzy C-Means). The pixel-level information of the image and the output of the SVM classifier are combined by the algorithm to form the final segmented image. The method yields a well-developed segmented image, with higher quality and faster processing than segmentation methods proposed earlier. One recent application is the Light L16 camera.
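An illustrative scikit-image/scikit-learn sketch of this kind of pipeline (Gabor texture features feeding a pixel-wise SVM); the thresholded seed labels stand in for the FCM stage, and all parameters are guesses rather than the paper's values:

```python
import numpy as np
from skimage import data, filters
from sklearn.svm import SVC

img = data.astronaut()[..., 0] / 255.0              # one colour channel as a demo
real, imag = filters.gabor(img, frequency=0.2)      # Gabor texture responses
feats = np.stack([img, real, imag], axis=-1).reshape(-1, 3)

# Hypothetical seed labels (in the paper this role is played by FCM clustering).
labels = (img > img.mean()).astype(int).ravel()
idx = np.random.default_rng(0).choice(len(feats), 2000, replace=False)

clf = SVC(kernel="rbf").fit(feats[idx], labels[idx])
segmented = clf.predict(feats).reshape(img.shape)   # per-pixel class map
```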
Maire, Murielle; Rega, Barbara; Cuvelier, Marie-Elisabeth; Soto, Paola; Giampaoli, Pierre
2013-12-15
This paper investigates the effect of ingredients on the reactions occurring during the making of sponge cake and leading to the generation of volatile compounds related to flavour quality. To obtain systems sensitive to lipid oxidation (LO), a formulation design was applied varying the composition of fatty matter and eggs. Oxidation of polyunsaturated fatty acids (PUFA) and formation of related volatile compounds were followed at the different steps of cake-making. Optimised dynamic Solid Phase Micro Extraction was applied to selectively extract either volatile or semi-volatile compounds directly from the baking vapours. We show for the first time that in the case of alveolar baked products, lipid oxidation occurs very early during the step of dough preparation and to a minor extent during the baking process. The generation of lipid oxidation compounds depends on PUFA content and on the presence of endogenous antioxidants in the raw matter. Egg yolk seemed to play a double role on reactivity: protecting unsaturated lipids from oxidation and being necessary to generate a broad class of compounds of the Maillard reaction during baking and linked to the typical flavour of sponge cake. Copyright © 2013 Elsevier Ltd. All rights reserved.
Independent validation of Swarm Level 2 magnetic field products and `Quick Look' for Level 1b data
NASA Astrophysics Data System (ADS)
Beggan, Ciarán D.; Macmillan, Susan; Hamilton, Brian; Thomson, Alan W. P.
2013-11-01
Magnetic field models are produced on behalf of the European Space Agency (ESA) by an independent scientific consortium known as the Swarm Satellite Constellation Application and Research Facility (SCARF), through the Level 2 Processor (L2PS). The consortium primarily produces magnetic field models for the core, lithosphere, ionosphere and magnetosphere. Typically, for each magnetic product, two magnetic field models are produced in separate chains using complementary data selection and processing techniques. Hence, the magnetic field models from the complementary processing chains will be similar but not identical. The final step in the overall L2PS therefore involves inspection and validation of the magnetic field models against each other and against data from (semi-) independent sources (e.g. ground observatories). We describe the validation steps for each magnetic field product and the comparison against independent datasets, and we show examples of the output of the validation. In addition, the L2PS also produces a daily set of `Quick Look' output graphics and statistics to monitor the overall quality of Level 1b data issued by ESA. We describe the outputs of the `Quick Look' chain.
Measured close lightning leader-step electric-field-derivative waveforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Doug M.; Hill, Dustin; Biagi, Christopher J.
2010-12-01
We characterize the measured electric field-derivative (dE/dt) waveforms of lightning stepped-leader steps from three negative lightning flashes at distances of tens to hundreds of meters. Electromagnetic signatures of leader steps at such close distances have rarely been documented in previous literature. Individual leader-step three-dimensional locations are determined by a dE/dt time-of-arrival (TOA) system. The leader-step field derivative is typically a bipolar pulse with a sharp initial half-cycle of the same polarity as that of the return stroke, followed by an opposite-polarity overshoot that decays relatively slowly to background level. This overshoot increases in amplitude relative to the initial peak and becomes dominant as range decreases. The initial peak is often preceded by a 'slow front,' similar to the slow front that precedes the fast transition to peak in first return stroke dE/dt and E waveforms. The overall step-field waveform duration is typically less than 1 µs. The mean initial peak of dE/dt, range-normalized to 100 km, is 7.4 V m⁻¹ µs⁻¹ (standard deviation (S.D.), 3.7 V m⁻¹ µs⁻¹, N = 103), the mean half-peak width is 33.5 ns (S.D., 11.9 ns, N = 69), and the mean 10-to-90% risetime is 43.6 ns (S.D., 24.2 ns, N = 69). From modeling, we determine the properties of the leader-step currents which produced two typical measured field derivatives, and we use one of these currents to calculate predicted leader-step E and dE/dt as a function of source range and height, the results being in good agreement with our observations. The two modeled current waveforms had maximum rates of current rise-to-peak near 100 kA µs⁻¹, peak currents in the 5-7 kA range, current half-peak widths of about 300 ns, and charge transfers of ~3 mC. As part of the modeling, those currents were propagated upward at 1.5 × 10⁸ m s⁻¹, with their amplitudes decaying exponentially with a decay height constant of 25 m.
Semantic Service Matchmaking in the ATM Domain Considering Infrastructure Capability Constraints
NASA Astrophysics Data System (ADS)
Moser, Thomas; Mordinyi, Richard; Sunindyo, Wikan Danar; Biffl, Stefan
In a service-oriented environment business processes flexibly build on software services provided by systems in a network. A key design challenge is the semantic matchmaking of business processes and software services in two steps: 1. Find for one business process the software services that meet or exceed the BP requirements; 2. Find for all business processes the software services that can be implemented within the capability constraints of the underlying network, which poses a major problem since even for small scenarios the solution space is typically very large. In this chapter we analyze requirements from mission-critical business processes in the Air Traffic Management (ATM) domain and introduce an approach for semi-automatic semantic matchmaking for software services, the “System-Wide Information Sharing” (SWIS) business process integration framework. A tool-supported semantic matchmaking process like SWIS can provide system designers and integrators with a set of promising software service candidates and therefore strongly reduces the human matching effort by focusing on a much smaller space of matchmaking candidates. We evaluate the feasibility of the SWIS approach in an industry use case from the ATM domain.
Early process development of API applied to poorly water-soluble TBID.
Meise, Marius; Niggemann, Matthias; Dunens, Alexandra; Schoenitz, Martin; Kuschnerow, Jan C; Kunick, Conrad; Scholl, Stephan
2018-05-01
Finding and optimising synthesis processes for active pharmaceutical ingredients (API) is time consuming. In the finding phase, established methods for synthesis, purification and formulation are used to achieve a high-purity API for biological studies. For promising API candidates, this is followed by pre-clinical and clinical studies requiring sufficient quantities of the active component. Ideally, these should be produced with a process representative of a later production process and suitable for scaling to production capacity. This work presents an overview of different approaches for process synthesis based on an existing lab protocol. This is demonstrated for the production of the model drug 4,5,6,7-tetrabromo-2-(1H-imidazol-2-yl)isoindolin-1,3-dione (TBID). Early batch synthesis and purification procedures typically suffer from low and fluctuating yields and purities due to poor process control. In a first step, the literature synthesis and purification procedure was modified and optimized using solubility measurements, targeting easier and safer processing for consecutive studies. Copyright © 2018 Elsevier B.V. All rights reserved.
Pacholewicz, Ewa; Liakopoulos, Apostolos; Swart, Arno; Gortemaker, Betty; Dierikx, Cindy; Havelaar, Arie; Schmitt, Heike
2015-12-23
Whilst broilers are recognised as a reservoir of extended-spectrum-β-lactamase (ESBL)- and AmpC-β-lactamase (AmpC)-producing Escherichia coli, there is currently limited knowledge on the effect of slaughtering on its concentrations on poultry meat. The aim of this study was to establish the concentration of ESBL/AmpC-producing E. coli on broiler chicken carcasses through processing. In addition, the changes in ESBL/AmpC-producing E. coli concentrations were compared with generic E. coli and Campylobacter. In two slaughterhouses, the surface of the whole carcasses was sampled after 5 processing steps: bleeding, scalding, defeathering, evisceration and chilling. In total, 17 batches were sampled in two different slaughterhouses during the summers of 2012 and 2013. ESBL/AmpC-producing E. coli was enumerated on MacConkey agar with 1 mg/l cefotaxime, and the ESBL/AmpC phenotypes and genotypes were characterised. The ESBL/AmpC-producing E. coli concentrations varied significantly between the incoming batches in both slaughterhouses. The concentrations on broiler chicken carcasses were significantly reduced during processing. In Slaughterhouse 1, all subsequent processing steps reduced the concentrations except evisceration, which led to a slight increase that was statistically not significant. The changes in concentration between processing steps were relatively similar for all sampled batches in this slaughterhouse. In contrast, changes varied between batches in Slaughterhouse 2, and the overall reduction through processing was higher in Slaughterhouse 2. Changes in ESBL/AmpC-producing E. coli along the processing line were similar to changes in generic E. coli in both slaughterhouses. The effect of defeathering differed between ESBL/AmpC-producing E. coli and Campylobacter: ESBL/AmpC-producing E. coli decreased after defeathering, whereas Campylobacter concentrations increased. The genotypes of ESBL/AmpC-producing E. coli (blaCTX-M-1, blaSHV-12, blaCMY-2, blaTEM-52c, blaTEM-52cvar) from both slaughterhouses match typical poultry genotypes. Their distribution differed between batches and changed throughout processing for some batches. The concentration levels found after chilling were between 10² and 10⁵ CFU/carcass. To conclude, changes in ESBL/AmpC-producing E. coli concentrations on broiler chicken carcasses during processing are influenced by batch and slaughterhouse, pointing to the role of both primary production and process control for reducing ESBL/AmpC-producing E. coli levels in final products. Due to similar changes upon processing, E. coli can be used as a process indicator for ESBL/AmpC-producing E. coli, because the processing steps had similar impact on both organisms. Cross-contamination may potentially explain shifts in genotypes within some batches through the processing. Copyright © 2015 Elsevier B.V. All rights reserved.
Materials experiment carrier concepts definition study. Volume 1: Executive summary, part 2
NASA Technical Reports Server (NTRS)
1981-01-01
The materials experiment carrier (MEC) is an optimized carrier for near-term and advanced materials processing in space (MPS) research and commercial payloads. When coupled with the space platform (SP), the MEC can provide the extended-duration, high-power and low-acceleration environment that MPS payloads typically require. The lowest-cost, technically reasonable first-step MEC that meets the MPS program mission objectives with minimum programmatic risk is defined. The effectiveness of the initial MEC/space platform concept for accommodating high-priority, multidiscipline, R&D and commercial MPS payloads, and for conducting MPS payload operations at affordable funding and acceptable productivity levels, is demonstrated.
Pebble pile-up and planetesimal formation at the snow line
NASA Astrophysics Data System (ADS)
Drazkowska, J.
2017-09-01
The planetesimal formation stage represents a major gap in our understanding of the planet formation process. Because of this, late-stage planet accretion models typically make arbitrary assumptions about the distribution of planetesimals and pebbles, while state-of-the-art dust evolution models predict little or no planetesimal formation. With this contribution, I present a step toward bridging the gap between the early and late stages of planet formation with models that connect dust coagulation and planetesimal formation. With the aid of evaporation, outward diffusion, and re-condensation of water vapor, a pile-up of large pebbles forms outside of the snow line that facilitates planetesimal formation by the streaming instability.
Growth of single-layer boron nitride dome-shaped nanostructures catalysed by iron clusters.
Torre, A La; Åhlgren, E H; Fay, M W; Ben Romdhane, F; Skowron, S T; Parmenter, C; Davies, A J; Jouhannaud, J; Pourroy, G; Khlobystov, A N; Brown, P D; Besley, E; Banhart, F
2016-08-11
We report on the growth and formation of single-layer boron nitride dome-shaped nanostructures mediated by small iron clusters located on flakes of hexagonal boron nitride. The nanostructures were synthesized in situ at high temperature inside a transmission electron microscope while the e-beam was blanked. The formation process, typically originating at defective step-edges on the boron nitride support, was investigated using a combination of transmission electron microscopy, electron energy loss spectroscopy and computational modelling. Computational modelling showed that the domes exhibit a nanotube-like structure with flat circular caps and that their stability was comparable to that of a single boron nitride layer.
Coulomb explosion: a novel approach to separate single-walled carbon nanotubes from their bundle.
Liu, Guangtong; Zhao, Yuanchun; Zheng, Kaihong; Liu, Zheng; Ma, Wenjun; Ren, Yan; Xie, Sishen; Sun, Lianfeng
2009-01-01
A novel approach based on Coulomb explosion has been developed to separate single-walled carbon nanotubes (SWNTs) from their bundle. With this technique, we can readily separate a bundle of SWNTs into smaller bundles with uniform diameter as well as some individual SWNTs. The separated SWNTs have a typical length of several microns and form a nanotree at one end of the original bundle. More importantly, this separation procedure involves no surfactant and only a one-step physical process. The method offers great convenience for subsequent fabrication of individual-SWNT or multiterminal SWNT devices and for studies of their physical properties.
Analyzing angular distributions for two-step dissociation mechanisms in velocity map imaging.
Straus, Daniel B; Butler, Lynne M; Alligood, Bridget W; Butler, Laurie J
2013-08-15
Increasingly, velocity map imaging is becoming the method of choice to study photoinduced molecular dissociation processes. This paper introduces an algorithm to analyze the measured net speed, P(vnet), and angular, β(vnet), distributions of the products from a two-step dissociation mechanism, where the first step but not the second is induced by absorption of linearly polarized laser light. Typically, this might be the photodissociation of a C-X bond (X = halogen or other atom) to produce an atom and a momentum-matched radical that has enough internal energy to subsequently dissociate (without the absorption of an additional photon). It is this second step, the dissociation of the unstable radicals, that one wishes to study, but the measured net velocity of the final products is the vector sum of the velocity imparted to the radical in the primary photodissociation (which is determined by taking data on the momentum-matched atomic cophotofragment) and the additional velocity vector imparted in the subsequent dissociation of the unstable radical. The algorithm allows one to determine, from the forward-convolution fitting of the net velocity distribution, the distribution of velocity vectors imparted in the second step of the mechanism. One can thus deduce the secondary velocity distribution, characterized by a speed distribution P(v2) and an angular distribution I(θ2), where θ2 is the angle between the dissociating radical's velocity vector and the additional velocity vector imparted to the detected product in the subsequent dissociation of the radical.
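A minimal Monte Carlo forward-convolution sketch for the net speed distribution: sample a trial secondary speed and angle, combine the vectors by the law of cosines, and histogram. All distributions here are placeholders to be iterated against the data, not the paper's fitted forms:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

v1 = rng.normal(2000.0, 100.0, N)      # m/s, primary radical speed (taken from
                                       # the momentum-matched atom data; assumed)
v2 = rng.normal(800.0, 200.0, N)       # m/s, trial secondary speed P(v2)
cos_t2 = rng.uniform(-1.0, 1.0, N)     # trial I(theta2): isotropic first guess

# theta2 is measured from the radical's velocity vector, so the net speed
# follows directly from the law of cosines for the vector sum.
v_net = np.sqrt(v1**2 + v2**2 + 2.0 * v1 * v2 * cos_t2)

hist, edges = np.histogram(v_net, bins=150, density=True)
# Compare `hist` with the measured P(v_net) and iterate on P(v2) and I(theta2);
# the angular distribution beta(v_net) is fit from the same sampled ensemble.
```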
A computational kinetic model of diffusion for molecular systems.
Teo, Ivan; Schulten, Klaus
2013-09-28
Regulation of biomolecular transport in cells involves intra-protein steps like gating and passage through channels, but these steps are preceded by extra-protein steps, namely, diffusive approach and admittance of solutes. The extra-protein steps develop over a 10-100 nm length scale, typically in a highly particular environment characterized by the protein's geometry, surrounding electrostatic field, and location. In order to account for solute energetics and mobility in this environment at a relevant resolution, we propose a particle-based kinetic model of diffusion built on a Markov State Model framework. Prerequisite input data consist of diffusion coefficient and potential-of-mean-force maps generated from extensive molecular dynamics simulations of proteins and their environment that sample multi-nanosecond durations. The suggested diffusion model can describe transport processes beyond microsecond duration, relevant for biological function and beyond the realm of molecular dynamics simulation. For this purpose the systems are represented by a discrete set of states specified by the positions, volumes, and surface elements of Voronoi grid cells distributed according to a density function resolving the often intricate relevant diffusion space. Validation tests carried out for generic diffusion spaces show that the model and the associated Brownian motion algorithm are viable over a large range of parameter values such as time step, diffusion coefficient, and grid density. A concrete application of the method is demonstrated for ion diffusion around and through the Escherichia coli mechanosensitive channel of small conductance, ecMscS.
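A one-dimensional toy version of such a kinetic scheme: nearest-cell hop rates are built from a diffusion-coefficient map and a potential of mean force with a detailed-balance weighting. The maps, constants and rate form are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx, dt = 100, 1.0e-9, 1.0e-9            # cells, cell size (m), time step (s)
D = 1.0e-10 * np.ones(n)                   # m^2/s, assumed diffusion map
pmf = 2.0e-21 * np.sin(np.linspace(0, 6 * np.pi, n))  # J, assumed PMF
kT = 4.1e-21                               # J, ~300 K

def hop_rate(i, j):
    # Detailed-balance-weighted hop rate between neighbouring cells i -> j.
    return 0.5 * (D[i] + D[j]) / dx**2 * np.exp(-(pmf[j] - pmf[i]) / (2 * kT))

x, traj = n // 2, []
for _ in range(200_000):
    p_right = hop_rate(x, x + 1) * dt if x < n - 1 else 0.0
    p_left = hop_rate(x, x - 1) * dt if x > 0 else 0.0  # both << 1 by choice of dt
    u = rng.random()
    x += (u < p_right) - (p_right <= u < p_right + p_left)
    traj.append(x)
```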
NASA Astrophysics Data System (ADS)
Ming, Bin
Josephson junctions are at the heart of any superconductor device application. A SQUID (Superconducting Quantum Interference Device), which consists of two Josephson junctions, is by far the most important example. Unfortunately, in the case of high-Tc superconductors (HTS), the quest for a robust, flexible, and high-performance junction technology is still far from over. Currently, the only proven method to make HTS junctions is the SrTiO3 (STO)-based bicrystal technology. In this thesis we concentrate on the fabrication of YBCO step-edge junctions and SQUIDs on sapphire. The step-edge method provides complete control of device locations and facilitates sophisticated, high-density layout. We select CeO2 as the buffer layer, as the key step to make device-quality YBCO thin films on sapphire. With an "overhang" shadow mask produced by a novel photolithography technique, a steep step edge was fabricated on the CeO2 buffer layer by Ar+ ion milling with optimized parameters for minimum ion beam divergence. The step angle was determined to be in excess of 80° by atomic force microscopy (AFM). Josephson junctions patterned from those step edges exhibited resistively shunted junction (RSJ)-like current-voltage characteristics. IcRn values in the 200-500 mV range were measured at 77 K. Shapiro steps were observed under microwave irradiation, reflecting the true Josephson nature of those junctions. The magnetic field dependence of the junction Ic indicates a uniform current distribution. These results suggest that all fabrication processes are well controlled and the step edge is relatively straight and free of microstructural defects. The SQUIDs made by the same process exhibit large voltage modulation in a varying magnetic field. At 77 K, our sapphire-based step-edge SQUID has a low white noise level of 3 µΦ₀/√Hz, compared with typically >10 µΦ₀/√Hz from the best bicrystal STO SQUIDs. Our effort at device fabrication is chiefly motivated by the scanning SQUID microscopy (SSM) application. A scanning SQUID microscope is a non-contact, non-destructive imaging tool that can resolve weak currents beneath the sample surface by detecting their magnetic fields. Our low-noise sapphire-based step-edge SQUIDs should be particularly suitable for such an application. An earlier effort to make SNS trench junctions using focused ion beam (FIB) is reviewed in a separate chapter. (Abstract shortened by UMI.)
A simplified approach to construct infectious cDNA clones of a tobamovirus in a binary vector.
Junqueira, Bruna Rayane Teodoro; Nicolini, Cícero; Lucinda, Natalia; Orílio, Anelise Franco; Nagata, Tatsuya
2014-03-01
Infectious cDNA clones of RNA viruses are important tools to study molecular processes such as replication and host-virus interactions. However, the cloning steps necessary for construction of cDNAs of viral RNA genomes in binary vectors are generally laborious. In this study, a simplified method of producing an agro-infectious Pepper mild mottle virus (PMMoV) clone is described in detail. Initially, the complete genome of PMMoV was amplified by a single-step RT-PCR, cloned, and subcloned into a small plasmid vector under the T7 RNA polymerase promoter to confirm the infectivity of the cDNA clone through transcript inoculation. The complete genome was then transferred to a binary vector using a single-step, overlap-extension PCR. The selected clones were agro-infiltrated into Nicotiana benthamiana plants and shown to be infectious, causing typical PMMoV symptoms. No differences in host responses were observed when the wild-type PMMoV isolate, the T7 RNA polymerase-derived transcripts and the agroinfiltration-derived viruses were inoculated into N. benthamiana, Capsicum chinense PI 159236 and Capsicum annuum plants. Copyright © 2013 Elsevier B.V. All rights reserved.
GPS Attitude Determination Using Deployable-Mounted Antennas
NASA Technical Reports Server (NTRS)
Osborne, Michael L.; Tolson, Robert H.
1996-01-01
The primary objective of this investigation is to develop a method to solve for spacecraft attitude in the presence of potential incomplete antenna deployment. Most research on the use of the Global Positioning System (GPS) in attitude determination has assumed that the antenna baselines are known to less than 5 centimeters, or one quarter of the GPS signal wavelength. However, if the GPS antennas are mounted on a deployable fixture such as a solar panel, the actual antenna positions will not necessarily be within 5 cm of nominal. Incomplete antenna deployment could cause the baselines to be grossly in error, perhaps by as much as a meter. Overcoming this large uncertainty in order to accurately determine attitude is the focus of this study. To this end, a two-step solution method is proposed. The first step uses a least-squares estimate of the baselines to geometrically calculate the deployment angle errors of the solar panels. For the spacecraft under investigation, the first step determines the baselines to 3-4 cm with 4-8 minutes of data. A Kalman filter is then used to complete the attitude determination process, resulting in typical attitude errors of 0.5°.
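A compact sketch of the two-step idea with synthetic numbers; the line-of-sight geometry, noise levels and variances are all assumptions, not the study's flight data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Step 1: least-squares baseline estimate. Each double-difference phase
# observation projects the unknown baseline b onto a line-of-sight vector s.
b_true = np.array([1.00, 0.02, -0.03])           # m, partially deployed baseline
S = rng.normal(size=(40, 3))
S /= np.linalg.norm(S, axis=1, keepdims=True)    # unit line-of-sight vectors
y = S @ b_true + rng.normal(0.0, 0.005, 40)      # 5 mm phase noise (assumed)
b_hat, *_ = np.linalg.lstsq(S, y, rcond=None)    # baseline recovered to ~cm level

# Step 2: scalar Kalman filter refining one attitude angle using the fixed
# baseline; process noise is neglected for brevity.
theta, P, R = 0.0, 10.0**2, 0.5**2               # prior variance and meas. var. (deg^2)
for z in rng.normal(0.3, 0.5, 120):              # per-epoch angle measurements (deg)
    K = P / (P + R)                              # Kalman gain
    theta += K * (z - theta)
    P *= 1.0 - K
```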
Evolution of female-specific wingless forms in bagworm moths.
Niitsu, Shuhei; Sugawara, Hirotaka; Hayashi, Fumio
2017-01-01
The evolution of winglessness in insects has been typically interpreted as a consequence of developmental and other adaptations to various environments that are secondarily derived from a winged morph. Several species of bagworm moths (Insecta: Lepidoptera, Psychidae) exhibit a case-dwelling larval life style along with one of the most extreme cases of sexual dimorphism: wingless female adults. While the developmental process that led to these wingless females is well known, the origins and evolutionary transitions are not yet understood. To examine the evolutionary patterns of wing reduction in bagworm females, we reconstruct the molecular phylogeny of over 30 Asian species based on both mitochondrial (cytochrome c oxidase subunit I) and nuclear (28S rRNA) DNA sequences. Under a parsimonious assumption, the molecular phylogeny implies that: (i) the evolutionary wing reduction towards wingless females consisted of two steps: (Step I) from functional wings to vestigial wings (nonfunctional) and (Step II) from vestigial wings to the most specialized vermiform adults (lacking wings and legs); and (ii) vermiform morphs evolved independently at least twice. Based on the results of our study, we suggest that the evolutionary changes in the developmental system are essential for the establishment of different wingless forms in insects. © 2016 Wiley Periodicals, Inc.
Standard work for room entry: Linking lean, hand hygiene, and patient-centeredness.
O'Reilly, Kristin; Ruokis, Samantha; Russell, Kristin; Teves, Tim; DiLibero, Justin; Yassa, David; Berry, Hannah; Howell, Michael D
2016-03-01
Healthcare-associated infections are costly and fatal. Substantial front-line, administrative, regulatory, and research efforts have focused on improving hand hygiene. While broad agreement exists that hand hygiene is the most important single approach to infection prevention, compliance with hand hygiene is typically only about 40% (1). Our aim was to develop a standard process for room entry in the intensive care unit that improved compliance with hand hygiene and allowed for maximum efficiency. We recognized that hand hygiene is a single step in a substantially more complicated process of room entry. We applied Lean engineering techniques to develop a standard process that included both physical steps and standard communication elements from provider to patients and families, and created a physical environment to support this. We observed meaningful improvement in the performance of the new standard as well as time savings for clinical providers with each room entry. We also observed an increase in room entries that included verbal communication and an explanation of what the clinician was entering the room to do. The design and implementation of a standardized room entry process and the creation of an environment that supports that new process has resulted in measurable positive outcomes on the medical intensive care unit, including quality, patient experience, efficiency, and staff satisfaction. Designing a process, rather than viewing tasks that need to happen in close proximity in time (either serially or in parallel) as unrelated, simplifies work for staff and results in higher compliance to individual tasks. Copyright © 2015 Elsevier Inc. All rights reserved.
Ma, Chengying; Li, Junxing; Chen, Wei; Wang, Wenwen; Qi, Dandan; Pang, Shi; Miao, Aiqing
2018-06-01
Oolong tea is a typical semi-fermented tea, famous for its unique aroma. The aim of this study was to compare the volatile compounds at different stages of the manufacturing process to reveal how the aroma forms. In this paper, a method was developed based on head-space solid phase microextraction/gas chromatography-mass spectrometry (HS-SPME/GC-MS) combined with chemometrics to assess volatile profiles during the manufacturing process (fresh leaves, sun-withered leaves, rocked leaves and leaves after de-enzyming). A total of 24 aroma compounds showing significant differences during the manufacturing process were identified. Subsequently, based on these aroma compounds, principal component analysis and hierarchical cluster analysis showed that the four samples were clearly distinguished from each other, suggesting that the 24 identified volatile compounds can represent the changes in volatiles across the four steps. Additionally, sun-withering, rocking and de-enzyming influenced the variations of volatile compounds to different degrees; the changes in volatile compounds in the withering step were smaller than in the other two steps, indicating that the characteristic volatile compounds of oolong tea are likely formed mainly in the rocking stage by biological reactions and in the de-enzyming stage through thermal chemical transformations, rather than in the withering stage. This study suggests that HS-SPME/GC-MS combined with chemometric methods is accurate, sensitive, fast and well suited for rapid routine analysis of changes in aroma compounds in oolong tea during manufacturing. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Shahriar, Bakrani Balani; Arthur, Cantarel; France, Chabert; Valérie, Nassiet
2018-05-01
Extrusion is one of the oldest manufacturing processes; it is widely used for manufacturing finished and semi-finished products. Moreover, extrusion is also the main process in additive manufacturing technologies such as Fused Filament Fabrication (FFF). In the FFF process, parts are manufactured layer by layer from thermoplastic material. The thermoplastic, in the form of a filament, is melted in the liquefier and then extruded and deposited on the previous layer. The mechanical properties of the printed parts rely on the coalescence of each extrudate with its neighbours. The coalescence phenomenon is driven by the flow properties of the melted polymer as it exits the nozzle just before the deposition step. This study aims to master the quality of the printed parts by controlling the effect of the extruder parameters on the flow properties in the FFF process. In the current study, a numerical simulation of the polymer exiting the extruder was carried out using Computational Fluid Dynamics (CFD) and a two-phase-flow (TPF) Level Set (LS) method in the 2D axisymmetric module of the COMSOL Multiphysics software. In order to couple heat transfer with the flow simulation, an advection-diffusion equation was used, implemented in the software as a Partial Differential Equation (PDE). To define the variation of the polymer viscosity with temperature, the rheological behavior of two thermoplastics was measured with an extensional rheometer and with an oscillatory rheometer in parallel-plate configuration. The results highlight the influence of the environment temperature and the cooling rate on the temperature and viscosity of the extrudate exiting the nozzle. Moreover, the temperature and its corresponding viscosity at different times were determined by numerical simulation. At the highest shear rates, the extrudate deforms from its typical cylindrical shape. These results are required to predict the coalescence of filaments, a step towards understanding the mechanical properties of the printed parts.
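As a rough illustration of the coupled transport problem (not the study's COMSOL model), here is a 1-D explicit advection-diffusion step for the melt temperature along the nozzle axis, followed by an assumed Arrhenius-type viscosity law; every material value is a placeholder:

```python
import numpy as np

nx, L = 200, 0.01                      # grid points, nozzle length (m)
dx = L / nx
u = 0.005                              # m/s, assumed extrusion velocity
alpha = 1.0e-7                         # m^2/s, assumed thermal diffusivity
dt = 0.2 * dx**2 / alpha               # within the explicit stability limit

T = np.full(nx, 210.0)                 # deg C, liquefier temperature (inlet held)
T_env = 60.0                           # deg C, assumed environment temperature

for _ in range(2000):
    adv = -u * (T[1:-1] - T[:-2]) / dx                    # upwind advection
    dif = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2  # conduction
    T[1:-1] += dt * (adv + dif)
    T[-1] += dt * 5.0 * (T_env - T[-1])                   # crude exit cooling

# Map temperature to viscosity with an assumed Arrhenius dependence (Pa s).
eta = 1.0e3 * np.exp(4000.0 * (1.0 / (T + 273.15) - 1.0 / 483.15))
```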
Defect reduction of high-density full-field patterns in jet and flash imprint lithography
NASA Astrophysics Data System (ADS)
Singh, Lovejeet; Luo, Kang; Ye, Zhengmao; Xu, Frank; Haase, Gaddi; Curran, David; LaBrake, Dwayne; Resnick, Douglas; Sreenivasan, S. V.
2011-04-01
Imprint lithography has been shown to be an effective technique for replication of nano-scale features. Jet and Flash Imprint Lithography (J-FIL) involves the field-by-field deposition and exposure of a low-viscosity resist deposited by jetting technology onto the substrate. The patterned mask is lowered into the fluid, which then quickly flows into the relief patterns in the mask by capillary action. Following this filling step, the resist is crosslinked under UV radiation, and then the mask is removed, leaving a patterned resist on the substrate. Acceptance of imprint lithography for manufacturing will require demonstration that it can attain defect levels commensurate with the defect specifications of high-end memory devices. Typical defectivity targets are on the order of 0.10/cm2. This work summarizes the results of defect inspections focusing on two key defect types: random non-fill defects occurring during the resist filling process, and repeater defects caused by interactions with particles on the substrate. Non-fill defectivity must always be considered within the context of process throughput. The key throughput-limiting step in an imprint process is resist filling time. As a result, it is critical to characterize the filling process by measuring non-fill defectivity as a function of fill time. Repeater defects typically have two main sources: mask defects and particle-related defects. Previous studies have indicated that soft particles tend to cause non-repeating defects. Hard particles, on the other hand, can cause either resist plugging or mask damage. In this work, an Imprio 500 twenty-wafer-per-hour (wph) development tool was used to study both defect types. By carefully controlling the volume of inkjetted resist, optimizing the drop pattern and controlling the resist fluid front during spreading, fill times of 1.5 seconds were achieved with non-fill defect levels of approximately 1.2/cm2. Longevity runs were used to study repeater defects, and nickel contamination was identified as the key source of particle-induced repeater defects.
Interference lithography for optical devices and coatings
NASA Astrophysics Data System (ADS)
Juhl, Abigail Therese
Interference lithography can create large-area, defect-free nanostructures with unique optical properties. In this thesis, interference lithography will be utilized to create photonic crystals for functional devices or coatings. For instance, typical lithographic processing techniques were used to create 1-, 2- and 3-dimensional photonic crystals in SU8 photoresist. These structures were in-filled with birefringent liquid crystal to make active devices, and the orientation of the liquid crystal directors within the SU8 matrix was studied. Most of this thesis will be focused on utilizing polymerization-induced phase separation as a single-step method for fabrication by interference lithography. For example, layered polymer/nanoparticle composites have been created through the one-step two-beam interference lithographic exposure of a dispersion of 25 and 50 nm silica particles within a photopolymerizable mixture at a wavelength of 532 nm. In the areas of constructive interference, the monomer begins to polymerize via a free-radical process and concurrently the nanoparticles move into the regions of destructive interference. The holographic exposure of the particles within the monomer resin offers a single-step method to anisotropically structure the nanoconstituents within a composite. A one-step holographic exposure was also used to fabricate self-healing coatings that use water from the environment to catalyze polymerization. Polymerization-induced phase separation was used to sequester an isocyanate monomer within an acrylate matrix. Due to the periodic modulation of the index of refraction between the monomer and polymer, the coating can reflect a desired wavelength, allowing for tunable coloration. When the coating is scratched, polymerization of the liquid isocyanate is catalyzed by moisture in air; if the indices of the two polymers are matched, the coatings turn transparent after healing. Interference lithography offers a method of creating multifunctional self-healing coatings that read out when damage has occurred.
Hu, Chong; Lin, Sheng; Li, Wanbo; Sun, Han; Chen, Yangfan; Chan, Chiu-Wing; Leung, Chung-Hang; Ma, Dik-Lung; Wu, Hongkai; Ren, Kangning
2016-10-05
An ultra-fast, extremely cost-effective, and environmentally friendly method was developed for fabricating flexible microfluidic chips with plastic membranes. With this method, we could fabricate plastic microfluidic chips rapidly (within 12 seconds per piece) at an extremely low cost (less than $0.02 per piece). We used a heated perfluoropolymer perfluoroalkoxy (often called Teflon PFA) solid stamp to press a pile of two pieces of plastic membranes, low density polyethylene (LDPE) and polyethylene terephthalate (PET) coated with an ethylene-vinyl acetate copolymer (EVA). During the short period of contact with the heated PFA stamp, the pressed area of the membranes permanently bonded, while the LDPE membrane spontaneously rose up at the area not pressed, forming microchannels automatically. These two regions were clearly distinguishable even at the micrometer scale so we were able to fabricate microchannels with widths down to 50 microns. This method combines the two steps in the conventional strategy for microchannel fabrication, generating microchannels and sealing channels, into a single step. The production is a green process without using any solvent or generating any waste. Also, the chips showed good resistance against the absorption of Rhodamine 6G, oligonucleotides, and green fluorescent protein (GFP). We demonstrated some typical microfluidic manipulations with the flexible plastic membrane chips, including droplet formation, on-chip capillary electrophoresis, and peristaltic pumping for quantitative injection of samples and reagents. In addition, we demonstrated convenient on-chip detection of lead ions in water samples by a peristaltic-pumping design, as an example of the application of the plastic membrane chips in a resource-limited environment. Due to the high speed and low cost of the fabrication process, this single-step method will facilitate the mass production of microfluidic chips and commercialization of microfluidic technologies.
Biesiada, Grażyna; Czepiel, Jacek; Leśniak, Maciej R; Garlicki, Aleksander; Mach, Tomasz
2012-12-20
Lyme disease is a multi-organ animal-borne disease, caused by spirochetes of Borrelia burgdorferi (Bb), which typically affect the skin, nervous system, musculoskeletal system and heart. A history of confirmed exposure to tick bites, typical signs and symptoms of Lyme borreliosis and positive tests for anti-Bb antibodies, are the basis of a diagnosis. A two-step diagnosis is necessary: the first step is based on a high sensitivity ELISA test with positive results confirmed by a more specific Western blot assay. Antibiotic therapy is curative in most cases, but some patients develop chronic symptoms, which do not respond to antibiotics. The aim of this review is to summarize our current knowledge of the symptoms, clinical diagnosis and treatment of Lyme borreliosis.
Sol-Gel Process for Making Pt-Ru Fuel-Cell Catalysts
NASA Technical Reports Server (NTRS)
Narayanan, Sekharipuram; Valdez, Thomas; Kumta, Prashant; Kim, Y.
2005-01-01
A sol-gel process has been developed as a superior alternative to a prior process for making platinum-ruthenium alloy catalysts for electro-oxidation of methanol in fuel cells. The starting materials in the prior process are chloride salts of platinum and ruthenium. The process involves multiple steps, is time-consuming, and yields a Pt-Ru product that has relatively low specific surface area and contains some chloride residue. Low specific surface area translates to incomplete utilization of the catalytic activity that might otherwise be available, while chloride residue further reduces catalytic activity ("poisons" the catalyst). In contrast, the sol-gel process involves fewer steps and less time, does not leave chloride residue, and yields a product of greater specific area and, hence, greater catalytic activity. In this sol-gel process (see figure), the starting materials are platinum(II) acetylacetonate [Pt(C5H7O2)2, also denoted Pt-acac] and ruthenium(III) acetylacetonate [Ru(C5H7O2)3, also denoted Ru-acac]. First, Pt-acac and Ru-acac are dissolved in acetone at the desired concentrations (typically, 0.00338 moles of each salt per 100 mL of acetone) at a temperature of 50 °C. A solution of 25 percent tetramethylammonium hydroxide [(CH3)4NOH, also denoted TMAH] in methanol is added to the Pt-acac/Ru-acac/acetone solution to act as a high-molecular-weight hydrolyzing agent. The addition of the TMAH counteracts the undesired tendency of Pt-acac and Ru-acac to precipitate as separate phases during the subsequent evaporation of the solvent, thereby helping to yield a desired homogeneous amorphous gel. The solution is stirred for 10 minutes, then the solvent is evaporated until the solution becomes viscous, eventually transforming into a gel. The viscous gel is dried in air at a temperature of 170 °C for about 10 hours. The dried gel is crushed to make a powder that is the immediate precursor of the final catalytic product. The precursor powder is converted to the final product in a controlled-atmosphere heat treatment. Desirably, the final product is a phase-pure (Pt phase only) Pt-Ru powder with a high specific surface area. The conditions of the controlled-atmosphere heat treatment are critical for obtaining the aforementioned desired properties. A typical heat treatment that yields best results for a catalytic alloy of equimolar amounts of Pt and Ru consists of at least two cycles of heating to a temperature of 300 °C and holding at 300 °C for several hours, all carried out in an atmosphere of 1 percent O2 and 99 percent N2. The resulting powder consists of crystallites with typical linear dimensions of <10 nm. Tests have shown that the powder is highly effective in catalyzing the electro-oxidation of methanol.
Method and apparatus for monitoring plasma processing operations
Smith, Jr., Michael Lane; Ward, Pamela Denise Peardon; Stevenson, Joel O'Don
2002-01-01
The invention generally relates to various aspects of a plasma process, and more specifically the monitoring of such plasma processes. One aspect relates in at least some manner to calibrating or initializing a plasma monitoring assembly. This type of calibration may be used to address wavelength shifts, intensity shifts, or both associated with optical emissions data obtained on a plasma process. A calibration light may be directed at a window through which optical emissions data is being obtained to determine the effect, if any, that the inner surface of the window is having on the optical emissions data being obtained therethrough, the operation of the optical emissions data gathering device, or both. Another aspect relates in at least some manner to various types of evaluations which may be undertaken of a plasma process which was run, and more typically one which is currently being run, within the processing chamber. Plasma health evaluations and process identification through optical emissions analysis are included in this aspect. Yet another aspect associated with the present invention relates in at least some manner to the endpoint of a plasma process (e.g., plasma recipe, plasma clean, conditioning wafer operation) or discrete/discernible portion thereof (e.g., a plasma step of a multiple-step plasma recipe). Another aspect associated with the present invention relates to how one or more of the above-noted aspects may be implemented into a semiconductor fabrication facility, such as the distribution of wafers to a wafer production system. A final aspect of the present invention relates to a network of a plurality of plasma monitoring systems, including with remote capabilities (i.e., outside of the clean room).
Method of making gas diffusion layers for electrochemical cells
Frisk, Joseph William; Boand, Wayne Meredith; Larson, James Michael
2002-01-01
A method is provided for making a gas diffusion layer for an electrochemical cell comprising the steps of: a) combining carbon particles and one or more surfactants in a typically aqueous vehicle to make a preliminary composition, typically by high shear mixing; b) adding one or more highly fluorinated polymers to said preliminary composition by low shear mixing to make a coating composition; and c) applying the coating composition to an electrically conductive porous substrate, typically by a low shear coating method.
Patterning technology for solution-processed organic crystal field-effect transistors
Li, Yun; Sun, Huabin; Shi, Yi; Tsukagoshi, Kazuhito
2014-01-01
Organic field-effect transistors (OFETs) are fundamental building blocks for various state-of-the-art electronic devices. Solution-processed organic crystals are appreciable materials for these applications because they facilitate large-scale, low-cost fabrication of devices with high performance. Patterning organic crystal transistors into well-defined geometric features is necessary to develop these crystals into practical semiconductors. This review provides an update on recent developments in patterning technology for solution-processed organic crystals and their applications in field-effect transistors. Typical demonstrations are discussed and examined. In particular, our latest research progress on the spin-coating technique from mixture solutions is presented as a promising method to efficiently produce large organic semiconducting crystals on various substrates for high-performance OFETs. This solution-based process also has other excellent advantages, such as phase separation for self-assembled interfaces via one-step spin-coating, self-flattening of rough interfaces, and in situ purification that eliminates impurity influences. Furthermore, recommendations for future perspectives are presented, and key issues for further development are discussed. PMID:27877656
Su, Ting; Hong, Kwon Ho; Zhang, Wannian; Li, Fei; Li, Qiang; Yu, Fang; Luo, Genxiang; Gao, Honghe; He, Yu-Peng
2017-06-07
A series of phthalic acid derivatives (P) with a carbon-chain tail was designed and synthesized as single-component gelators. A combination of the single-component gelator P and a non-gelling additive n-alkylamine A through acid-base interaction brought about a series of novel phase-selective two-component gelators PA. The gelation capabilities of P and PA, and the structural, morphological, thermo-dynamic and rheological properties of the corresponding gels were investigated. A molecular dynamics simulation showed that the H-bonding network in PA formed between the NH of A and the carbonyl oxygen of P altered the assembly process of gelator P. Crude PA could be synthesized through a one-step process without any purification and could selectively gel the oil phase without a typical heating-cooling process. Moreover, such a crude PA and its gelation process could be amplified to the kilogram scale with high efficiency, which offers a practical economically viable solution to marine oil-spill recovery.
The Scientific Method and Scientific Inquiry: Tensions in Teaching and Learning
ERIC Educational Resources Information Center
Tang, Xiaowei; Coffey, Janet E.; Elby, Andy; Levin, Daniel M.
2010-01-01
Typically, the scientific method in science classrooms takes the form of discrete, ordered steps meant to guide students' inquiry. In this paper, we examine how focusing on the scientific method as discrete steps affects students' inquiry and teachers' perceptions thereof. To do so, we study a ninth-grade environmental science class in which…
Composting. Sludge Treatment and Disposal Course #166. Instructor's Guide [and] Student Workbook.
ERIC Educational Resources Information Center
Arasmith, E. E.
Composting is a lesson developed for a sludge treatment and disposal course. The lesson discusses the basic theory of composting and the basic operation, in a step-by-step sequence, of the two typical composting procedures: windrow and forced air static pile. The lesson then covers basic monitoring and operational procedures. The instructor's…
CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergman, Rolf; Paget, Maria L.; Richman, Eric E.
2011-03-31
With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, Lighting Facts Label, ENERGY STAR® energy-efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement, in individual as well as combined ways. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements for light emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed and step-by-step method for determining uncertainty in lumen measurements, developed in close coordination with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. The steps of the procedure are described, and a spreadsheet format adapted for integrating-sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate values and, finally, obtaining expanded uncertainties. Using this basis and examining each step of the photometric measurement and calibration methods, mathematical uncertainty models are developed. Determination of estimated values of input variables is discussed. Guidance is provided for the evaluation of the standard uncertainties of each input estimate, covariances associated with input estimates and the calculation of the result measurements. With this basis, the combined uncertainty of the measurement results and, finally, the expanded uncertainty can be determined.
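In GUM terms, the combined standard uncertainty of a result with uncorrelated inputs is u_c = sqrt(sum_i (c_i u_i)^2), with c_i the sensitivity coefficients, and the expanded uncertainty is U = k u_c. A toy spreadsheet-style calculation follows; the component names and values are invented, not CALiPER's actual budget:

```python
import math

# (sensitivity coefficient c_i, standard uncertainty u_i in %) per component;
# all entries are illustrative only.
components = {
    "detector calibration": (1.0, 0.8),
    "self-absorption":      (1.0, 0.5),
    "stray light":          (1.0, 0.3),
    "temperature drift":    (0.4, 0.5),
}

u_c = math.sqrt(sum((c * u) ** 2 for c, u in components.values()))
U = 2.0 * u_c   # expanded uncertainty, coverage factor k = 2 (~95 % level)
print(f"u_c = {u_c:.2f} %,  U(k=2) = {U:.2f} %")
```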
Building high-quality assay libraries for targeted analysis of SWATH MS data.
Schubert, Olga T; Gillet, Ludovic C; Collins, Ben C; Navarro, Pedro; Rosenberger, George; Wolski, Witold E; Lam, Henry; Amodei, Dario; Mallick, Parag; MacLean, Brendan; Aebersold, Ruedi
2015-03-01
Targeted proteomics by selected/multiple reaction monitoring (S/MRM) or, on a larger scale, by SWATH (sequential window acquisition of all theoretical spectra) MS (mass spectrometry) typically relies on spectral reference libraries for peptide identification. Quality and coverage of these libraries are therefore of crucial importance for the performance of the methods. Here we present a detailed protocol that has been successfully used to build high-quality, extensive reference libraries supporting targeted proteomics by SWATH MS. We describe each step of the process, including data acquisition by discovery proteomics, assertion of peptide-spectrum matches (PSMs), generation of consensus spectra and compilation of MS coordinates that uniquely define each targeted peptide. Crucial steps such as false discovery rate (FDR) control, retention time normalization and handling of post-translationally modified peptides are detailed. Finally, we show how to use the library to extract SWATH data with the open-source software Skyline. The protocol takes 2-3 d to complete, depending on the extent of the library and the computational resources available.
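One of the crucial steps, FDR control, is commonly done by target-decoy competition; a minimal sketch is shown below. The PSM tuple format is hypothetical, standing in for whatever a real pipeline (e.g. Skyline or a search engine) emits:

```python
def score_cutoff(psms, fdr_max=0.01):
    """psms: iterable of (score, is_decoy) pairs. Returns the lowest score at
    which the estimated FDR (decoys/targets among accepted PSMs) <= fdr_max."""
    n_target = n_decoy = 0
    cutoff = None
    for score, is_decoy in sorted(psms, key=lambda p: -p[0]):
        n_decoy += is_decoy
        n_target += not is_decoy
        if n_target and n_decoy / n_target <= fdr_max:
            cutoff = score
    return cutoff

# Example: mostly high-scoring targets with a few decoys sprinkled in.
psms = [(9.1, False), (8.7, False), (8.2, True), (7.9, False), (3.0, True)]
print(score_cutoff(psms, fdr_max=0.34))   # -> 7.9
```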
An atomistic simulation scheme for modeling crystal formation from solution.
Kawska, Agnieszka; Brickmann, Jürgen; Kniep, Rüdiger; Hochrein, Oliver; Zahn, Dirk
2006-01-14
We present an atomistic simulation scheme for investigating crystal growth from solution. Molecular-dynamics simulation studies of such processes typically suffer from considerable limitations concerning both system size and simulation times. In our method this time-length scale problem is circumvented by an iterative scheme which combines a Monte Carlo-type approach for the identification of ion adsorption sites and, after each growth step, structural optimization of the ion cluster and the solvent by means of molecular-dynamics simulation runs. An important approximation of our method is the assumption of full structural relaxation of the aggregates between each of the growth steps. This concept only holds for compounds of low solubility. To illustrate our method we studied CaF2 aggregate growth from aqueous solution, which may be taken as a prototype for compounds of very low solubility. The limitations of our simulation scheme are illustrated by the example of NaCl aggregation from aqueous solution, which corresponds to a solute/solvent combination of very high salt solubility.
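A schematic of the iterative growth loop described above might look as follows; `propose_adsorption_site`, `md_relax` and `energy` are placeholders for the Monte Carlo site search, molecular-dynamics relaxation and scoring the authors describe, not their actual implementation.

```python
def grow_cluster(cluster, ions, n_growth_steps,
                 propose_adsorption_site, md_relax, energy):
    """Sketch of the iterative MC/MD growth scheme (not the authors' code).

    Each growth step: (1) a Monte Carlo-type search over trial adsorption
    sites for the incoming ion, keeping the lowest-energy candidate, and
    (2) full structural relaxation of the cluster plus solvent by short
    molecular-dynamics runs. Assuming complete relaxation between steps
    is valid only for compounds of low solubility.
    """
    for _ in range(n_growth_steps):
        ion = ions.pop()
        # Trial placements of the incoming ion around the current cluster
        candidates = [propose_adsorption_site(cluster, ion)
                      for _ in range(50)]
        best = min(candidates, key=lambda site: energy(cluster, site))
        cluster = md_relax(cluster + [best])  # relax aggregate and solvent
    return cluster
```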
Patiño, Yolanda; Mantecón, Laura G; Polo, Sara; Faba, Laura; Díaz, Eva; Ordóñez, Salvador
2018-01-01
Secondary sludge from municipal wastewater treatment plants is proposed as a promising alternative lipid feedstock for biodiesel production. A deep study combining different types of raw material (sludge coming from the oxic, anoxic and anaerobic steps of the biological treatment) with different technologies (liquid-liquid and solid-liquid extraction followed by acid-catalysed transesterification, and an in situ extraction-transesterification procedure) allows a complete comparison of the available technologies. Different parameters - contact time, catalyst concentration, pretreatments - were considered, obtaining more than 17% FAME yield after 50 min of sonication with the in situ procedure and 5% H₂SO₄. This result corresponds to an increase of more than 65% with respect to the best results reported at typical conditions. Experimental data were used to propose a mathematical model for this process, demonstrating that the mass transfer of lipids from the sludge to the liquid is the limiting step. Copyright © 2017 Elsevier Ltd. All rights reserved.
Translation between representation languages
NASA Technical Reports Server (NTRS)
Vanbaalen, Jeffrey
1994-01-01
A capability for translating between representation languages is critical for effective knowledge base reuse. A translation technology for knowledge representation languages based on the use of an interlingua for communicating knowledge is described. The interlingua-based translation process consists of three major steps: translation from the source language into a subset of the interlingua, translation between subsets of the interlingua, and translation from a subset of the interlingua into the target language. The first translation step into the interlingua can typically be specified in the form of a grammar that describes how each top-level form in the source language translates into the interlingua. In cases where the source language does not have a declarative semantics, such a grammar is also a specification of a declarative semantics for the language. A methodology for building translators that is currently under development is described. A 'translator shell' based on this methodology is also under development. The shell has been used to build translators for multiple representation languages and those translators have successfully translated nontrivial knowledge bases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Dong; Liu, Dan; Harris, Joshua B.
A mechanism for the sulfur cathode in Li-S batteries is proposed, revealed by real-time quantitative determination of polysulfide species and elemental sulfur by high-performance liquid chromatography over the course of the discharge and recharge of a Li-S battery. A three-step reduction mechanism including two chemical equilibrium reactions is proposed for the sulfur cathode discharge, which explains the typical two-plateau discharge curve of the sulfur cathode. A two-step oxidation mechanism for Li₂S and Li₂S₂ with a single chemical equilibrium among soluble polysulfide ions is proposed. In conclusion, the chemical equilibrium among S₅²⁻, S₆²⁻, S₇²⁻ and S₈²⁻ throughout the entire oxidation process accounts for the single flat recharge curve in Li-S batteries.
Compound image segmentation of published biomedical figures.
Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit
2018-04-01
Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and bio-databases communities to store images from publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing them. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at: https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request (shatkay@udel.edu). Supplementary data are available online at Bioinformatics.
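FigSplit itself is more elaborate (it adds the quality-assessment and re-segmentation stages described above), but the underlying Connected Component Analysis step can be sketched with standard tools; the threshold and minimum-area values below are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def split_panels(figure, background_thresh=240, min_area=5000):
    """Crude compound-figure splitter (illustrative, not FigSplit itself).

    Assumes panels are dark content separated by near-white gutters:
    label connected components of the non-background mask and return
    the bounding box of each sufficiently large component.
    """
    gray = figure.mean(axis=2) if figure.ndim == 3 else figure
    mask = gray < background_thresh          # non-white pixels = content
    labels, _ = ndimage.label(mask)          # connected component analysis
    boxes = []
    for sl in ndimage.find_objects(labels):
        h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
        if h * w >= min_area:                # drop specks and stray labels
            boxes.append(sl)
    return boxes                             # list of (row_slice, col_slice)
```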
Development of Level 2 Calibration and Validation Plans for GOES-R; What is a RIMP?
NASA Technical Reports Server (NTRS)
Kopp, Thomas J.; Belsma, Leslie O.; Mollner, Andrew K.; Sun, Ziping; Deluccia, Frank
2017-01-01
Calibration and Validation (CalVal) plans for Geostationary Operational Environmental Satellite version R (GOES-R) Level 2 (L2) products were documented via Resource, Implementation, and Management Plans (RIMPs) for all of the official L2 products required from the GOES-R Advanced Baseline Imager (ABI). In 2015 the GOES-R program decided to replace the typical CalVal plans with RIMPs that covered, for a given L2 product, what was required from that product, how it would be validated, and what tools would be used to do so. Similar to Level 1b products, the intent was to cover the full spectrum of planning required for the CalVal of L2 ABI products. Instead of focusing on step-by-step procedures, the RIMPs concentrated on the criteria for each stage of the validation process (Beta, Provisional, and Full Validation) and the many elements required to prove when each stage was reached.
Sensation-to-cognition cortical streams in attention-deficit/hyperactivity disorder.
Carmona, Susana; Hoekzema, Elseline; Castellanos, Francisco X; García-García, David; Lage-Castellanos, Agustín; Van Dijk, Koene R A; Navas-Sánchez, Francisco J; Martínez, Kenia; Desco, Manuel; Sepulcre, Jorge
2015-07-01
We sought to determine whether functional connectivity streams that link sensory, attentional, and higher-order cognitive circuits are atypical in attention-deficit/hyperactivity disorder (ADHD). We applied a graph-theory method to the resting-state functional magnetic resonance imaging data of 120 children with ADHD and 120 age-matched typically developing children (TDC). Starting in unimodal primary cortex-visual, auditory, and somatosensory-we used stepwise functional connectivity to calculate functional connectivity paths at discrete numbers of relay stations (or link-step distances). First, we characterized the functional connectivity streams that link sensory, attentional, and higher-order cognitive circuits in TDC and found that systems do not reach the level of integration achieved by adults. Second, we searched for stepwise functional connectivity differences between children with ADHD and TDC. We found that, at the initial steps of sensory functional connectivity streams, patients display significant enhancements of connectivity degree within neighboring areas of primary cortex, while connectivity to attention-regulatory areas is reduced. Third, at subsequent link-step distances from primary sensory cortex, children with ADHD show decreased connectivity to executive processing areas and increased degree of connections to default mode regions. Fourth, in examining medication histories in children with ADHD, we found that children medicated with psychostimulants present functional connectivity streams with higher degree of connectivity to regions subserving attentional and executive processes compared to medication-naïve children. We conclude that predominance of local sensory processing and lesser influx of information to attentional and executive regions may reduce the ability to organize and control the balance between external and internal sources of information in ADHD. © 2015 Wiley Periodicals, Inc.
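Stepwise functional connectivity, as used here, counts the connectivity paths linking seed regions to every other node at a given link-step distance; on a binarized graph this reduces to powers of the adjacency matrix, as in this illustrative sketch (not the authors' pipeline):

```python
import numpy as np

def stepwise_degree(adj, seed_nodes, n_steps):
    """Degree of stepwise connectivity from seed regions (sketch).

    adj: binarized, symmetric functional connectivity matrix (n x n).
    Powers of the adjacency matrix count walks of a given length, so
    summing rows over the seed nodes gives, for each node, the number
    of paths reaching it in exactly k link-steps from the seeds.
    """
    a_k = np.eye(adj.shape[0])
    maps = []
    for _ in range(n_steps):
        a_k = a_k @ adj                      # walks one step longer
        maps.append(a_k[seed_nodes].sum(axis=0))
    return np.array(maps)                    # shape: (n_steps, n_nodes)
```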
Non-rigid CT/CBCT to CBCT registration for online external beam radiotherapy guidance
NASA Astrophysics Data System (ADS)
Zachiu, Cornel; de Senneville, Baudouin Denis; Tijssen, Rob H. N.; Kotte, Alexis N. T. J.; Houweling, Antonetta C.; Kerkmeijer, Linda G. W.; Lagendijk, Jan J. W.; Moonen, Chrit T. W.; Ries, Mario
2018-01-01
Image-guided external beam radiotherapy (EBRT) allows radiation dose deposition with a high degree of accuracy and precision. Guidance is usually achieved by estimating the displacements, via image registration, between cone beam computed tomography (CBCT) and computed tomography (CT) images acquired at different stages of the therapy. The resulting displacements are then used to reposition the patient such that the location of the tumor at the time of treatment matches its position during planning. Moreover, ongoing research aims to use CBCT-CT image registration for online plan adaptation. However, CBCT images are usually acquired using a small number of x-ray projections and/or low beam intensities. This often leads to images with low contrast, low signal-to-noise ratio and artifacts, which ends up hampering the image registration process. Previous studies addressed this by integrating additional image processing steps into the registration procedure. However, these steps are usually designed for particular image acquisition schemes, limiting their use to a case-by-case basis. In the current study we address CT to CBCT and CBCT to CBCT registration by means of the recently proposed EVolution registration algorithm. Contrary to previous approaches, EVolution does not require the integration of additional image processing steps in the registration scheme. Moreover, the algorithm requires a low number of input parameters, is easily parallelizable and provides an elastic deformation on a point-by-point basis. Results have shown that, relative to a pure CT-based registration, the intrinsic artifacts present in typical CBCT images have only a sub-millimeter impact on the accuracy and precision of the estimated deformation. In addition, the algorithm has low computational requirements, which are compatible with online image-based guidance of EBRT treatments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewsuk, K.G.; Cochran, R.J.; Blackwell, B.F.
The properties and performance of a ceramic component are determined by a combination of the materials from which it was fabricated and how it was processed. Most ceramic components are manufactured by dry pressing a powder/binder system in which the organic binder provides formability and green compact strength. A key step in this manufacturing process is the removal of the binder from the powder compact after pressing. The organic binder is typically removed by a thermal decomposition process in which heating rate, temperature, and time are the key process parameters. Empirical approaches are generally used to design the burnout time-temperature cycle, often resulting in excessive processing times and energy usage, and higher overall manufacturing costs. Ideally, binder burnout should be completed as quickly as possible without damaging the compact, while using a minimum of energy. Process and computational modeling offer one means to achieve this end. The objective of this study is to develop an experimentally validated computer model that can be used to better understand, control, and optimize binder burnout from green ceramic compacts.
NASA Astrophysics Data System (ADS)
Omar, M. A.; Parvataneni, R.; Zhou, Y.
2010-09-01
This manuscript describes the implementation of a two-step processing procedure composed of self-referencing and Principal Component Thermography (PCT). The combined approach enables the processing of thermograms from transient (flash), steady (halogen) and selective (induction) thermal perturbations. First, the research discusses the three basic processing schemes typically applied in thermography: mathematical-transformation-based processing, curve-fitting processing, and direct contrast-based calculations. The proposed algorithm uses the self-referencing scheme to create a sub-sequence that contains the maximum contrast information and to compute the anomalies' depth values, while the PCT operates on the sub-sequence frames by re-arranging their data content (pixel values) spatially and temporally and then highlighting the data variance. The PCT is mainly used as a mathematical means to enhance the defects' contrast, enabling retrieval of their shape and size. The results show that the proposed combined scheme is effective in processing multiple-size defects in a sandwich steel structure in real time (<30 Hz) and with full spatial coverage, without the need for an a priori defect-free area.
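PCT is conventionally implemented as a singular value decomposition of the standardized thermogram sequence; a minimal sketch of that stage, applied after the self-referencing sub-sequence selection, is shown below (assuming the sub-sequence fits in memory and has fewer frames than pixels).

```python
import numpy as np

def pct(frames, n_components=3):
    """Principal Component Thermography on a thermogram sub-sequence.

    frames: array of shape (n_frames, height, width). Each pixel's time
    history is standardized, the (time x pixels) matrix is decomposed by
    SVD, and the leading spatial components (empirical orthogonal
    functions) are returned -- these typically enhance defect contrast
    for shape and size retrieval.
    """
    t, h, w = frames.shape
    a = frames.reshape(t, h * w).astype(float)
    a -= a.mean(axis=0)                 # remove per-pixel temporal mean
    a /= a.std(axis=0) + 1e-12          # standardize each pixel history
    _, _, vt = np.linalg.svd(a, full_matrices=False)
    return vt[:n_components].reshape(n_components, h, w)
```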
δ-exceedance records and random adaptive walks
NASA Astrophysics Data System (ADS)
Park, Su-Chan; Krug, Joachim
2016-08-01
We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k−1} − δ_{k−1}, with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first-order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
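The process is easy to simulate directly, which is useful for checking the phase behaviour described above; this sketch assumes exponential variables and a constant handicap δ.

```python
import numpy as np

def delta_records(n, delta, seed=0):
    """Simulate the delta-exceedance record process with constant handicap:
    an entry is a record if it exceeds the last record value minus delta."""
    y = np.random.default_rng(seed).exponential(size=n)  # i.i.d. sequence
    record_indices, current = [], -np.inf
    for k, yk in enumerate(y):
        if yk > current - delta:   # exceedance condition relaxed by delta
            record_indices.append(k)
            current = yk
    return record_indices

# Fraction of entries that are records for two handicap values:
for d in (0.5, 2.0):
    print(d, len(delta_records(100_000, d)) / 100_000)
```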
Approaches of multilayer overlay process control for 28nm FD-SOI derivative applications
NASA Astrophysics Data System (ADS)
Duclaux, Benjamin; De Caunes, Jean; Perrier, Robin; Gatefait, Maxime; Le Gratiet, Bertrand; Chapon, Jean-Damien; Monget, Cédric
2018-03-01
Derivative technologies like embedded Non-Volatile Memories (eNVM) are raising new types of challenges on the "more than Moore" path. By its construction, overlay is critical across multiple layers; by its running mode, the use of high voltages stresses leakage and breakdown; and finally its targeted markets - automotive, industry automation, secure transactions - all demand high device reliability (typically below the 1 ppm level). As a consequence, overlay specifications are tight, not only between one layer and its reference, but also among the critical layers sharing the same reference. This work describes a broad picture of the key points for multilayer overlay process control in the case of a 28nm FD-SOI technology and its derivative flows. First, the alignment trees of the different flow options were optimized using realistic process-assumption calculations for indirect overlay. Then, in the case of a complex alignment tree involving a heterogeneous scanner toolset, the criticality of tool matching between the reference layer and the critical layers of the flow is highlighted. Improving the APC control loops of these multilayer dependencies was studied with feed-forward simulations as well as by implementing a new rework algorithm based on multiple measurements. Finally, the management of these measurement steps raises some issues for inline support, and using calculations or "virtual overlay" could help to regain some tool capability. A first step towards multilayer overlay process control has been taken.
Hasse, J U; Weingaertner, D E
2016-01-01
As the central product of the BMBF-KLIMZUG-funded Joint Network and Research Project (JNRP) 'dynaklim - Dynamic adaptation of regional planning and development processes to the effects of climate change in the Emscher-Lippe region (North Rhine Westphalia, Germany)', the Roadmap 2020 'Regional Climate Adaptation' has been developed by the various regional stakeholders and institutions containing specific regional scenarios, strategies and adaptation measures applicable throughout the region. This paper presents the method, elements and main results of this regional roadmap process by using the example of the thematic sub-roadmap 'Water Sensitive Urban Design 2020'. With a focus on the process support tool 'KlimaFLEX', one of the main adaptation measures of the WSUD 2020 roadmap, typical challenges for integrated climate change adaptation like scattered knowledge, knowledge gaps and divided responsibilities but also potential solutions and promising chances for urban development and urban water management are discussed. With the roadmap and the related tool, the relevant stakeholders of the Emscher-Lippe region have jointly developed important prerequisites to integrate their knowledge, to clarify vulnerabilities, adaptation goals, responsibilities and interests, and to foresightedly coordinate measures, resources, priorities and schedules for an efficient joint urban planning, well-grounded decision-making in times of continued uncertainties and step-by-step implementation of adaptation measures from now on.
Skjerdal, Taran; Gefferth, Andras; Spajic, Miroslav; Estanga, Edurne Gaston; de Cecare, Alessandra; Vitali, Silvia; Pasquali, Frederique; Bovo, Federica; Manfreda, Gerardo; Mancusi, Rocco; Trevisiani, Marcello; Tessema, Girum Tadesse; Fagereng, Tone; Moen, Lena Haugland; Lyshaug, Lars; Koidis, Anastasios; Delgado-Pando, Gonzalo; Stratakos, Alexandros Ch; Boeri, Marco; From, Cecilie; Syed, Hyat; Muccioli, Mirko; Mulazzani, Roberto; Halbert, Catherine
2017-01-01
A prototype decision support IT-tool for the food industry was developed in the STARTEC project. Typical processes and decision steps were mapped using real life production scenarios of participating food companies manufacturing complex ready-to-eat foods. Companies looked for a more integrated approach when making food safety decisions that would align with existing HACCP systems. The tool was designed with shelf life assessments and data on safety, quality, and costs, using a pasta salad meal as a case product. The process flow chart was used as starting point, with simulation options at each process step. Key parameters like pH, water activity, costs of ingredients and salaries, and default models for calculations of Listeria monocytogenes , quality scores, and vitamin C, were placed in an interactive database. Customization of the models and settings was possible on the user-interface. The simulation module outputs were provided as detailed curves or categorized as "good"; "sufficient"; or "corrective action needed" based on threshold limit values set by the user. Possible corrective actions were suggested by the system. The tool was tested and approved by end-users based on selected ready-to-eat food products. Compared to other decision support tools, the STARTEC-tool is product-specific and multidisciplinary and includes interpretation and targeted recommendations for end-users.
Digital-image processing and image analysis of glacier ice
Fitzpatrick, Joan J.
2013-01-01
This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
Comparison of performance of some common Hartmann-Shack centroid estimation methods
NASA Astrophysics Data System (ADS)
Thatiparthi, C.; Ommani, A.; Burman, R.; Thapa, D.; Hutchings, N.; Lakshminarayanan, V.
2016-03-01
The accuracy of the estimation of optical aberrations by measuring the distorted wave front using a Hartmann-Shack wave front sensor (HSWS) is mainly dependent upon the measurement accuracy of the centroid of the focal spot. The most commonly used methods for centroid estimation, such as the brightest-spot centroid, first-moment centroid, weighted center of gravity, and intensity-weighted center of gravity, are generally applied to the entire individual sub-apertures of the lenslet array. However, these centroid estimates are sensitive to the influence of reflections, scattered light, and noise, especially when the signal spot area is small compared to the whole sub-aperture area. In this paper, we compare the performance of the commonly used centroiding methods for estimating optical aberrations, with and without the use of some pre-processing steps (thresholding, Gaussian smoothing, and adaptive windowing). As an example we use the aberrations of a human eye model. This is done using raw data collected from a custom-made ophthalmic aberrometer and a model eye emulating myopic and hypermetropic defocus values up to 2 Diopters. We show that any simple centroiding algorithm is sufficient for ophthalmic applications to estimate aberrations within the typical clinically acceptable margin of a quarter Diopter, when certain pre-processing steps to reduce the impact of external factors are used.
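As an illustration of one combination compared in this paper, an intensity-weighted centre-of-gravity centroid preceded by a simple thresholding pre-step can be written as follows; the threshold fraction is an assumed illustrative parameter, not a value from the study.

```python
import numpy as np

def spot_centroid(sub_aperture, thresh_frac=0.3):
    """Intensity-weighted centre of gravity of one HSWS focal spot.

    A simple threshold (a fraction of the peak intensity) suppresses
    background, scattered light and read noise before the first-moment
    computation, mimicking the pre-processing step discussed above.
    """
    img = sub_aperture.astype(float)
    img -= img.min()
    img[img < thresh_frac * img.max()] = 0.0   # thresholding pre-step
    total = img.sum()
    if total == 0:
        return None                            # no detectable spot
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total
```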
Attractor reconstruction for non-linear systems: a methodological note
Nichols, J.M.; Nichols, J.D.
2001-01-01
Attractor reconstruction is an important step in the process of making predictions for non-linear time-series and in the computation of certain invariant quantities used to characterize the dynamics of such series. The utility of computed predictions and invariant quantities is dependent on the accuracy of attractor reconstruction, which in turn is determined by the methods used in the reconstruction process. This paper suggests methods by which the delay and embedding dimension may be selected for a typical delay coordinate reconstruction. A comparison is drawn between the use of the autocorrelation function and mutual information in quantifying the delay. In addition, a false nearest neighbor (FNN) approach is used in minimizing the number of delay vectors needed. Results highlight the need for an accurate reconstruction in the computation of the Lyapunov spectrum and in prediction algorithms.
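For reference, the delay-coordinate reconstruction itself (given a delay τ chosen, e.g., from the first minimum of the mutual information, and a dimension m from the FNN criterion) is only a few lines; the sketch below is generic, not tied to the paper.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Delay-coordinate reconstruction of a scalar time series.

    Returns the matrix of delay vectors
    [x(i), x(i + tau), ..., x(i + (m-1)*tau)], the standard starting
    point for prediction and for invariant quantities such as the
    Lyapunov spectrum.
    """
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (m, tau)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Example: embed a noisy sine wave with m = 3, tau = 25
t = np.linspace(0, 20 * np.pi, 2000)
vectors = delay_embed(np.sin(t) + 0.01 * np.random.randn(t.size), 3, 25)
```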
Computational experience with a three-dimensional rotary engine combustion model
NASA Astrophysics Data System (ADS)
Raju, M. S.; Willis, E. A.
1990-04-01
A new computer code was developed to analyze the chemically reactive flow and spray combustion processes occurring inside a stratified-charge rotary engine. Mathematical and numerical details of the new code were recently described by the present authors. The results are presented of limited, initial computational trials as a first step in a long-term assessment/validation process. The engine configuration studied was chosen to approximate existing rotary engine flow visualization and hot firing test rigs. Typical results include: (1) pressure and temperature histories, (2) torque generated by the nonuniform pressure distribution within the chamber, (3) energy release rates, and (4) various flow-related phenomena. These are discussed and compared with other predictions reported in the literature. The adequacy or need for improvement in the spray/combustion models and the need for incorporating an appropriate turbulence model are also discussed.
O’Connell, Sandra; ÓLaighin, Gearóid; Kelly, Lisa; Murphy, Elaine; Beirne, Sorcha; Burke, Niall; Kilgannon, Orlaith; Quinlan, Leo R.
2016-01-01
Introduction: Physical activity is a vitally important part of a healthy lifestyle, and is of major benefit to both physical and mental health. A daily step count of 10,000 steps is recommended globally to achieve an appropriate level of physical activity. Accurate quantification of physical activity during conditions reflecting those needed to achieve the recommended daily step count of 10,000 steps is essential. As such, we aimed to assess four commercial activity monitors for their sensitivity/accuracy in a prescribed walking route that reflects a range of surfaces that would typically be used to achieve the recommended daily step count, in two types of footwear expected to be used throughout the day when aiming to achieve the recommended daily step count, and in a timeframe required to do so. Methods: Four commercial activity monitors were worn simultaneously by participants (n = 15) during a prescribed walking route reflective of surfaces typically encountered while achieving the daily recommended 10,000 steps. Activity monitors tested were the Garmin Vivofit™, New Lifestyles' NL-2000™ pedometer, Withings Smart Activity Monitor Tracker (Pulse O2)™, and Fitbit One™. Results: All activity monitors tested were accurate in their step detection over the variety of different surfaces tested (natural lawn grass, gravel, ceramic tile, tarmacadam/asphalt, linoleum), when wearing both running shoes and hard-soled dress shoes. Conclusion: All activity monitors tested were accurate in their step detection sensitivity and are valid monitors for physical activity quantification over the variety of different surfaces tested, when wearing both running shoes and hard-soled dress shoes, and over a timeframe necessary for accumulating the recommended daily step count of 10,000 steps. However, it is important to consider the accuracy of activity monitors, particularly when physical activity in the form of stepping activities is prescribed as an intervention in the treatment or prevention of a disease state. PMID:27167121
Gamalski, A. D.; Tersoff, J.; Stach, E. A.
2016-04-13
We study the growth of GaN nanowires from liquid Au-Ga catalysts using environmental transmission electron microscopy. GaN wires grow in either <11-20> or <1-100> directions, by the addition of {1-100} double bilayers via step flow with multiple steps. Step-train growth is not typically seen with liquid catalysts, and we suggest that it results from low step mobility related to the unusual double-height step structure. Finally, the results here illustrate the surprising dynamics of catalytic GaN wire growth at the nanoscale and highlight striking differences between the growth of GaN and other III-V semiconductor nanowires.
Advancing the science of forensic data management
NASA Astrophysics Data System (ADS)
Naughton, Timothy S.
2002-07-01
Many individual elements comprise a typical forensics process. Collecting evidence, analyzing it, and using the results to draw conclusions are all mutually distinct endeavors. Different physical locations and personnel are involved, juxtaposed against an acute need for security and data integrity. Using digital technologies and the Internet's ubiquity, these diverse elements can be conjoined using digital data as the common element. The result is a new data management process that can be applied to serve all elements of the community. The first step is recognition of a forensics lifecycle. Evidence gathering, analysis, storage, and use in legal proceedings are actually just distinct parts of a single end-to-end process; it is thus hypothesized that a single data system can accommodate each constituent phase using common network and security protocols. This paper introduces the idea of a web-based Central Data Repository. Its cornerstone is anywhere, anytime Internet upload, viewing, and report distribution. Archives exist indefinitely after being created, and high-strength security and encryption protect data and ensure subsequent case file additions do not violate chain-of-custody or other handling provisions. Several legal precedents have been established for using digital information in courts of law, and in fact, effective prosecution of cyber crimes absolutely relies on its use. An example is a US Department of Agriculture division's use of digital images to back up its inspection process, with pictures and information retained on secure servers to enforce the Perishable Agricultural Commodities Act. Forensics is a cumulative process. Secure, web-based data management solutions, such as the Central Data Repository postulated here, can support each process step. Logically marrying digital technologies with Internet accessibility should help nurture a thought process to explore alternatives that make forensics data accessible to authorized individuals, whenever and wherever they need it.
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
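A toy version of the evaluation engine in this method might compile per-step data and compute summary metrics as follows; the step parameters and metric names are illustrative assumptions, not the patent's specification.

```python
def evaluate_process(steps):
    """Toy manufacturing-technique evaluation (illustrative only).

    steps: list of dicts of per-step characterization parameters, e.g.
    {"name": ..., "cycle_time": minutes, "queue_time": minutes,
     "value_added": bool}. Metrics are simple summations and ratios of
    the compiled step data, in the spirit of the method described above.
    """
    total = sum(s["cycle_time"] + s["queue_time"] for s in steps)
    value_added = sum(s["cycle_time"] for s in steps if s["value_added"])
    return {
        "total_lead_time": total,
        "value_added_time": value_added,
        "process_cycle_efficiency": value_added / total if total else 0.0,
    }

# Hypothetical batch process: long queues dominate the lead time.
batch = [
    {"name": "press", "cycle_time": 5, "queue_time": 120, "value_added": True},
    {"name": "inspect", "cycle_time": 2, "queue_time": 60, "value_added": False},
]
print(evaluate_process(batch))
```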
Caron, Jessica; Light, Janice; Drager, Kathryn
2016-01-01
Typically, the vocabulary in augmentative and alternative communication (AAC) technologies is pre-programmed by manufacturers or by parents and professionals outside of daily interactions. Because vocabulary needs are difficult to predict, young children who use aided AAC often do not have access to vocabulary concepts as the need and interest arises in their daily interactions, limiting their vocabulary acquisition and use. Ideally, parents and professionals would be able to add vocabulary to AAC technologies "just-in-time" as required during daily interactions. This study compared the effects of two AAC applications for mobile technologies, GoTalk Now (which required more programming steps) and EasyVSD (which required fewer programming steps), on the number of visual scene displays (VSDs) and hotspots created in 10-min interactions between eight professionals and preschool-aged children with typical development. The results indicated that, although all of the professionals were able to create VSDs and add vocabulary during interactions with the children, they created more VSDs and hotspots with the app with fewer programming steps. Child engagement and programming participation levels were high with both apps, but higher levels of both were observed with the app with fewer programming steps. These results suggest that apps with fewer programming steps may reduce operational demands and better support professionals to (a) respond to the child's input, (b) use just-in-time programming during interactions, (c) provide access to more vocabulary, and (d) increase participation.
Beginning the Principalship: A Practical Guide for New School Leaders. 2nd Edition.
ERIC Educational Resources Information Center
Daresh, John C.
This is a highly practical book for the first-year principal or for the assistant principal looking ahead to promotion. In place of abstract generalities, it offers real-life vignettes and scenarios that may be faced by typical first-year principals. The skill checklists are intended to be realistic and unintimidating. Step-by-step explanations…
ERIC Educational Resources Information Center
Penfield, Randall D.; Alvarez, Karina; Lee, Okhee
2009-01-01
The assessment of differential item functioning (DIF) in polytomous items addresses between-group differences in measurement properties at the item level, but typically does not inform which score levels may be involved in the DIF effect. The framework of differential step functioning (DSF) addresses this issue by examining between-group…
Czepiel, Jacek; Leśniak, Maciej R.; Garlicki, Aleksander; Mach, Tomasz
2012-01-01
Lyme disease is a multi-organ animal-borne disease, caused by spirochetes of Borrelia burgdorferi (Bb), which typically affect the skin, nervous system, musculoskeletal system and heart. A history of confirmed exposure to tick bites, typical signs and symptoms of Lyme borreliosis and positive tests for anti-Bb antibodies, are the basis of a diagnosis. A two-step diagnosis is necessary: the first step is based on a high sensitivity ELISA test with positive results confirmed by a more specific Western blot assay. Antibiotic therapy is curative in most cases, but some patients develop chronic symptoms, which do not respond to antibiotics. The aim of this review is to summarize our current knowledge of the symptoms, clinical diagnosis and treatment of Lyme borreliosis. PMID:23319969
Ion implanted dielectric elastomer circuits
NASA Astrophysics Data System (ADS)
O'Brien, Benjamin M.; Rosset, Samuel; Anderson, Iain A.; Shea, Herbert R.
2013-06-01
Starfish and octopuses control their infinite degree-of-freedom arms with panache—capabilities typical of nature, where the distribution of reflex-like intelligence throughout soft muscular networks greatly outperforms anything hard, heavy, and man-made. Dielectric elastomer actuators show great promise for soft artificial muscle networks. One way to make them smart is with piezo-resistive Dielectric Elastomer Switches (DES) that can be combined with artificial muscles to create arbitrary digital logic circuits. Unfortunately, there are currently no reliable materials or fabrication processes, so devices typically fail within a few thousand cycles. As a first step in the search for better materials we present a preliminary exploration of piezo-resistors made with filtered cathodic vacuum arc metal ion implantation. DES were formed on polydimethylsiloxane silicone membranes out of ion-implanted gold nano-clusters. We propose that there are four distinct regimes (high dose, above percolation, on percolation, low dose) in which gold ion implanted piezo-resistors can operate, and present experimental results on implanted piezo-resistors switching high voltages as well as a simple artificial muscle inverter. While gold ion implanted DES are limited by high hysteresis and low sensitivity, they already show promise for a range of applications including hysteretic oscillators and soft generators. With improvements to implanter process control the promise of artificial muscle circuitry for soft smart actuator networks could become a reality.
Ship Detection in SAR Image Based on the Alpha-stable Distribution
Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng
2008-01-01
This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm for spaceborne synthetic aperture radar (SAR) images based on the Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe the statistical characteristics of a SAR image's background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to each pixel identified as a possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm. Meanwhile, known ship location data from the time of the RADARSAT-1 SAR image acquisition are used to validate the ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
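The algorithmic change amounts to swapping the clutter model whose (1 − PFA) quantile sets the CFAR threshold; a sketch using SciPy's alpha-stable distribution is below (parameter values are illustrative, and the stable fit and quantile evaluation can be slow on large windows).

```python
import numpy as np
from scipy.stats import levy_stable, norm

def cfar_threshold_gaussian(clutter, pfa):
    """Two-parameter CFAR threshold under a Gaussian clutter model."""
    return norm.ppf(1.0 - pfa, loc=clutter.mean(), scale=clutter.std())

def cfar_threshold_alpha_stable(clutter, pfa):
    """CFAR threshold from a heavy-tailed alpha-stable model fitted to
    the local sea clutter; its (1 - PFA) quantile is the threshold."""
    alpha, beta, loc, scale = levy_stable.fit(clutter)
    return levy_stable.ppf(1.0 - pfa, alpha, beta, loc=loc, scale=scale)

# Spiky synthetic "sea clutter": the Gaussian threshold underestimates
# the tail, so it fires more false alarms than the alpha-stable one.
clutter = levy_stable.rvs(1.7, 0.9, loc=1.0, scale=0.3, size=2000,
                          random_state=np.random.default_rng(1))
for name, thr in [("gaussian", cfar_threshold_gaussian(clutter, 1e-3)),
                  ("alpha-stable", cfar_threshold_alpha_stable(clutter, 1e-3))]:
    print(name, thr, "exceedances:", int((clutter > thr).sum()))
```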
Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago
2016-01-01
Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and index data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.
Henry, Stephen G; Czarnecki, Danielle; Kahn, Valerie C; Chou, Wen-Ying Sylvia; Fagerlin, Angela; Ubel, Peter A; Rovner, David R; Alexander, Stewart C; Knight, Sara J; Holmes-Rovner, Margaret
2015-10-01
We know little about patient-physician communication during visits to discuss diagnosis and treatment of prostate cancer. Our objective was to examine the overall visit structure and how patients and physicians transition between communication activities during visits in which patients received new prostate cancer diagnoses. Participants were 40 veterans and 18 urologists at one VA medical centre. We coded 40 transcripts to identify major communication activities during visits and used empiric discourse analysis to analyse transitions between activities. We identified five communication activities that occurred in the following typical sequence: 'diagnosis delivery', 'risk classification', 'options talk', 'decision talk' and 'next steps'. The first two activities were typically brief and involved minimal patient participation. Options talk was typically the longest activity; physicians explicitly announced the beginning of options talk and framed it as their professional responsibility. Some patients were unsure of the purpose of the visit and/or who should make treatment decisions. Visits to deliver the diagnosis of early stage prostate cancer follow a regular sequence of communication activities. Physicians focus on discussing treatment options and devote comparatively little time and attention to discussing the new cancer diagnosis. Towards the goal of promoting patient-centred communication, physicians should consider eliciting patient reactions after diagnosis delivery and explaining the decision-making process before describing treatment options. © 2013 John Wiley & Sons Ltd.
Model for Simulating a Spiral Software-Development Process
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Curley, Charles; Nayak, Umanath
2010-01-01
A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.
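A drastically simplified sketch of such a spiral simulation, with each iteration treated as a mini-waterfall that produces code, injects defects, and passes escaped defects forward, is shown below; all rates are placeholder assumptions, not PATT's industry-average inputs or the IEEE 12207 model.

```python
def simulate_spiral(n_iterations, loc_per_iter, productivity_loc_per_hr,
                    defects_per_kloc, detect_frac=0.7, rework_hr=0.5):
    """Toy spiral-process simulation (not the PATT/IEEE 12207 model).

    Each iteration: code is produced at a fixed productivity, defects
    are injected in proportion to size, a fraction is detected and
    reworked, and the remainder escapes to the next iteration.
    """
    effort_hr, escaped = 0.0, 0.0
    for _ in range(n_iterations):
        effort_hr += loc_per_iter / productivity_loc_per_hr
        injected = defects_per_kloc * loc_per_iter / 1000 + escaped
        detected = detect_frac * injected
        effort_hr += detected * rework_hr        # rework effort per defect
        escaped = injected - detected            # latent defects carried on
    return {"effort_hr": effort_hr, "latent_defects": escaped,
            "size_loc": n_iterations * loc_per_iter}

print(simulate_spiral(4, 5000, 10, defects_per_kloc=20))
```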
Applications of Tunable TiO2 Nanotubes as Nanotemplate and Photovoltaic Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dongdong; Chang, Pai-Chun; Chien, Chung-Jen
2010-10-26
Highly ordered anodic titanium oxide (ATO) TiO₂ nanotube film has been synthesized via a typical two-step anodization method. Following a reductive doping approach, metallic materials (copper and nickel) can be efficiently electrodeposited into the nanotubes. This versatile process yields reproducible tubular structures in ATO membranes, because of the conductive nature of crystallized TiO₂, yielding promising potential for nanotemplate applications. In this paper, we present a dye-sensitized solar cell constructed by employing such ATO films. It is observed that the reductive doping treatment can also enhance the solar cell's short-circuit current density and fill factor, resulting in an improved energy conversion efficiency.
Synthesis and ferroelectric properties of La-substituted PZFNT
NASA Astrophysics Data System (ADS)
Singh, Pratibha; Singh, Sangeeta; Juneja, J. K.; Prakash, Chandra; Raina, K. K.; Kumar, Vinod; Pant, R. P.
2010-01-01
In this paper we report a systematic study of the ferroelectric properties of lanthanum (La)-substituted modified lead zirconate titanate (PLZFNT) ceramics fabricated by a mixed-oxide process. The La content was varied between 0 and 0.01 in steps of 0.0025. X-ray diffraction shows a single phase for all samples. Silver electrodes were deposited on the flat surfaces of sintered discs for P-E (polarization vs. electric field) measurements. All compositions exhibited well-defined ferroelectric behavior at room temperature. Hysteresis loops were also recorded at different temperatures for all compositions and showed the typical variation of ferroelectric behavior. The PLZFNT composition with 1 mol% La showed the best retention behavior. The results are discussed.
The Value of SysML Modeling During System Operations: A Case Study
NASA Technical Reports Server (NTRS)
Dutenhoffer, Chelsea; Tirona, Joseph
2013-01-01
System models are often touted as engineering tools that promote better understanding of systems, but these models are typically created during system design. The Ground Data System (GDS) team for the Dawn spacecraft took on a case study to see if benefits could be achieved by starting a model of a system already in operations. This paper focuses on the four steps the team undertook in modeling the Dawn GDS: defining a model structure, populating model elements, verifying that the model represented reality, and using the model to answer system-level questions and simplify day-to-day tasks. Throughout this paper the team outlines our thought processes and the system insights the model provided.
Tonti, Simone; Di Cataldo, Santa; Bottino, Andrea; Ficarra, Elisa
2015-03-01
The automatization of the analysis of Indirect Immunofluorescence (IIF) images is of paramount importance for the diagnosis of autoimmune diseases. This paper proposes a solution to one of the most challenging steps of this process, the segmentation of HEp-2 cells, through an adaptive marker-controlled watershed approach. Our algorithm automatically conforms the marker selection pipeline to the peculiar characteristics of the input image, hence it is able to cope with different fluorescent intensities and staining patterns without any a priori knowledge. Furthermore, it shows a reduced sensitivity to over-segmentation errors and uneven illumination, that are typical issues of IIF imaging. Copyright © 2015 Elsevier Ltd. All rights reserved.
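The adaptive marker selection is the paper's contribution, but the surrounding marker-controlled watershed machinery is standard; a generic scikit-image version, with a fixed Otsu-plus-distance-transform recipe standing in for the adaptive marker pipeline, might look like this:

```python
from scipy import ndimage
from skimage.filters import gaussian, threshold_otsu
from skimage.segmentation import watershed

def segment_cells(image):
    """Generic marker-controlled watershed for fluorescence images.

    Markers here come from a smoothed Otsu threshold and distance
    transform; the paper's contribution is replacing this fixed recipe
    with marker selection adapted to each image's fluorescent intensity
    and staining pattern.
    """
    smooth = gaussian(image.astype(float), sigma=2)
    mask = smooth > threshold_otsu(smooth)            # foreground cells
    distance = ndimage.distance_transform_edt(mask)
    peaks = distance > 0.5 * distance.max()           # crude seed regions
    markers, _ = ndimage.label(peaks)
    return watershed(-distance, markers, mask=mask)   # labeled cells
```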
Generating Models of Surgical Procedures using UMLS Concepts and Multiple Sequence Alignment
Meng, Frank; D’Avolio, Leonard W.; Chen, Andrew A.; Taira, Ricky K.; Kangarloo, Hooshang
2005-01-01
Surgical procedures can be viewed as a process composed of a sequence of steps performed on, by, or with the patient’s anatomy. This sequence is typically the pattern followed by surgeons when generating surgical report narratives for documenting surgical procedures. This paper describes a methodology for semi-automatically deriving a model of conducted surgeries, utilizing a sequence of derived Unified Medical Language System (UMLS) concepts for representing surgical procedures. A multiple sequence alignment was computed from a collection of such sequences and was used for generating the model. These models have the potential of being useful in a variety of informatics applications such as information retrieval and automatic document generation. PMID:16779094
Annotating images by mining image search results.
Wang, Xin-Jing; Zhang, Lei; Li, Xirong; Ma, Wei-Ying
2008-11-01
Although it has been studied for years by the computer vision and machine learning communities, image annotation is still far from practical. In this paper, we propose a novel attempt at model-free image annotation, which is a data-driven approach that annotates images by mining their search results. Some 2.4 million images with their surrounding text are collected from a few photo forums to support this approach. The entire process is formulated in a divide-and-conquer framework where a query keyword is provided along with the uncaptioned image to improve both effectiveness and efficiency. This is helpful when the collected data set is not dense everywhere. In this sense, our approach contains three steps: 1) the search process to discover visually and semantically similar search results, 2) the mining process to identify salient terms from textual descriptions of the search results, and 3) the annotation rejection process to filter out noisy terms yielded by Step 2. To ensure real-time annotation, two key techniques are leveraged: one is to map the high-dimensional image visual features into hash codes, and the other is to implement the system in a distributed manner, with the search and mining processes provided as Web services. As a typical result, the entire process finishes in less than 1 second. Since no training data set is required, our approach enables annotation with an unlimited vocabulary and is highly scalable and robust to outliers. Experimental results on both real Web images and a benchmark image data set show the effectiveness and efficiency of the proposed algorithm. It is also worth noting that, although the entire approach is illustrated within the divide-and-conquer framework, a query keyword is not crucial to our current implementation. We provide experimental results to prove this.
Burns, K E; Haysom, H E; Higgins, A M; Waters, N; Tahiri, R; Rushford, K; Dunstan, T; Saxby, K; Kaplan, Z; Chunilal, S; McQuilten, Z K; Wood, E M
2018-04-10
To describe the methodology to estimate the total cost of administration of a single unit of red blood cells (RBC) in adults with beta thalassaemia major in an Australian specialist haemoglobinopathy centre. Beta thalassaemia major is a genetic disorder of haemoglobin associated with multiple end-organ complications and typically requiring lifelong RBC transfusion therapy. New therapeutic agents are becoming available based on advances in understanding of the disorder and its consequences. Assessment of the true total cost of transfusion, incorporating both product and activity costs, is required in order to evaluate the benefits and costs of these new therapies. We describe the bottom-up, time-driven, activity-based costing methodology used to develop process maps to provide a step-by-step outline of the entire transfusion pathway. Detailed flowcharts for each process are described. Direct observations and timing of the process maps document all activities, resources, staff, equipment and consumables in detail. The analysis will include costs associated with performing these processes, including resources and consumables. Sensitivity analyses will be performed to determine the impact of different staffing levels, timings and probabilities associated with performing different tasks. Thirty-one process maps have been developed, with over 600 individual activities requiring multiple timings. These will be used for future detailed cost analyses. Detailed process maps using bottom-up, time-driven, activity-based costing for determining the cost of RBC transfusion in thalassaemia major have been developed. These could be adapted for wider use to understand and compare the costs and complexities of transfusion in other settings. © 2018 British Blood Transfusion Society.
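The arithmetic of time-driven activity-based costing is simple to sketch: each observed activity contributes its timed duration multiplied by the capacity cost rate of the resource performing it, summed along the process map. The activities, times, rates, and product cost below are invented for illustration only.

```python
# Time-driven activity-based costing in miniature: each observed activity
# contributes (time x cost rate of the resource performing it). All
# activities, times, and rates below are invented placeholders.

transfusion_pathway = [
    # (activity, minutes, staff role)
    ("prescription and consent",   10, "physician"),
    ("pre-transfusion sampling",   15, "nurse"),
    ("crossmatch",                 30, "lab_scientist"),
    ("unit issue and transport",   10, "lab_scientist"),
    ("bedside checks and set-up",  15, "nurse"),
    ("infusion and monitoring",   120, "nurse"),
]
cost_rate_per_min = {"physician": 2.5, "nurse": 1.0, "lab_scientist": 1.2}
product_cost = 350.0  # cost of the RBC unit itself (invented)

activity_cost = sum(t * cost_rate_per_min[role]
                    for _, t, role in transfusion_pathway)
print(f"activity cost: {activity_cost:.2f}")
print(f"total cost per unit transfused: {product_cost + activity_cost:.2f}")
```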
Hurst, W Jeffrey; Krake, Susann H; Bergmeier, Stephen C; Payne, Mark J; Miller, Kenneth B; Stuart, David A
2011-09-14
This paper reports a systematic study of the level of flavan-3-ol monomers during typical processing steps as cacao beans are dried, fermented and roasted and the results of Dutch-processing. Methods have been used that resolve the stereoisomers of epicatechin and catechin. In beans harvested from unripe and ripe cacao pods, we find only (-)-epicatechin and (+)-catechin with (-)-epicatechin being by far the predominant isomer. When beans are fermented there is a large loss of both (-)-epicatechin and (+)-catechin, but also the formation of (-)-catechin. We hypothesize that the heat of fermentation may, in part, be responsible for the formation of this enantiomer. When beans are progressively roasted at conditions described as low, medium and high roast conditions, there is a progressive loss of (-)-epicatechin and (+)-catechin and an increase in (-)-catechin with the higher roast levels. When natural and Dutch-processed cacao powders are analyzed, there is progressive loss of both (-)-epicatechin and (+)-catechin with lesser losses of (-)-catechin. We thus observe that in even lightly Dutch-processed powder, the level of (-)-catechin exceeds the level of (-)-epicatechin. The results indicate that much of the increase in the level of (-)-catechin observed during various processing steps may be the result of heat-related epimerization from (-)-epicatechin. These results are discussed with reference to the reported preferred order of absorption of (-)-epicatechin > (+)-catechin > (-)-catechin. These results are also discussed with respect to the balance that must be struck between the beneficial impact of fermentation and roasting on chocolate flavor and the healthful benefits of chocolate and cocoa powder that result in part from the flavan-3-ol monomers.
Semiautomatic Segmentation of Glioma on Mobile Devices.
Wu, Ya-Ping; Lin, Yu-Song; Wu, Wei-Guo; Yang, Cong; Gu, Jian-Qin; Bai, Yan; Wang, Mei-Yun
2017-01-01
Brain tumor segmentation is the first and the most critical step in clinical applications of radiomics. However, segmenting brain images by radiologists is labor intensive and prone to inter- and intraobserver variability. Stable and reproducible brain image segmentation algorithms are thus important for successful tumor detection in radiomics. In this paper, we propose a supervised brain image segmentation method, especially for magnetic resonance (MR) brain images with glioma. This paper uses hard edge multiplicative intrinsic component optimization to preprocess glioma medical images on the server side, and doctors can then supervise the segmentation process on mobile devices at their convenience. Since the preprocessed images have the same brightness for the same tissue voxels, they have a small data size (typically 1/10 of the original image size) and a simple structure of 4 intensity values. This observation thus allows follow-up steps to be processed on mobile devices with low bandwidth and limited computing performance. Experiments conducted on 1935 brain slices from 129 patients show that more than 30% of the samples reach 90% similarity, over 60% reach 85% similarity, and more than 80% reach 75% similarity. Comparisons with other segmentation methods also demonstrate both the efficiency and the stability of the proposed approach.
A structural model of PpoA derived from SAXS analysis: implications for substrate conversion.
Koch, Christian; Tria, Giancarlo; Fielding, Alistair J; Brodhun, Florian; Valerius, Oliver; Feussner, Kirstin; Braus, Gerhard H; Svergun, Dmitri I; Bennati, Marina; Feussner, Ivo
2013-09-01
In plants and mammals, oxylipins may be synthesized via multi-step processes that consist of dioxygenation and isomerization of the intermediately formed hydroperoxy fatty acid. These processes are typically catalyzed by two distinct enzyme classes: dioxygenases and cytochrome P450 enzymes. In ascomycetes, biosynthesis of oxylipins may proceed by a similar two-step pathway. An important difference, however, is that both enzymatic activities may be combined in a single bifunctional enzyme. These enzymes are named Psi-factor producing oxygenases (Ppo). Here, the spatial organization of the two domains of PpoA from Aspergillus nidulans was analyzed by small-angle X-ray scattering (SAXS), and the obtained data show that the enzyme exhibits a relatively flat trimeric shape. Atomic structures of the single domains were obtained by template-based structure prediction and docked into the enzyme envelope of the low-resolution structure obtained by SAXS. EPR-based distance measurements between the tyrosyl radicals formed in the activated dioxygenase domain of the enzyme supported the trimeric structure obtained from SAXS and the previous assignment of Tyr374 as the radical site in PpoA. Furthermore, two phenylalanine residues in the cytochrome P450 domain were shown to modulate the specificity of hydroperoxy fatty acid rearrangement. Copyright © 2013 Elsevier B.V. All rights reserved.
Design and optimization of reverse-transcription quantitative PCR experiments.
Tichopad, Ales; Kitchen, Rob; Riedmaier, Irmgard; Becker, Christiane; Ståhlberg, Anders; Kubista, Mikael
2009-10-01
Quantitative PCR (qPCR) is a valuable technique for accurately and reliably profiling and quantifying gene expression. Typically, samples obtained from the organism of study have to be processed via several preparative steps before qPCR. We estimated the errors of sample withdrawal and extraction, reverse transcription (RT), and qPCR that are introduced into measurements of mRNA concentrations. We performed hierarchically arranged experiments with 3 animals, 3 samples, 3 RT reactions, and 3 qPCRs and quantified the expression of several genes in solid tissue, blood, cell culture, and single cells. A nested ANOVA design was used to model the experiments, and relative and absolute errors were calculated with this model for each processing level in the hierarchical design. We found that intersubject differences became easily confounded by sample heterogeneity for single cells and solid tissue. In cell cultures and blood, the noise from the RT and qPCR steps contributed substantially to the overall error because the sampling noise was less pronounced. We recommend the use of sample replicates in preference to any other replicates when working with solid tissue, cell cultures, and single cells, and the use of RT replicates when working with blood. We show how an optimal sampling plan can be calculated for a limited budget.
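A minimal sketch of how variance components fall out of such a balanced nested design, using the expected-mean-square relations; the data are simulated and the component values are invented, not taken from the paper.

```python
# Nested variance-component estimation for a qPCR pipeline, in the spirit
# of the paper's hierarchical design (animal -> sample -> RT -> qPCR).
# Data are simulated; the "true" component values below are invented.
import numpy as np

rng = np.random.default_rng(1)
A, S, R, Q = 3, 3, 3, 3                     # animals, samples, RTs, qPCRs
sd = {"animal": 0.5, "sample": 0.4, "rt": 0.3, "qpcr": 0.2}  # true SDs (log scale)

y = (rng.normal(0, sd["animal"], (A, 1, 1, 1))
     + rng.normal(0, sd["sample"], (A, S, 1, 1))
     + rng.normal(0, sd["rt"], (A, S, R, 1))
     + rng.normal(0, sd["qpcr"], (A, S, R, Q)))

# Mean squares of a balanced nested ANOVA, built from group means
ms_qpcr   = y.var(axis=3, ddof=1).mean()
ms_rt     = Q * y.mean(axis=3).var(axis=2, ddof=1).mean()
ms_sample = R * Q * y.mean(axis=(2, 3)).var(axis=1, ddof=1).mean()

# Expected-mean-square equations give the variance components
var_qpcr   = ms_qpcr
var_rt     = (ms_rt - ms_qpcr) / Q
var_sample = (ms_sample - ms_rt) / (R * Q)
print({k: round(float(v), 3) for k, v in
       {"qpcr": var_qpcr, "rt": var_rt, "sample": var_sample}.items()})
```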
Method Developed for Improving the Thermomechanical Properties of Silicon Carbide Matrix Composites
NASA Technical Reports Server (NTRS)
Bhatt, Ramakrishna T.; DiCarlo, James A.
2004-01-01
Today, a major thrust for achieving engine components with improved thermal capability is the development of fiber-reinforced silicon-carbide (SiC) matrix composites. These materials are not only lighter and capable of higher use temperatures than state-of-the-art metallic alloys and oxide matrix composites (approx. 1100 C), but they can provide significantly better static and dynamic toughness than unreinforced silicon-based monolithic ceramics. However, for successful application in advanced engine systems, the SiC matrix composites should be able to withstand component service stresses and temperatures for the desired component lifetime. Since the high-temperature structural life of ceramic materials is typically controlled by creep-induced flaw growth, a key composite property requirement is the ability to display high creep resistance under these conditions. Also, because of the possibility of severe thermal gradients in the components, the composites should provide maximum thermal conductivity to minimize the development of thermal stresses. State-of-the-art SiC matrix composites are typically fabricated via a three-step process: (1) fabrication of a component-shaped architectural preform reinforced by high-performance fibers, (2) chemical vapor infiltration of a fiber coating material such as boron nitride (BN) into the preform, and (3) infiltration of a SiC matrix into the remaining porous areas in the preform. Generally, the highest performing composites have matrices fabricated by the CVI process, which produces a SiC matrix typically more thermally stable and denser than matrices formed by other approaches. As such, the CVI SiC matrix is able to provide better environmental protection to the coated fibers, plus provide the composite with better resistance to crack propagation. Also, the denser CVI SiC matrix should provide optimal creep resistance and thermal conductivity to the composite. However, for adequate preform infiltration, the CVI SiC matrix process typically has to be conducted at temperatures below 1100 C, which results in a SiC matrix that is fairly dense, but contains metastable atomic defects and is nonstoichiometric because of a small amount of excess silicon. Because these defects typically exist at the matrix grain boundaries, they can scatter thermal phonons and degrade matrix creep resistance by enhancing grain-boundary sliding. To eliminate these defects and improve the thermomechanical properties of ceramic composites with CVI SiC matrices, researchers at the NASA Glenn Research Center developed a high-temperature treatment process that can be used after the CVI SiC matrix is deposited into the fiber preform.
Crystallization Physics in Biomacromolecular Systems
NASA Technical Reports Server (NTRS)
Chernov, A. A.
2003-01-01
The crystals are built of molecules of protein, nucleic acid and their complexes, such as viruses, approximately 5x10^3 to 3x10^6 Da in weight and 2 to 20 nm in effective diameter. This size strongly exceeds the action range of molecular forces and makes a big difference from inorganic crystals. Intermolecular contacts form patches on the biomacromolecular surface. Each patch may occupy only a small percentage of the whole surface and vary from polymorph to polymorph of the same protein. Thus, under different conditions (pH, solution chemistry, temperature), any area on the macromolecular surface may form a contact. The crystal Young moduli, E approximately 0.1 to 0.5 GPa, are more than 10 times lower than those of inorganics and of the biomolecules themselves. Water within biocrystals (30-70%) is unable to flow unless the typical deformation time is longer than approximately 10^-5 s. This explains the discrepancy between light-scattering and static measurements of E. Nucleation and growth typically require concentrations exceeding the equilibrium ones by up to 100 times, because the new size scale results in kinetic coefficients 10 to 10^3 times lower than those needed for inorganic solution growth. All phenomena observed in the latter occur in protein crystallization and are even better studied by AFM. Crystals are typically faceted. Among the unexpected findings of general significance are: the net molecular exchange flux at kinks is much lower than that expected from supersaturation; steps with low (< approx. 10^-2) kink density follow the Gibbs-Thomson law only at very low supersaturations; and step segment growth rate may be independent of step energy. Crystal perfection is a must for biocrystallization to achieve the major goal of finding the 3D atomic structure of biomacromolecules by X-ray diffraction. Poor diffraction resolution (> 3 Angstrom) makes crystallization a bottleneck for structural biology. All defects typical of small-molecule crystals are found in biocrystals, but the defects responsible for poor resolution are not identified. Conformational changes are one of them. Biocrystallization in microgravity reportedly results in better crystals in 20% of cases. The mechanism by which lack of convection can do this is still not clear. Lower supersaturation, self-purification from preferentially trapped homologous impurities, and step bunching are viable hypotheses.
NASA Astrophysics Data System (ADS)
de Pascale, P.; Vasile, M.; Casotto, S.
The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, apart from the specific method employed (direct or indirect), require an initial guess, which deeply influences the convergence to the optimal solution. The recent developments in low-thrust propulsion have widened the perspectives of exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a broad number of design parameters and, thanks to the flexibility offered by such engines, they typically turn out to be characterized by a multi-modal domain with a consequently larger number of optimal solutions. Thus the definition of first guesses is even more challenging, particularly for a broad search over the design parameters, and it requires an extensive investigation of the domain in order to locate the largest number of optimal candidate solutions and possibly the global optimal one. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model in order to account for the beneficial effects of low-thrust propulsion on a change of inclination, resorting to a new methodology based on an inverse method. The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a properly parameterized evolution of the orbital parameters, by which the acceleration required to follow the given trajectory with respect to the constraint set is obtained simply through algebraic computation. By this method a low-thrust transfer satisfying the boundary conditions on position and velocity can be quickly assessed, with low computational effort, since no numerical propagation is required. The hybrid global optimization algorithm consists of two steps: through the evolutionary search a large number of optima, and eventually the global one, are located, while the deterministic step consists of a branching process that exhaustively partitions the domain in order to obtain an extensive characterization of such a complex space of solutions. Furthermore, the approach implements a novel direct constraint-handling technique allowing the treatment of mixed-integer nonlinear programming problems (MINLP) typical of multiple swing-by trajectories. A low-thrust transfer to Mars is studied as a test bed for the low-thrust model, presenting the main characteristics of the different shapes proposed and the features of the possible sub-arc segmentations between two planets with respect to different objective functions: minimum-time and minimum-fuel transfers. Various other test cases are also shown and further optimized, proving the effective capability of the proposed tool.
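The algebraic character of such an inverse method can be sketched as follows: once the trajectory is prescribed, the required thrust acceleration follows directly from the equation of motion, a_thrust = r_ddot + mu*r/|r|^3, with no propagation. The planar spiral parameterization below is a toy stand-in, not the paper's shape functions.

```python
# Inverse-method flavor of low-thrust design: prescribe the trajectory,
# then recover the control algebraically from the equation of motion
#   a_thrust = r_ddot + mu * r / |r|^3.
# The planar spiral used here is a toy parameterization, not the paper's.
import numpy as np

MU = 1.0                                   # gravitational parameter (canonical units)
t = np.linspace(0.0, 20.0, 2001)

# Hypothetical shape: slowly expanding spiral r(t), theta(t)
r_mag = 1.0 + 0.02 * t
theta = np.cumsum(np.sqrt(MU / r_mag**3) * np.gradient(t))  # near-Keplerian rate
r = np.stack([r_mag * np.cos(theta), r_mag * np.sin(theta)], axis=1)

# Numerical second derivative of the prescribed trajectory
v = np.gradient(r, t, axis=0)
a = np.gradient(v, t, axis=0)

# Required thrust acceleration, obtained without any propagation
a_thrust = a + MU * r / np.linalg.norm(r, axis=1, keepdims=True) ** 3
print("peak thrust acceleration:", float(np.linalg.norm(a_thrust, axis=1).max()))
```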
ERIC Educational Resources Information Center
Wilder, David A.; Atwell, Julie; Wine, Byron
2006-01-01
The effects of three levels of treatment integrity (100%, 50%, and 0%) on child compliance were evaluated in the context of the implementation of a three-step prompting procedure. Two typically developing preschool children participated in the study. After baseline data on compliance to one of three common demands were collected, a therapist…
40 CFR 86.1310-90 - Exhaust gas sampling and analytical system; diesel engines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the port entrance to the analyzer to within 90 percent of the step change. (B) 20 seconds from an instantaneous step change at the entrance to the sample probe or overflow span gas port to within 90 percent of... avoid moisture condensation. A filter pair loading of 1 mg is typically proportional to a 0.1 g/bhp-hr...
40 CFR 86.1310-90 - Exhaust gas sampling and analytical system; diesel engines.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the port entrance to the analyzer to within 90 percent of the step change. (B) 20 seconds from an instantaneous step change at the entrance to the sample probe or overflow span gas port to within 90 percent of... avoid moisture condensation. A filter pair loading of 1 mg is typically proportional to a 0.1 g/bhp-hr...
ERIC Educational Resources Information Center
Stanton, Michael
1991-01-01
Describes various ways to practice law such as private practice, corporate law, public interest law, and government law. Talks about salaries, promotion potential, workload, and typical days for lawyers. (JOW)
Dixit, R; Trivedi, P K; Nath, P; Sane, P V
1999-09-01
Chloroplast genes are typically organized into polycistronic transcription units that give rise to complex sets of mono- and oligo-cistronic overlapping RNAs through a series of processing steps. The psbB operon contains genes for the PSII (psbB, psbT, psbH) and cytochrome b6f (petB and petD) complexes, which are needed in different amounts during chloroplast biogenesis. The functional significance of gene organization in this polycistronic unit, containing information for two different complexes, is not known and is of interest. To determine the organization and expression of these complexes, studies have been carried out on crop plants by different groups, but little is known about trees. We present the nucleotide sequences of PSII genes and the RNA profiles of the genes located in the psbB operon from Populus deltoides, a tree species. Although the gene organization of this operon in P. deltoides is similar to that in other species, a few variations have been observed in the processing scheme.
Characterization of Non-Infectious Virus-Like Particle Surrogates for Viral Clearance Applications.
Johnson, Sarah; Brorson, Kurt A; Frey, Douglas D; Dhar, Arun K; Cetlin, David A
2017-09-01
Viral clearance is a critical aspect of biopharmaceutical manufacturing process validation. To determine the viral clearance efficacy of downstream chromatography and filtration steps, live viral "spiking" studies are conducted with model mammalian viruses such as minute virus of mice (MVM). However, due to biosafety considerations, spiking studies are costly and typically conducted in specialized facilities. In this work, we introduce the concept of utilizing a non-infectious MVM virus-like particle (MVM-VLP) as an economical surrogate for live MVM during process development and characterization. Through transmission electron microscopy, size exclusion chromatography with multi-angle light scattering, chromatofocusing, and a novel solute surface hydrophobicity assay, we examined and compared the size, surface charge, and hydrophobic properties of MVM and MVM-VLP. The results revealed that MVM and MVM-VLP exhibit nearly identical physicochemical properties, indicating the potential utility of MVM-VLP as an accurate and economical surrogate for live MVM during chromatography and filtration process development and characterization studies.
Electrical property of macroscopic graphene composite fibers prepared by chemical vapor deposition.
Sun, Haibin; Fu, Can; Gao, Yanli; Guo, Pengfei; Wang, Chunlei; Yang, Wenchao; Wang, Qishang; Zhang, Chongwu; Wang, Junya; Xu, Junqi
2018-07-27
Graphene fibers are promising candidates for portable and wearable electronics due to their tiny volume, flexibility and wearability. Here, we successfully synthesized macroscopic graphene composite fibers via a two-step process, i.e. first electrospinning and then chemical vapor deposition (CVD). Briefly, well-dispersed PAN nanofibers were sprayed onto the copper surface in an electrified thin liquid jet by electrospinning. Subsequently, the CVD growth process induced the formation of graphene films using PAN as a solid carbon source and copper as a catalyst. Finally, crumpled, macroscopic graphene composite fibers were obtained from the carbon nanofiber/graphene composite webs by a self-assembly process in deionized water. Temperature-dependent conduction behavior reveals that electron transport in the graphene composite fibers follows a hopping mechanism, and the typical electrical conductivity reaches 4.59 × 10^3 S m^-1. These results demonstrate that the graphene composite fibers are promising for next-generation flexible and wearable electronics.
Practical use of video imagery in nearshore oceanographic field studies
Holland, K.T.; Holman, R.A.; Lippmann, T.C.; Stanley, J.; Plant, N.
1997-01-01
An approach was developed for using video imagery to quantify, in terms of both spatial and temporal dimensions, a number of naturally occurring (nearshore) physical processes. The complete method is presented, including the derivation of the geometrical relationships relating image and ground coordinates, principles to be considered when working with video imagery and the two-step strategy for calibration of the camera model. The techniques are founded on the principles of photogrammetry, account for difficulties inherent in the use of video signals, and have been adapted to allow for flexibility of use in field studies. Examples from field experiments indicate that this approach is both accurate and applicable under the conditions typically experienced when sampling in coastal regions. Several applications of the camera model are discussed, including the measurement of nearshore fluid processes, sand bar length scales, foreshore topography, and drifter motions. Although we have applied this method to the measurement of nearshore processes and morphologic features, these same techniques are transferable to studies in other geophysical settings.
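For a locally planar scene, the image-to-ground relationship can be sketched as a homography estimated by the direct linear transform (DLT). The paper's camera model is more complete, with a two-step calibration and handling of video-specific effects; the control-point coordinates below are invented.

```python
# Image-to-ground mapping via a plane homography estimated with the
# direct linear transform (DLT). The full camera model in the paper also
# handles lens/video effects; control-point coordinates here are invented.
import numpy as np

img = np.array([[100, 120], [620, 95], [640, 400], [80, 430]], float)   # pixels
gnd = np.array([[0, 0], [50, 0], [50, 30], [0, 30]], float)             # meters

A = []
for (u, v), (x, y) in zip(img, gnd):
    A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
    A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
H = np.linalg.svd(np.array(A))[2][-1].reshape(3, 3)   # null space of A

def image_to_ground(u, v):
    w = H @ [u, v, 1.0]
    return w[:2] / w[2]

print(image_to_ground(100, 120))   # should recover roughly (0, 0)
```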
Paszko, Tadeusz; Jankowska, Monika
2018-06-18
Laboratory adsorption and degradation studies were carried out to determine the effect of time-dependent adsorption on propiconazole degradation rates in samples from three Polish Luvisols. Strong propiconazole adsorption (organic carbon-normalized adsorption coefficients, Koc, in the range of 1217-7777 mL/g) was observed in batch experiments, with a typical biphasic mechanism consisting of a fast initial step followed by a time-dependent step, which finished within 48 h in the majority of soils. The time-dependent step observed in incubation experiments was longer (duration from 5 to 23 d), and its contribution to total adsorption was from 20% to 34%. The half-lives obtained at 25 °C and 40% of the maximum water holding capacity of soil were in the range of 34.7-112.9 d in the Ap horizon and 42.3-448.8 d for subsoils. The very strong correlations between degradation rates in pore water, soil organic carbon, and soil microbial activity indicated that microbial degradation of propiconazole was most likely the only significant process responsible for the decay of this compound under aerobic conditions throughout the examined soil profiles. Modeling of the processes showed that only models coupling adsorption and degradation were able to correctly describe the experimental data. The analysis of the bioavailability factor values showed that degradation was not limited by the rate of propiconazole desorption from soil, but sorption affected the degradation rate by decreasing the compound's availability to microorganisms. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Andersen, Mie; Plaisance, Craig P.; Reuter, Karsten
2017-10-01
First-principles screening studies aimed at predicting the catalytic activity of transition metal (TM) catalysts have traditionally been based on mean-field (MF) microkinetic models, which neglect the effect of spatial correlations in the adsorbate layer. Here we critically assess the accuracy of such models for the specific case of CO methanation over stepped metals by comparing to spatially resolved kinetic Monte Carlo (kMC) simulations. We find that the typical low diffusion barriers offered by metal surfaces can be significantly increased at step sites, which results in persisting correlations in the adsorbate layer. As a consequence, MF models may overestimate the catalytic activity of TM catalysts by several orders of magnitude. The potential higher accuracy of kMC models comes at a higher computational cost, which can be especially challenging for surface reactions on metals due to a large disparity in the time scales of different processes. In order to overcome this issue, we implement and test a recently developed algorithm for achieving temporal acceleration of kMC simulations. While the algorithm overall performs quite well, we identify some challenging cases which may lead to a breakdown of acceleration algorithms and discuss possible directions for future algorithm development.
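A minimal rejection-free (Gillespie-type) kMC loop conveys the machinery being accelerated. The lattice, rate constants, and event set below are toy assumptions, with diffusion omitted entirely as the extreme slow-diffusion case in which adsorbate correlations persist.

```python
# Minimal rejection-free kinetic Monte Carlo on a 1D lattice: adsorption,
# desorption, and nearest-neighbor pair reaction. With diffusion omitted
# (the extreme slow-diffusion case), adsorbate correlations persist, which
# is exactly what a mean-field model cannot capture. Rates are toy values.
import numpy as np

rng = np.random.default_rng(2)
N = 200
occ = np.zeros(N, dtype=bool)
k_ads, k_des, k_rxn = 1.0, 0.2, 5.0   # per-site / per-pair rates (invented)

t, reacted = 0.0, 0
for _ in range(20000):
    empty = np.flatnonzero(~occ)
    filled = np.flatnonzero(occ)
    pairs = np.flatnonzero(occ[:-1] & occ[1:])       # adjacent occupied pairs
    rates = [k_ads * empty.size, k_des * filled.size, k_rxn * pairs.size]
    R = sum(rates)
    t += rng.exponential(1.0 / R)                    # Gillespie time step
    u = rng.uniform(0, R)
    if u < rates[0]:                                 # adsorb on a random empty site
        occ[rng.choice(empty)] = True
    elif u < rates[0] + rates[1]:                    # desorb from a random filled site
        occ[rng.choice(filled)] = False
    else:                                            # react a random adjacent pair
        i = rng.choice(pairs)
        occ[i] = occ[i + 1] = False
        reacted += 1

print(f"coverage {occ.mean():.2f}, turnover rate {reacted / t:.2f}")
```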
Dual salt precipitation for the recovery of a recombinant protein from Escherichia coli.
Balasundaram, Bangaru; Sachdeva, Soam; Bracewell, Daniel G
2011-01-01
When considering worldwide demand for biopharmaceuticals, it becomes necessary to consider alternative process strategies to improve the economics of manufacturing such molecules. To address this issue, the current study investigates precipitation to selectively isolate the product or remove contaminants and thus assist the initial purification of an intracellular protein. The hypothesis tested was that the combination of two or more precipitating agents would alter the solubility profile of the product through synergistic or antagonistic effects. This principle was investigated through several combinations of ammonium sulfate and sodium citrate at different ratios. A synergistic effect, mediated by a known electrostatic interaction of citrate ions with Fab' in addition to the typical salting-out effects, was observed. On the basis of the results of the solubility studies, a two-step primary recovery route was investigated. In the first step, termed conditioning, addition of 0.8 M ammonium sulfate post-homogenization and before clarification extracted 30% additional product. Clarification performance, measured using a scale-down disc stack centrifugation mimic, showed a four-fold reduction in centrifuge size requirements. Dual salt precipitation in the second step resulted in >98% recovery of Fab' while simultaneously removing 36% of the contaminant proteins. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
A Semi-Empirical Two Step Carbon Corrosion Reaction Model in PEM Fuel Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Alan; Colbow, Vesna; Harvey, David
2013-01-01
The cathode CL of a polymer electrolyte membrane fuel cell (PEMFC) was exposed to high potentials, 1.0 to 1.4 V versus a reversible hydrogen electrode (RHE), that are typically encountered during start-up/shut-down operation. While both platinum dissolution and carbon corrosion occurred, the carbon corrosion effects were isolated and modeled. The presented model separates the carbon corrosion process into two reaction steps: (1) oxidation of the carbon surface to carbon-oxygen groups, and (2) further corrosion of the oxidized surface to carbon dioxide/monoxide. To oxidize and corrode the cathode catalyst carbon support, the CL was subjected to an accelerated stress test that cycled the potential from 0.6 VRHE to an upper potential limit (UPL) ranging from 0.9 to 1.4 VRHE at varying dwell times. The reaction rate constants and specific capacitances of carbon and platinum were fitted by evaluating the double-layer capacitance (Cdl) trends. Carbon surface oxidation increased the Cdl, owing to the higher specific capacitance of carbon surfaces bearing carbon-oxygen groups, while the second corrosion reaction decreased the Cdl due to loss of overall carbon surface area. The first oxidation step differed between carbon types, while both reaction rate constants were found to depend on UPL, temperature, and gas relative humidity.
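The two-step structure of such a model can be sketched as a pair of coupled rate equations: step 1 converts clean carbon surface to oxidized surface, step 2 removes oxidized surface as CO/CO2, and the double-layer capacitance mixes the specific capacitances of the two surface states. All parameter values below are invented placeholders, not the fitted constants.

```python
# Skeleton of a two-step carbon corrosion model as coupled rate equations:
# step 1 oxidizes clean carbon surface S to surface oxides X, step 2
# corrodes X away as CO/CO2. The double-layer capacitance mixes the
# specific capacitances of the two states. Parameter values are invented.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 5e-3, 1e-3          # potential/RH-dependent in the real model
c_clean, c_ox = 1.0, 1.8     # relative specific capacitances

def rhs(t, y):
    S, X = y                 # clean and oxidized surface-area fractions
    return [-k1 * S, k1 * S - k2 * X]

sol = solve_ivp(rhs, (0, 2000), [1.0, 0.0], dense_output=True)
tt = np.linspace(0, 2000, 5)
S, X = sol.sol(tt)
Cdl = c_clean * S + c_ox * X  # rises with oxidation, falls as area is lost
print(np.round(Cdl, 3))
```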
On numerical reconstructions of lithographic masks in DUV scatterometry
NASA Astrophysics Data System (ADS)
Henn, M.-A.; Model, R.; Bär, M.; Wurm, M.; Bodermann, B.; Rathsfeld, A.; Gross, H.
2009-06-01
The solution of the inverse problem in scatterometry employing deep ultraviolet (DUV) light is discussed, i.e. we consider the determination of periodic surface structures from light diffraction patterns. With decreasing dimensions of the structures on photolithography masks and wafers, increasing demands on the required metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line structures in order to determine the sidewall angles, heights, and critical dimensions (CDs), i.e., the top and bottom widths. The latter quantities are typically in the range of tens of nanometers. All these angles, heights, and CDs are the fundamental figures for evaluating the quality of the manufacturing process. To measure those quantities a DUV scatterometer is used, which typically operates at a wavelength of 193 nm. The diffraction of light by periodic 2D structures can be simulated using the finite element method for the Helmholtz equation. The corresponding inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Fixing the class of gratings and the set of measurements, this inverse problem reduces to a finite-dimensional nonlinear operator equation. Reformulating the problem as an optimization problem allows a vast number of numerical schemes to be applied. Our tool is a sequential quadratic programming (SQP) variant of the Gauss-Newton iteration. In a first step, using a simulated data set, we investigate how accurately the geometrical parameters of an EUV mask can be reconstructed using light in the DUV range. We then determine the expected uncertainties of the geometric parameters by reconstructing from simulated input data perturbed by noise representing the estimated uncertainties of the input data. In the last step, we use measurement data obtained from the new DUV scatterometer at PTB to determine the geometrical parameters of a typical EUV mask with our reconstruction algorithm. The results are compared to the outcome of investigations with two alternative methods, namely EUV scatterometry and SEM measurements.
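The core of such a reconstruction is nonlinear least squares, minimizing ||F(p) - y||^2 over the grating parameters p, and a plain Gauss-Newton iteration can be sketched as below. The forward model here is a toy stand-in for the FEM Helmholtz solver, and the paper's SQP variant additionally handles constraints and damping.

```python
# Gauss-Newton iteration for a scatterometry-style inverse problem:
# minimize ||F(p) - y||^2 over grating parameters p. The forward model F
# is a placeholder; in practice it is the FEM solution of the Helmholtz
# equation for the grating. No damping/constraints (the paper's SQP adds those).
import numpy as np

def F(p):                                   # toy forward model (invented)
    h, w = p                                # "height" and "CD"
    angles = np.linspace(0.1, 1.0, 12)
    return np.cos(h * angles) * np.exp(-w * angles)

def jacobian(F, p, eps=1e-6):
    J = np.empty((F(p).size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p); dp[k] = eps
        J[:, k] = (F(p + dp) - F(p - dp)) / (2 * eps)  # central differences
    return J

p_true = np.array([2.0, 0.5])
y = F(p_true)                               # noiseless synthetic data
p = np.array([1.5, 0.8])                    # initial guess
for _ in range(10):
    r = F(p) - y
    J = jacobian(F, p)
    p -= np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton step
print(p)                                    # converges toward p_true
```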
Saxena, Udit; Allan, Chris; Allen, Prudence
2017-06-01
Previous studies have suggested elevated reflex thresholds in children with auditory processing disorders (APDs). However, some aspects of the child's ear, such as ear canal volume and static compliance of the middle ear, could affect the measurement of reflex thresholds and thus impact their interpretation. Sound levels used to elicit reflexes in a child's ear may be higher than predicted by calibration in a standard 2-cc coupler, and lower static compliance could make visualization of very small changes in impedance at threshold difficult. It is therefore important to evaluate threshold data with consideration of differences between children and adults. A set of studies was conducted. The first compared reflex thresholds obtained using standard clinical procedures in children with suspected APD to those of typically developing children and adults, to test the replicability of previous studies. The second study examined the impact of ear canal volume on estimates of reflex thresholds by applying real-ear corrections. Lastly, the relationship between static compliance and reflex threshold estimates was explored. The research is a set of case-control studies with a repeated-measures design. The first study included data from 20 normal-hearing adults, 28 typically developing children, and 66 children suspected of having an APD. The second study included 28 normal-hearing adults and 30 typically developing children. In the first study, crossed and uncrossed reflex thresholds were measured with a 5-dB step size. Reflex thresholds were analyzed using repeated-measures analysis of variance (RM-ANOVA). In the second study, uncrossed reflex thresholds, real-ear correction, ear canal volume, and static compliance were measured. Reflex thresholds were measured using a 1-dB step size. The effect of real-ear correction and static compliance on reflex threshold was examined using RM-ANOVA and the Pearson correlation coefficient, respectively. Study 1 replicated previous studies showing elevated reflex thresholds in many children with suspected APD when compared to data from adults using standard clinical procedures, especially in the crossed condition. The thresholds measured in children with suspected APD tended to be higher than those measured in the typically developing children. There were no significant differences between the typically developing children and adults. However, when real-ear calibrated stimulus levels were used, it was found that children's thresholds were elicited at higher levels than in the adults. A significant relationship between reflex thresholds and static compliance was found in the adult data, showing a trend toward higher thresholds in ears with lower static compliance, but no such relationship was found in the data from the children. This study suggests that reflex measures in children should be adjusted for real-ear-to-coupler differences before interpretation. The data in children with suspected APD support previous studies suggesting abnormalities in reflex thresholds. The absence in children of the correlation between threshold and static compliance estimates observed in the adults may suggest a nonmechanical explanation for age-related and clinically related effects. American Academy of Audiology
Real-Time Simulation of the X-33 Aerospace Engine
NASA Technical Reports Server (NTRS)
Aguilar, Robert
1999-01-01
This paper discusses the development and performance of the X-33 Aerospike Engine Real-Time Model. This model was developed for the purposes of control law development, six-degree-of-freedom trajectory analysis, vehicle system integration testing, and hardware-in-the-loop controller verification. The Real-Time Model uses time-step marching solution of non-linear differential equations representing the physical processes involved in the operation of a liquid propellant rocket engine, albeit in a simplified form. These processes include heat transfer, fluid dynamics, combustion, and turbomachine performance. Two engine models are typically employed in order to accurately model maneuvering and the powerpack-out condition, where the power section of one engine is used to supply propellants to both engines if one engine malfunctions. The X-33 Real-Time Model has been compared to actual hot fire test data and found to be in good agreement.
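The time-step marching pattern can be sketched with a toy first-order engine-like ODE advanced by explicit Euler at a fixed frame rate; the dynamics, time constant, and command profile below are invented and bear no relation to the actual X-33 model.

```python
# Fixed-step time marching of a toy engine-like nonlinear ODE, the basic
# pattern behind real-time models: at every frame, advance the states by
# one explicit step. The dynamics below are a simple first-order pressure
# lag with a throttle command, not the actual X-33 model.
import math

DT = 0.001                 # 1 ms frame, typical of hardware-in-the-loop rates
tau = 0.05                 # chamber fill time constant (invented)

def p_cmd(t):              # throttle command profile (invented)
    return 1.0 if t < 0.5 else 0.6

p = 0.0                    # normalized chamber pressure state
for n in range(1000):      # 1 s of simulated time
    t = n * DT
    dpdt = (p_cmd(t) - p) / tau * math.sqrt(max(p_cmd(t), 1e-6))  # mild nonlinearity
    p += DT * dpdt         # explicit Euler update, cheap enough for real time
print(f"p(1.0 s) = {p:.3f}")
```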
Confessions of a robot lobotomist
NASA Technical Reports Server (NTRS)
Gottshall, R. Marc
1994-01-01
Since their inception, numerically controlled (NC) machining methods have been used throughout the aerospace industry to mill, drill, and turn complex shapes by sequentially stepping through motion programs. However, the recent demand for more precision, faster feeds, exotic sensors, and branching execution has existing computer numerical control (CNC) and distributed numerical control (DNC) systems running at maximum controller capacity. Typical disadvantages of current CNCs include fixed memory capacities, limited communication ports, and the use of multiple control languages. The need to tailor CNCs to meet specific applications, whether it be expanded memory, additional communications, or integrated vision, often requires replacing the original controller supplied with the commercial machine tool with a more powerful and capable system. This paper briefly describes the process and equipment requirements for new controllers and their evolutionary implementation in an aerospace environment. The process of controller retrofit with currently available machines is examined, along with several case studies and their computational and architectural implications.
Wojtusik, Mateusz; Zurita, Mauricio; Villar, Juan C; Ladero, Miguel; Garcia-Ochoa, Felix
2016-09-01
The effect of fluid dynamic conditions on the enzymatic hydrolysis of acid-pretreated corn stover (PCS) has been assessed. Runs were performed in stirred tanks at several stirrer speeds, under typical conditions of temperature (50°C), pH (4.8) and solid charge (20% w/w). A complex mixture of cellulases, xylanases and mannanases was employed for PCS saccharification. At low stirring speeds (<150 rpm), estimated mass transfer coefficients and rates, when compared to chemical hydrolysis rates, clearly show that mass transfer is slow, this phenomenon being the controlling step of the overall process rate. However, from stirrer speeds of 300 rpm upwards, the overall process rate is controlled by the hydrolysis reactions. The ratio between mass transfer and overall chemical reaction rates changes with time, depending on the conditions of each run. Copyright © 2016 Elsevier Ltd. All rights reserved.
Small molecule compound logistics outsourcing--going beyond the "thought experiment".
Ramsay, Devon L; Kwasnoski, Joseph D; Caldwell, Gary W
2012-01-01
Increasing pressure on the pharmaceutical industry to reduce cost and focus internal resources on "high value" activities is driving a trend to outsource traditionally "in-house" drug discovery activities. Compound collections are typically viewed as drug discovery's "crown jewels"; however, in late 2007, Johnson & Johnson Pharmaceutical Research & Development (J&JPRD) took a bold step by moving its entire North American compound inventory and processing capability to an external third-party vendor. The authors discuss the combination model implemented: local compound logistics site support with an outsourced centralized processing center. Some of the lessons learned over the past five years were predictable, while others were unexpected. Substantial cost savings, improved local service response, and a flexible platform to adjust to changing business needs resulted. Continued sustainable success relies heavily upon maintaining internal headcount dedicated to vendor management, an open collaboration approach, and a solid information technology infrastructure with complete transparency and visibility.
Knowledge Translation for Cardiovascular Disease Research and Management in Japan.
Shommu, Nusrat S; Turin, Tanvir C
2017-09-01
Knowledge translation is an essential and emerging arena in healthcare research. It is the process of aiding the application of research knowledge into clinical practice or policymaking. Individuals at all levels of the health care system, including patients, healthcare professionals, and policymakers, are affected by the gaps that exist between research evidence and practice; the process of knowledge translation plays a role in bridging these gaps and incorporating high-quality clinical research into decision-making. Cardiovascular disease (CVD) management is a crucial area of healthcare where information gaps are known to exist. Although Japan has one of the lowest risks and mortality rates from CVDs, an increasing trend of cardiovascular incidence and changes in the risk factor conditions have been observed in recent years. This article provides an overview of knowledge translation and its importance in the cardiovascular health of the Japanese population, and describes the key steps of a typical knowledge translation strategy.
Convergence and Extrusion Are Required for Normal Fusion of the Mammalian Secondary Palate
Kim, Seungil; Lewis, Ace E.; Singh, Vivek; Ma, Xuefei; Adelstein, Robert; Bush, Jeffrey O.
2015-01-01
The fusion of two distinct prominences into one continuous structure is common during development and typically requires integration of two epithelia and subsequent removal of that intervening epithelium. Using confocal live imaging, we directly observed the cellular processes underlying tissue fusion, using the secondary palatal shelves as a model. We find that convergence of a multi-layered epithelium into a single-layer epithelium is an essential early step, driven by cell intercalation and concurrent with orthogonal cell displacement and epithelial cell extrusion. Functional studies in mice indicate that this process requires an actomyosin contractility pathway involving Rho kinase (ROCK) and myosin light chain kinase (MLCK), culminating in the activation of non-muscle myosin IIA (NMIIA). Together, these data indicate that actomyosin contractility drives cell intercalation and cell extrusion during palate fusion and suggest a general mechanism for tissue fusion in development. PMID:25848986
Early phase drug discovery: cheminformatics and computational techniques in identifying lead series.
Duffy, Bryan C; Zhu, Lei; Decornez, Hélène; Kitchen, Douglas B
2012-09-15
Early drug discovery processes rely on hit finding procedures followed by extensive experimental confirmation in order to select high priority hit series which then undergo further scrutiny in hit-to-lead studies. The experimental cost and the risk associated with poor selection of lead series can be greatly reduced by the use of many different computational and cheminformatic techniques to sort and prioritize compounds. We describe the steps in typical hit identification and hit-to-lead programs and then describe how cheminformatic analysis assists this process. In particular, scaffold analysis, clustering and property calculations assist in the design of high-throughput screening libraries, the early analysis of hits and then organizing compounds into series for their progression from hits to leads. Additionally, these computational tools can be used in virtual screening to design hit-finding libraries and as procedures to help with early SAR exploration. Copyright © 2012 Elsevier Ltd. All rights reserved.
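As one concrete example of the clustering step, hits can be grouped by Tanimoto similarity of Morgan fingerprints using Butina clustering in RDKit. This is a common choice rather than the specific toolchain of the paper, and the SMILES below are arbitrary examples.

```python
# One common cheminformatic step in hit triage: cluster screening hits by
# Tanimoto similarity of Morgan fingerprints (Butina clustering). A sketch
# using RDKit; the SMILES below are arbitrary examples.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.ML.Cluster import Butina

smiles = ["c1ccccc1O", "c1ccccc1N", "c1ccccc1C(=O)O",
          "CCCCCC", "CCCCCCC", "CCOCC"]
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=1024) for m in mols]

# Flattened lower-triangle distance matrix, as Butina.ClusterData expects
dists = []
for i in range(1, len(fps)):
    sims = DataStructs.BulkTanimotoSimilarity(fps[i], fps[:i])
    dists.extend(1.0 - s for s in sims)

clusters = Butina.ClusterData(dists, len(fps), 0.6, isDistData=True)
for c in clusters:
    print([smiles[i] for i in c])   # candidate series grouped by similarity
```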
Development of processes for the production of low cost silicon dendritic web for solar cells
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Skutch, M. E.; Driggers, J. M.; Hill, F. E.
1980-01-01
High area output rates and continuous, automated growth are two key technical requirements for the growth of low-cost silicon ribbons for solar cells. By means of computer-aided furnace design, silicon dendritic web output rates as high as 27 sq cm/min have been achieved, a value in excess of that projected to meet a $0.50 per peak watt solar array manufacturing cost. The feasibility of simultaneous web growth while the melt is replenished with pelletized silicon has also been demonstrated. This step is an important precursor to the development of an automated growth system. Solar cells made on the replenished material were just as efficient as devices fabricated on typical webs grown without replenishment. Moreover, web cells made on a less-refined, pelletized polycrystalline silicon synthesized by the Battelle process yielded efficiencies up to 13% (AM1).
Faster search by lackadaisical quantum walk
NASA Astrophysics Data System (ADS)
Wong, Thomas G.
2018-03-01
In the typical model, a discrete-time coined quantum walk searching the 2D grid for a marked vertex achieves a success probability of O(1/log N) in O(√(N log N)) steps, which with amplitude amplification yields an overall runtime of O(√N log N). We show that making the quantum walk lackadaisical or lazy by adding a self-loop of weight 4/N to each vertex speeds up the search, causing the success probability to reach a constant near 1 in O(√(N log N)) steps, thus yielding an O(√(log N)) improvement over the typical, loopless algorithm. This improved runtime matches the best known quantum algorithms for this search problem. Our results are based on numerical simulations, since the algorithm is not an instance of the abstract search algorithm.
Design of a reliable and operational landslide early warning system at regional scale
NASA Astrophysics Data System (ADS)
Calvello, Michele; Piciullo, Luca; Gariano, Stefano Luigi; Melillo, Massimo; Brunetti, Maria Teresa; Peruccacci, Silvia; Guzzetti, Fausto
2017-04-01
Landslide early warning systems at regional scale are used to warn authorities, civil protection personnel and the population about the occurrence of rainfall-induced landslides over wide areas, typically through the prediction and measurement of meteorological variables. A warning model for these systems must include a regional correlation law and a decision algorithm. A regional correlation law can be defined as a functional relationship between rainfall and landslides; it is typically based on thresholds of rainfall indicators (e.g., cumulated rainfall, rainfall duration) related to different exceedance probabilities of landslide occurrence. A decision algorithm can be defined as a set of assumptions and procedures linking rainfall thresholds to warning levels. The design and the employment of an operational and reliable early warning system for rainfall-induced landslides at regional scale depend on the identification of a reliable correlation law as well as on the definition of a suitable decision algorithm. Herein, a five-step process chain addressing both issues and based on rainfall thresholds is proposed; the procedure is tested in a landslide-prone area of the Campania region in southern Italy. To this purpose, a database of 96 shallow landslides triggered by rainfall in the period 2003-2010 and rainfall data gathered from 58 rain gauges are used. First, a set of rainfall thresholds are defined applying a frequentist method to reconstructed rainfall conditions triggering landslides in the test area. In the second step, several thresholds at different exceedance probabilities are evaluated, and different percentile combinations are selected for the activation of three warning levels. Subsequently, within steps three and four, the issuing of warning levels is based on the comparison, over time and for each combination, between the measured rainfall and the pre-defined warning level thresholds. Finally, the optimal percentile combination to be employed in the regional early warning system is selected evaluating the model performance in terms of success and error indicators by means of the "event, duration matrix, performance" (EDuMaP) method.
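The frequentist threshold step can be sketched as fitting a power law E = alpha * D^beta to the triggering (duration, cumulated rainfall) pairs in log space and shifting its intercept to selected exceedance percentiles; the data below are synthetic stand-ins for the reconstructed rainfall conditions.

```python
# Frequentist rainfall thresholds in miniature: fit log E = log(alpha) +
# beta * log D to landslide-triggering (duration, cumulated rainfall) pairs,
# then shift the intercept so that a chosen percentage of events lies below
# the curve. Data are synthetic stand-ins for the reconstructed conditions.
import numpy as np

rng = np.random.default_rng(3)
D = rng.uniform(1, 100, 96)                     # rainfall duration (h)
E = 10 * D**0.4 * rng.lognormal(0, 0.35, 96)    # cumulated rainfall (mm)

logD, logE = np.log10(D), np.log10(E)
beta, intercept = np.polyfit(logD, logE, 1)     # least-squares fit in log space

residuals = logE - (intercept + beta * logD)
for prob in (5, 20, 50):                        # exceedance-probability thresholds
    shift = np.percentile(residuals, prob)
    alpha = 10 ** (intercept + shift)
    print(f"T{prob}: E = {alpha:.2f} * D^{beta:.2f}")
```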
Fast and flexible 3D object recognition solutions for machine vision applications
NASA Astrophysics Data System (ADS)
Effenberger, Ira; Kühnle, Jens; Verl, Alexander
2013-03-01
In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
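The best-fitting of a geometric primitive can be sketched for the simplest case, a total-least-squares plane fit via SVD; real scan data would wrap this in a RANSAC-style loop to reject points belonging to other surfaces, and the point cloud below is synthetic.

```python
# Primitive fitting at the core of the recognizer, in its simplest form:
# a total-least-squares plane fit to 3D points via SVD of the centered
# point cloud (smallest singular vector = plane normal). Real bin-picking
# data would add a RANSAC loop to reject points from other surfaces.
import numpy as np

rng = np.random.default_rng(4)
# Synthetic scan of the plane z = 0.2x - 0.1y + 5 with sensor noise
x, y = rng.uniform(-1, 1, (2, 500))
pts = np.stack([x, y, 0.2 * x - 0.1 * y + 5], axis=1)
pts += 0.002 * rng.standard_normal(pts.shape)

centroid = pts.mean(axis=0)
_, _, Vt = np.linalg.svd(pts - centroid)
normal = Vt[-1]                        # direction of least variance
d = -normal @ centroid                 # plane equation: normal . p + d = 0
print("normal:", np.round(normal, 3), " offset:", round(float(d), 3))
```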
Fault Diagnosis for Rotating Machinery: A Method based on Image Processing
Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie
2016-01-01
Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing subsequent computing resources, t-distributed stochastic neighbor embedding (t-SNE) is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed method based on image processing achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery. PMID:27711246
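The final identification stage, a probabilistic neural network (PNN), is essentially a Parzen-window classifier, which is easy to sketch: each class is scored by the mean Gaussian kernel between the test vector and that class's training vectors. The feature vectors and class names below are invented stand-ins for the reduced SURF descriptors.

```python
# A probabilistic neural network (PNN) is essentially a Parzen-window
# classifier: each class is scored by the mean Gaussian kernel between the
# test vector and that class's training vectors. Features here are random
# stand-ins for the dimensionality-reduced SURF descriptors.
import numpy as np

rng = np.random.default_rng(5)
# Synthetic 3-D "reduced" feature vectors for two invented fault classes
train = {"bearing_wear": rng.normal(0, 1, (40, 3)),
         "impeller_fault": rng.normal(2, 1, (40, 3))}

def pnn_classify(x, train, sigma=0.8):
    scores = {}
    for label, X in train.items():
        d2 = ((X - x) ** 2).sum(axis=1)               # squared distances
        scores[label] = np.exp(-d2 / (2 * sigma**2)).mean()
    return max(scores, key=scores.get), scores

label, scores = pnn_classify(np.array([1.9, 2.1, 1.8]), train)
print(label, {k: round(float(v), 4) for k, v in scores.items()})
```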
Applying the Theory of Constraints to a Base Civil Engineering Operations Branch
1991-09-01
[Front-matter excerpt: the list of figures includes 1. Typical Work Order Processing, 2. Typical Job Order Processing, and 3. Typical Simplified In-Service Work Plan; Figure 1 diagrams work order processing flowing from the customer request through the Service Planning Unit, Production Control Center, Material Control, and Scheduling to the CE Shops.]
Greenspoon, Susan A; Ban, Jeffrey D; Sykes, Karen; Ballard, Elizabeth J; Edler, Shelley S; Baisden, Melissa; Covington, Brian L
2004-01-01
Robotic systems are commonly utilized for the extraction of database samples. However, the application of robotic extraction to forensic casework samples is a more daunting task. Such a system must be versatile enough to accommodate a wide range of samples that may contain greatly varying amounts of DNA, but it must also pose no more risk of contamination than the manual DNA extraction methods. This study demonstrates that the BioMek 2000 Laboratory Automation Workstation, used in combination with the DNA IQ System, is versatile enough to accommodate the wide range of samples typically encountered by a crime laboratory. The use of a silica-coated paramagnetic resin, as with the DNA IQ System, facilitates the adaptation of an open-well, hands-off robotic system to the extraction of casework samples, since no filtration or centrifugation steps are needed. Moreover, the DNA remains tightly coupled to the silica-coated paramagnetic resin for the entire process until the elution step. A short pre-extraction incubation step is necessary prior to loading samples onto the robot, and it is at this step that most modifications are made to accommodate the different sample types and substrates commonly encountered with forensic evidentiary samples. Sexual assault (mixed stain) samples, cigarette butts, blood stains, buccal swabs, and various tissue samples were successfully extracted with the BioMek 2000 Laboratory Automation Workstation and the DNA IQ System, with no evidence of contamination throughout the extensive validation studies reported here.
Liu, Benmei; Yu, Mandi; Graubard, Barry I; Troiano, Richard P; Schenker, Nathaniel
2016-01-01
The Physical Activity Monitor (PAM) component was introduced into the 2003-2004 National Health and Nutrition Examination Survey (NHANES) to collect objective information on physical activity, including both movement intensity counts and ambulatory steps. Due to an error in the accelerometer device initialization process, the steps data were missing for all participants in several primary sampling units (PSUs), typically a single county or group of contiguous counties, who had intensity count data from their accelerometers. To avoid potential bias and loss of efficiency in estimation and inference involving the steps data, we considered methods to accurately impute the missing values for steps collected in the 2003-2004 NHANES. The objective was to develop an efficient imputation method that minimized model-based assumptions. We adopted a multiple imputation approach based on Additive Regression, Bootstrapping and Predictive mean matching (ARBP) methods. This method fits alternative conditional expectation (ace) models, which use an automated procedure to estimate optimal transformations for both the predictor and response variables. This paper describes the approaches used in this imputation and evaluates the methods by comparing the distributions of the original and the imputed data. A simulation study using the observed data is also conducted as part of the model diagnostics. Finally, some real-data analyses are performed to compare results before and after imputation. PMID:27488606
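The bootstrap-plus-predictive-mean-matching core of the ARBP approach can be illustrated with a minimal sketch. Synthetic data are used; the actual NHANES imputation used ace-model transformations and survey-design features not reproduced here.

```python
# Minimal sketch of one multiple-imputation draw via bootstrap + predictive
# mean matching (PMM). Illustrative only; ace transformations and survey
# design features from the paper are omitted.
import numpy as np
from sklearn.linear_model import LinearRegression

def pmm_impute(X, y, missing, k=5, rng=np.random.default_rng(0)):
    """Impute y[missing] by matching predicted means to observed donors."""
    obs = ~missing
    boot = rng.choice(np.flatnonzero(obs), obs.sum(), replace=True)  # bootstrap step
    model = LinearRegression().fit(X[boot], y[boot])
    pred_obs, pred_mis = model.predict(X[obs]), model.predict(X[missing])
    y_out, donors = y.copy(), y[obs]
    for i, p in zip(np.flatnonzero(missing), pred_mis):
        nearest = np.argsort(np.abs(pred_obs - p))[:k]   # k closest predicted means
        y_out[i] = donors[rng.choice(nearest)]           # draw an observed donor value
    return y_out

# Example: impute missing daily steps from intensity counts (synthetic data).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 1))                 # e.g. log intensity counts
y = 8000 + 1500 * X[:, 0] + rng.normal(0, 500, 500)
miss = rng.random(500) < 0.2
y_obs = y.copy(); y_obs[miss] = np.nan
imputed = pmm_impute(X, y_obs, miss)
print(np.nanmean(y), imputed.mean())
```

Repeating the draw with different random seeds yields the multiple imputations whose between-draw variability feeds the usual combining rules.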
Xue, Runmiao; Donovan, Ariel; Zhang, Haiting; Ma, Yinfa; Adams, Craig; Yang, John; Hua, Bin; Inniss, Enos; Eichholz, Todd; Shi, Honglan
2018-02-01
When adding sufficient chlorine to achieve breakpoint chlorination in source water containing high concentrations of ammonia during drinking water treatment, high concentrations of disinfection by-products (DBPs) may form. If N-nitrosamine precursors are present, highly toxic N-nitrosamines, primarily N-nitrosodimethylamine (NDMA), may also form. Removing these precursors before disinfection should be a more effective way to minimize the formation of these DBPs. In this study, zeolites and activated carbon were examined for ammonia and N-nitrosamine precursor removal when incorporated into drinking water treatment processes. The test results indicate that Mordenite zeolite can remove ammonia and five of seven N-nitrosamine precursors efficiently in single-step adsorption tests. The practical applicability was evaluated by simulating typical drinking water treatment processes using a six-gang stirring system. The Mordenite zeolite was applied at the steps of lime softening, alum coagulation, and alum coagulation with powdered activated carbon (PAC) sorption. While the lime softening process resulted in poor zeolite performance, alum coagulation did not impact ammonia and N-nitrosamine precursor removal. During alum coagulation, more than 67% of ammonia and 70%-100% of N-nitrosamine precursors were removed by Mordenite zeolite (except 3-(dimethylaminomethyl)indole (DMAI) and 4-dimethylaminoantipyrine (DMAP)). PAC effectively removed DMAI and DMAP when added during alum coagulation. A combination of the selected zeolite and PAC efficiently removed ammonia and all seven tested N-nitrosamine precursors (dimethylamine (DMA), ethylmethylamine (EMA), diethylamine (DEA), dipropylamine (DPA), trimethylamine (TMA), DMAP, and DMAI) during the alum coagulation process. Copyright © 2017. Published by Elsevier B.V.
Press-hardening of zinc coated steel - characterization of a new material for a new process
NASA Astrophysics Data System (ADS)
Kurz, T.; Larour, P.; Lackner, J.; Steck, T.; Jesner, G.
2016-11-01
Press-hardening of zinc-coated PHS was limited to the indirect process until a pre-cooling step was introduced before hot forming to prevent liquid metal embrittlement (LME). Although this is only a minor change to the process itself, it not only eliminates LME but also increases the demands on the base material, especially in terms of hardenability and phase transformations at temperatures below 700 °C. This paper deals with the characterization of a modified zinc-coated material for press-hardening with pre-cooling that assures a robust process. The pre-cooling step, and especially the transfer of the blank into the hot-forming die, is more demanding than the standard 22MnB5 can withstand while still ensuring full hardenability. The transformation behavior of the modified material is therefore shown in CCT and TTT diagrams. Of equal importance are the changed hot-forming temperature and the flow curves for material at lower temperatures than typically used in direct hot forming. The resulting mechanical properties after hardening, from tensile and bending tests, are shown in detail. Finally, results from side impact crash tests and correlations of the findings with mechanical properties such as fracture elongation, tensile strength, VDA238 bending angle at maximum force, and post-uniform bending slope are given. Fracture elongation is shown to be of little help for damage prediction in side impact crash. Tensile strength and VDA bending properties, however, enable accurate prediction of the final damage behavior of the PHS in a bending-dominated side impact load case.
Waldner, M H; Halter, R; Sigg, A; Brosch, B; Gehrmann, H J; Keunecke, M
2013-02-01
Traditionally, EfW (Energy from Waste) plants apply a reciprocating grate to combust waste fuel. An integrated steam generator recovers the heat of combustion and converts it to steam for use in a steam turbine/generator set. This is followed by an array of flue gas cleaning technologies to meet regulatory limitations. Modern combustion applies a two-step method using primary air to fuel the combustion process on the grate. This generates a complex mixture of pyrolysis gases, combustion gases and unused combustion air. The post-combustion step in the first pass of the boiler above the grate is intended to "clean up" this mixture by oxidizing unburned gases with secondary air. This paper describes modifications to the combustion process that minimize exhaust gas volumes and the generation of noxious gases, thereby improving the overall thermal efficiency of the EfW plant. The resulting process can be coupled with an innovative SNCR (Selective Non-Catalytic Reduction) technology to form a clean and efficient solid waste combustion system. Measurements immediately above the grate show that gas compositions along the grate vary from 10% CO, 5% H2 and 0% O2 to essentially unused "pure" air, in good agreement with results from a mathematical model. Introducing these diverse gas compositions to the post-combustion process would overwhelm its ability to process all gas fractions optimally. Inserting an intermediate step aimed at homogenizing the mixture above the grate has been shown to significantly improve the quality of combustion, allowing for optimized process parameters. These measures also resulted in reduced formation of NOx (nitrogen oxides) due to the lower oxygen level at which the combustion process was run (2.6 vol% O2,wet instead of 6.0 vol% O2,wet). This reduction establishes optimal conditions for the DyNOR™ (Dynamic NOx Reduction) process. This innovative SNCR technology is adapted to situations typically encountered in solid fuel combustion. DyNOR™ measures temperature in small furnace segments and delivers the reducing reagent to the exact location where it is most effective. The DyNOR™ distributor reacts precisely and dynamically to rapid changes in combustion conditions, resulting in very low NOx emissions from the stack. Copyright © 2012 Elsevier Ltd. All rights reserved.
Planning for and surviving a BCM audit.
Freestone, Mandy; Lee, Michael
2008-01-01
Business continuity management (BCM) is moving progressively higher up the agendas of boardroom executives due to growing regulator, insurer and investor interest in risk management and BCM activity. With increasing pressure across all sectors, BCM has become an integral part of any effective corporate governance framework. Boardroom executives and senior management are thus now expected to provide an appropriate level of business continuity preparedness to better protect shareholder, investor and other stakeholder interests. The purpose of this paper is to build a link across the 'chasm' that separates the auditee from the auditor. The paper attempts to illuminate understanding about the process undertaken by an auditor when reviewing the BCM process. It details the steps the BCM auditor typically undertakes, and provides practical guidance as to the types of documentation and other supporting evidence required during the process. Additionally, the paper attempts to dispel commonly-held misconceptions about the BCM audit process. Executives, senior management and BCM practitioners will all benefit from the practical guidance offered in this paper, to assist in planning for and surviving a BCM audit.
Low activation steels welding with PWHT and coating for ITER test blanket modules and DEMO
NASA Astrophysics Data System (ADS)
Aubert, P.; Tavassoli, F.; Rieth, M.; Diegele, E.; Poitevin, Y.
2011-02-01
EUROFER weldability is investigated in support of the European material properties database and TBM manufacturing. Electron beam, hybrid, laser and narrow-gap TIG processes have been carried out on EUROFER-97 steel (thickness up to 40 mm), a reduced-activation ferritic-martensitic steel developed in Europe. These welding processes produce similar results with high joint coefficients and are well adapted for minimizing residual distortions. The fusion zones are typically composed of martensite laths with small grain sizes. In the heat-affected zones, martensite grains contain carbide precipitates. High hardness values are measured in all these zones which, if not tempered, would degrade toughness and creep resistance. PWHT development has led to a one-step PWHT (750 °C/3 h), successfully applied to joints and restoring good material performance. It produces lower distortion levels than a full austenitization PWHT, which is not really applicable to a complex welded structure such as the TBM. Different tungsten coatings have been successfully processed on EUROFER material, showing no real effect on the base-material microstructure.
Boubela, Roland N.; Kalcher, Klaudius; Huf, Wolfgang; Našel, Christian; Moser, Ewald
2016-01-01
Technologies for scalable analysis of very large datasets have emerged in the domain of internet computing, but are still rarely used in neuroimaging, despite the existence of data and research questions in need of efficient computational tools, especially in fMRI. In this work, we present software tools for the application of Apache Spark and Graphics Processing Units (GPUs) to neuroimaging datasets, in particular providing distributed file input for 4D NIfTI fMRI datasets in Scala for use in an Apache Spark environment. Examples for using this Big Data platform in graph analysis of fMRI datasets are shown to illustrate how processing pipelines employing it can be developed. With more tools for the convenient integration of neuroimaging file formats and typical processing steps, big data technologies could find wider endorsement in the community, leading to a range of potentially useful applications, especially in view of the current collaborative creation of a wealth of large data repositories including thousands of individual fMRI datasets. PMID:26778951
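The per-subject parallelism described here follows a simple map-over-files pattern. A minimal PySpark sketch is given below; the file paths and the processing function are hypothetical placeholders, and nibabel is assumed for NIfTI input rather than the paper's Scala reader.

```python
# Sketch: parallelize independent per-subject fMRI processing with PySpark.
# Paths and process_subject() are placeholders, not the authors' tools;
# nibabel is assumed to be installed on the workers.
from pyspark import SparkContext
import nibabel as nib
import numpy as np

def process_subject(path):
    img = nib.load(path)                   # load a 4D NIfTI fMRI dataset
    data = img.get_fdata()
    # toy "processing step": mean global signal per volume
    return path, data.reshape(-1, data.shape[-1]).mean(axis=0)

sc = SparkContext(appName="fmri-pipeline")
paths = ["/data/sub-%02d_bold.nii.gz" % i for i in range(1, 21)]  # hypothetical
results = sc.parallelize(paths, numSlices=20).map(process_subject).collect()
```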
Infrared thermography of welding zones produced by polymer extrusion additive manufacturing
Seppala, Jonathan E.; Migler, Kalman D.
2016-01-01
In common thermoplastic additive manufacturing (AM) processes, a solid polymer filament is melted, extruded though a rastering nozzle, welded onto neighboring layers and solidified. The temperature of the polymer at each of these stages is the key parameter governing these non-equilibrium processes, but due to its strong spatial and temporal variations, it is difficult to measure accurately. Here we utilize infrared (IR) imaging - in conjunction with necessary reflection corrections and calibration procedures - to measure these temperature profiles of a model polymer during 3D printing. From the temperature profiles of the printed layer (road) and sublayers, the temporal profile of the crucially important weld temperatures can be obtained. Under typical printing conditions, the weld temperature decreases at a rate of approximately 100 °C/s and remains above the glass transition temperature for approximately 1 s. These measurement methods are a first step in the development of strategies to control and model the printing processes and in the ability to develop models that correlate critical part strength with material and processing parameters. PMID:29167755
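The reflection correction mentioned above follows the standard single-bounce radiometric model: the measured radiance mixes emission from the object with reflected ambient radiance, W_meas = ε·W(T_obj) + (1 − ε)·W(T_refl). A minimal sketch is shown below, using a total (Stefan-Boltzmann) blackbody approximation rather than the paper's band-specific camera calibration; emissivity and temperatures are invented.

```python
# Sketch of a basic reflected-temperature correction for IR thermography.
# W_meas = eps * W(T_obj) + (1 - eps) * W(T_refl); invert for T_obj.
# Total-emission (Stefan-Boltzmann) approximation; real camera calibration
# is band-specific and more involved.
SIGMA = 5.670e-8          # W m^-2 K^-4

def radiance(T):          # total emissive power, blackbody approximation
    return SIGMA * T**4

def corrected_temperature(W_meas, eps, T_refl):
    W_obj = (W_meas - (1 - eps) * radiance(T_refl)) / eps
    return (W_obj / SIGMA) ** 0.25

# Example: apparent 480 K reading on a polymer road (eps ~ 0.95, invented),
# with 295 K surroundings reflected into the camera.
W = radiance(480.0)
print(corrected_temperature(W, 0.95, 295.0))   # slightly above 480 K
```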
New Ground Truth Capability from InSAR Time Series Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, S; Vincent, P; Yang, D
2005-07-13
We demonstrate that next-generation interferometric synthetic aperture radar (InSAR) processing techniques applied to existing data provide rich InSAR ground truth content for exploitation in seismic source identification. InSAR time series analyses utilize tens of interferograms and can be implemented in different ways. In one such approach, conventional InSAR displacement maps are inverted in a final post-processing step. Alternatively, computationally intensive data reduction can be performed with specialized InSAR processing algorithms. The typical final result of these approaches is a synthesized set of cumulative displacement maps. Examples from our recent work demonstrate that these InSAR processing techniques can provide appealing new ground truth capabilities. We construct movies showing the areal and temporal evolution of deformation associated with previous nuclear tests. In other analyses, we extract time histories of centimeter-scale surface displacement associated with tunneling. The potential exists to identify millimeter-per-year surface movements when sufficient data exist for InSAR techniques to isolate and remove phase signatures associated with digital elevation model errors and the atmosphere.
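In the common small-baseline formulation, the inversion of displacement maps in a final post-processing step reduces, per pixel, to a linear least-squares problem: each interferogram observes the displacement difference between its two acquisition dates. A toy sketch with invented dates and values:

```python
# Toy per-pixel SBAS-style inversion: interferogram i spanning dates
# (m_i, s_i) observes d[s_i] - d[m_i]; solve A d = obs in least squares,
# with d[0] fixed to zero as the reference. Dates/values are invented.
import numpy as np

dates = [0, 1, 2, 3, 4]                          # acquisition epochs
pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
obs = np.array([0.4, 1.1, 0.6, 1.5, 1.9, 1.0])   # unwrapped phase -> cm

A = np.zeros((len(pairs), len(dates) - 1))       # unknowns: d[1..N], d[0] = 0
for i, (m, s) in enumerate(pairs):
    if s > 0: A[i, s - 1] += 1.0
    if m > 0: A[i, m - 1] -= 1.0

d, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(np.r_[0.0, d])                             # cumulative displacement series
```

Stacking this solution over all coherent pixels yields exactly the "synthesized set of cumulative displacement maps" the abstract describes.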
The Influence of Task Complexity on Knee Joint Kinetics Following ACL Reconstruction
Schroeder, Megan J.; Krishnan, Chandramouli; Dhaher, Yasin Y.
2015-01-01
Background Previous research indicates that subjects with anterior cruciate ligament reconstruction exhibit abnormal knee joint movement patterns during functional activities like walking. While the sagittal plane mechanics have been studied extensively, less is known about the secondary planes, specifically with regard to more demanding tasks. This study explored the influence of task complexity on functional joint mechanics in the context of graft-specific surgeries. Methods In 25 participants (10 hamstring tendon graft, 6 patellar tendon graft, 9 matched controls), three-dimensional joint torques were calculated using a standard inverse dynamics approach during level walking and stair descent. The stair descent task was separated into two functionally different sub-tasks—step-to-floor and step-to-step. The differences in external knee moment profiles were compared between groups; paired differences between the reconstructed and non-reconstructed knees were also assessed. Findings The reconstructed knees, irrespective of graft type, typically exhibited significantly lower peak knee flexion moments compared to control knees during stair descent, with the differences more pronounced in the step-to-step task. Frontal plane adduction torque deficits were graft-specific and limited to the hamstring tendon knees during the step-to-step task. Internal rotation torque deficits were also primarily limited to the hamstring tendon graft group during stair descent. Collectively, these results suggest that task complexity was a primary driver of differences in joint mechanics between anterior cruciate ligament reconstructed individuals and controls, and such differences were more pronounced in individuals with hamstring tendon grafts. Interpretation The mechanical environment experienced in the cartilage during repetitive, cyclical tasks such as walking and other activities of daily living has been argued to contribute to the development of degenerative changes to the joint and ultimately osteoarthritis. Given the task-specific and graft-specific differences in joint mechanics detected in this study, care should be taken during the rehabilitation process to mitigate these changes. PMID:26101055
NASA Astrophysics Data System (ADS)
Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.
2013-10-01
In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
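The separated update scheme can be sketched as a plain velocity Verlet loop in which thermostat and barostat updates fire only every N steps. The rescalings below are simple Berendsen-style stand-ins for the paper's integrators, with a harmonic trap as a placeholder force field; all parameters are invented.

```python
# Schematic of separated updates: Newtonian particle motion every step,
# thermostat/barostat only every n_thermo / n_baro steps. Berendsen-style
# rescalings stand in for the paper's integrators; units have m = kB = 1.
import numpy as np

def forces(x):                 # harmonic trap as a placeholder force field
    return -x

def run(x, v, dt=0.001, steps=10000, n_thermo=100, n_baro=400, T_target=1.0):
    f = forces(x)
    for step in range(steps):
        v += 0.5 * dt * f                      # velocity Verlet, kick
        x += dt * v                            # drift
        f = forces(x)
        v += 0.5 * dt * f                      # kick
        if step % n_thermo == 0:               # infrequent thermostat update
            T_inst = np.mean(v ** 2)           # instantaneous kT
            v *= np.sqrt(T_target / max(T_inst, 1e-12))
        if step % n_baro == 0:                 # infrequent barostat update
            x *= 1.0                           # volume rescale placeholder
    return x, v

x, v = run(np.random.randn(256), np.random.randn(256))
print(np.mean(v ** 2))                         # ~1.0, the target temperature
```

In a parallel code, the two `if` blocks are the only places requiring global communication, which is why applying them infrequently pays off.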
QUICR-learning for Multi-Agent Coordination
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2006-01-01
Coordinating multiple agents that need to perform a sequence of actions to maximize a system-level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t' > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second credit assignment problem is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards-learning" (QUICR-learning), designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a ten-fold increase in performance over existing methods.
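The counterfactual-reward idea can be sketched in a toy congestion domain: each agent's immediate reward is the global reward minus the global reward recomputed with that agent's action removed. The congestion function, capacities, and single-step (bandit-style) update below are illustrative, not the paper's domain or its full temporal-difference formulation.

```python
# Sketch of learning with an immediate counterfactual (difference) reward:
# r_i = G(z) - G(z with agent i's action removed). Toy congestion domain.
import numpy as np
rng = np.random.default_rng(0)

n_agents, n_actions, episodes = 5, 3, 2000
Q = np.zeros((n_agents, n_actions))
alpha, eps = 0.1, 0.1
cap = np.array([2.0, 2.0, 1.0])             # per-route capacities (invented)

def G(counts):                              # global reward: throughput minus congestion
    return float(np.sum(np.minimum(counts, cap))
                 - 0.5 * np.sum(np.maximum(counts - cap, 0) ** 2))

for _ in range(episodes):
    a = np.where(rng.random(n_agents) < eps,            # eps-greedy actions
                 rng.integers(n_actions, size=n_agents),
                 Q.argmax(axis=1))
    counts = np.bincount(a, minlength=n_actions).astype(float)
    g = G(counts)
    for i in range(n_agents):
        counterfactual = counts.copy()
        counterfactual[a[i]] -= 1           # remove agent i's action from the system
        r_i = g - G(counterfactual)         # agent i's counterfactual reward
        Q[i, a[i]] += alpha * (r_i - Q[i, a[i]])   # single-step update
print(Q.argmax(axis=1))                     # agents spread across routes
```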
Rapid Generation of Large Dimension Photon Sieve Designs
NASA Technical Reports Server (NTRS)
Hariharan, Shravan; Fitzpatrick, Sean; Kim, Hyun Jung; Julian, Matthew; Sun, Wenbo; Tedjojuwono, Ken; MacDonnell, David
2017-01-01
A photon sieve is a revolutionary optical instrument that provides high-resolution imaging at a fraction of the weight of typical telescopes (areal density of 0.3 kg/m2 compared to 25 kg/m2 for the James Webb Space Telescope). The photon sieve is a variation of a Fresnel zone plate consisting of many small holes spread out in a ring-like pattern, which focuses light of a specific wavelength by diffraction. The team at NASA Langley Research Center has produced a variety of small photon sieves for testing. However, it is necessary to increase both the scale and rate of production, as a single sieve previously took multiple weeks to design and fabricate. This report details the different methods used to produce photon sieve designs in two file formats: CIF and DXF. The differences between these methods and the two file formats were compared to determine the most efficient design process. Finally, a step-by-step sieve design and fabrication process is described. The design files can be generated in both formats using an editing tool such as Microsoft Excel. However, an approach using a MATLAB program reduced the computing time of the designs and increased the ability of the user to generate large photon sieve designs. Although the CIF generation process was deemed the most efficient, the design techniques for both file types have been proven to generate complete photon sieves that can be used for scientific applications.
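The hole pattern underlying such a design can be generated from the Fresnel zone radii r_n = sqrt(n·λ·f + (n·λ/2)²), with pinholes scattered along alternate zones. A sketch with invented design parameters follows, writing CSV rather than the CIF/DXF formats discussed in the report; the 1.53 diameter-to-zone-width ratio is a commonly cited optimum, assumed here.

```python
# Sketch: generate pinhole centers/diameters for a photon sieve from Fresnel
# zone radii r_n = sqrt(n*lam*f + (n*lam/2)**2). Parameters are invented;
# real CIF/DXF output is omitted (CSV shown instead).
import numpy as np, csv

lam, f = 632.8e-9, 0.5          # HeNe wavelength (m), focal length (m); invented
n_zones, d_over_w = 100, 1.53   # zone count; hole diameter / zone width ratio

rows = []
for n in range(1, n_zones, 2):                  # open (alternate) zones only
    r = np.sqrt(n * lam * f + (n * lam / 2) ** 2)
    r_next = np.sqrt((n + 1) * lam * f + ((n + 1) * lam / 2) ** 2)
    w = r_next - r                              # local zone width
    d = d_over_w * w                            # pinhole diameter
    n_holes = int(0.9 * 2 * np.pi * r / d)      # ~90% packing along the ring
    for t in 2 * np.pi * np.random.rand(n_holes):
        rows.append((r * np.cos(t), r * np.sin(t), d))

with open("sieve.csv", "w", newline="") as fh:
    csv.writer(fh).writerows([("x_m", "y_m", "d_m")] + rows)
print(len(rows), "holes")
```

Generating coordinates programmatically like this, rather than cell by cell in a spreadsheet, is the essential reason the MATLAB route in the report scales to large sieves.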
McClements, David Julian
2017-02-01
Biopolymer microgels have considerable potential for their ability to encapsulate, protect, and release bioactive components. Biopolymer microgels are small particles (typically 100 nm to 1000 μm) whose interior consists of a three-dimensional network of cross-linked biopolymer molecules that traps a considerable amount of solvent. This type of particle is also sometimes referred to as a nanogel, hydrogel bead, biopolymer particle, or microsphere. Biopolymer microgels are typically prepared using a two-step process involving particle formation and particle gelation. This article reviews the major constituents and fabrication methods that can be used to prepare microgels, highlighting their advantages and disadvantages. It then provides an overview of the most important characteristics of microgel particles (such as size, shape, structure, composition, and electrical properties), and describes how these parameters can be manipulated to control the physicochemical properties and functional attributes of microgel suspensions (such as appearance, stability, rheology, and release profiles). Finally, recent examples of the utilization of biopolymer microgels to encapsulate, protect, or release bioactive agents, such as pharmaceuticals, nutraceuticals, enzymes, flavors, and probiotics, are given. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Siniscalchi, Agata; Romano, Gerardo; Barracano, Fabio; Balasco, Marianna; Tripaldi, Simona
2017-04-01
Analyzing 4 years of single-site MT continuous monitoring data, a systematic variation of the MT transfer function estimates was observed in the [20-100 s] period range, which was shown to be connected to the global geomagnetic activity, Ap index (Romano et al., 2014). The monitored period, from 2007 to 2011, includes the global minimum of solar activity which occurred in 2009 (low MT source amplitude). It was shown that the robust impedance estimations tend to stabilize when the Ap index exceeds a value of 10. In order to exclude a possible dependence of the observed fluctuation on the presence of a local cultural noise source, for a shorter period (about 2 months) the monitoring data were also processed using a remote site. Recently, Chave (2012) demonstrated that MT data can be described by the alpha-stable distribution family, which is characterized by four parameters that must be empirically determined. The Gaussian distribution belongs to this family as a special case when one of the four parameters, the tail thickness α, is equal to 2. Following Chave (2016), MT data are typically stably distributed, with the empirical observation that 0.8 ≤ α ≤ 1.8. In order to better understand the observed dependence of the MT continuous monitoring on the global geomagnetic activity, here we present the results of a re-analysis of the MT monitoring data with a two-step processing scheme. In the first step, we characterize the time series of the Alpha Stable Distribution Parameters (ASDP) as obtained from processing the whole dataset, with the aim of checking for possible connections between these parameters and the Ap index. In the second step, we estimate the ASDP using only the samples which satisfy the mathematical range of existence of the normalized WAL (Weaver et al., 2000), treating the latter as a diagnostic tool to detect which segments of the time series in the frequency domain are strongly contaminated by noise (WAL selection criterion). The comparison between the results of the two above-mentioned steps allows us to understand how the WAL-based selection criterion performs.
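The first-step characterization can be illustrated with scipy's levy_stable distribution, which fits the four alpha-stable parameters to a data window. Synthetic heavy-tailed data stand in for the MT residuals here; real processing involves windowing and robust weighting not shown.

```python
# Sketch: estimate the four alpha-stable parameters (alpha, beta, loc, scale)
# for one data window, as in step one of the processing described above.
# Synthetic heavy-tailed data stand in for MT transfer-function residuals.
from scipy.stats import levy_stable

data = levy_stable.rvs(alpha=1.5, beta=0.0, loc=0.0, scale=1.0, size=2000,
                       random_state=0)
alpha, beta, loc, scale = levy_stable.fit(data)   # fitting can be slow on long series
print(f"alpha = {alpha:.2f} (tail thickness; 2 would be Gaussian)")
```

Tracking the fitted alpha over successive windows gives exactly the ASDP time series whose correlation with the Ap index the abstract examines.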
Experimental and numerical analysis of interlocking rib formation at sheet metal blanking
NASA Astrophysics Data System (ADS)
Bolka, Špela; Bratuš, Vitoslav; Starman, Bojan; Mole, Nikolaj
2018-05-01
Cores for electrical motors are typically produced by blanking laminations and then stacking them together with, for instance, interlocking ribs or welding. Strict geometrical tolerances, both on the lamination and on the stack, combined with complex part geometry and harder steel strip material, call for the use of predictive methods to optimize the process before actual blanking, to reduce costs and speed up the process. One of the major influences on the final stack geometry is the quality of the interlocking ribs. A rib is formed in one step and joined with the rib of the preceding lamination in the next. The quality of the joint determines the firmness of the stack and also influences its geometry. Geometrical and positional accuracy is thus crucial in the rib formation process. In this study, a combined experimental and numerical analysis of interlocking rib formation has been performed. The aim of the analysis is to numerically predict the shape of the rib in order to perform a numerical simulation of the stack formation in the next step of the process. Detailed experimental research has been performed in order to characterize the parameters influencing rib formation and the geometry of the ribs themselves, using classical and 3D laser microscopy. The formation of the interlocking rib is then simulated using Abaqus Explicit. The Hill 48 constitutive material model is based on an extensive and novel material characterization process, combining data from in-plane and out-of-plane material tests to perform a 3D analysis of both rib formation and rib joining. The study shows good correlation between the experimental and numerical results.
NASA Astrophysics Data System (ADS)
Wagemans, Johan
2017-07-01
Matthew Pelowski and his colleagues from the Helmut Leder lab [17] have made a remarkable contribution to the field of art perception by reviewing the extensive and varied literature (more than 300 references) on all the factors involved, from a coherent, synthetic perspective: the Vienna Integrated Model of top-down and bottom-up processes in Art Perception (VIMAP). VIMAP builds on earlier attempts from the same group to provide a comprehensive theoretical framework, but it is much wider in scope and richer in the number of levels and topics covered under its umbrella. It is particularly strong in its discussion of the different psychological processes that lead to a wide range of possible responses to art, from mundane, superficial reactions to more profound responses characterized as moving, disturbing, and transformative. By including physiological, emotional, and evaluative factors, the model is able to address truly unique, even intimate responses to art such as awe, chills, thrills, and the experience of the sublime. The unique way in which this rich set of possible responses to art is achieved is through a series of five mandatory consecutive processing steps (each with their own typical duration), followed by two conditional additional steps (which take more time). Three processing checks along this cascade lead to three more or less spontaneous outcomes (<60 sec) and two more time-consuming ones (see their Fig. 1 for an excellent overview). I have no doubt that VIMAP will inspire a whole generation of scientists investigating perception and appreciation of art, testing specific hypotheses derived from this framework for decades to come.
Stepped-to-dart Leaders in Cloud-to-ground Lightning
NASA Astrophysics Data System (ADS)
Stolzenburg, M.; Marshall, T. C.; Karunarathne, S.; Karunarathna, N.; Warner, T.; Orville, R. E.
2013-12-01
Using time-correlated high-speed video (50,000 frames per second) and fast electric field change (5 megasamples per second) data for lightning flashes in East-central Florida, we describe an apparently rare type of subsequent leader: a stepped leader that finds and follows a previously used channel. The observed 'stepped-to-dart leaders' occur in three natural negative ground flashes. Stepped-to-dart leader connection altitudes are 3.3, 1.6 and 0.7 km above ground in the three cases. Prior to the stepped-to-dart connection, the advancing leaders have properties typical of stepped leaders. After the connection, the behavior changes almost immediately (within 40-60 μs) to dart or dart-stepped leader, with larger-amplitude E-change pulses and faster average propagation speeds. In this presentation, we will also describe the upward luminosity after the connection in the prior return stroke channel and in the stepped leader path, along with properties of the return strokes and other leaders in the three flashes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruce J. Mincher; Giuseppe Modolo; Stephen P. Mezyk
2009-01-01
Solvent extraction is the most commonly used process-scale separation technique for nuclear applications, benefiting from more than 60 years of research and development and proven experience at the industrial scale. Advanced solvent extraction processes for the separation of actinides and fission products from dissolved nuclear fuel are now being investigated worldwide by numerous groups (US, Europe, Russia, Japan, etc.) in order to decrease the radiotoxic inventories of nuclear waste. While none of the advanced processes have yet been implemented at the industrial scale, their development studies have sometimes reached demonstration tests at the laboratory scale. Most of the partitioning strategies rely on the following four separations: 1. Partitioning of uranium and/or plutonium from spent fuel dissolution liquors. 2. Separation of the heat-generating fission products such as strontium and cesium. 3. Coextraction of the trivalent actinides and lanthanides. 4. Separation of the trivalent actinides from the trivalent lanthanides. Tributylphosphate (TBP) in the first separation is the basis of the PUREX, UREX and COEX processes, developed in Europe and the US, whereas monoamides as alternatives to TBP are being developed in Japan and India. For the second separation, many processes were developed worldwide, including the use of crown-ether extractants, as in the FPEX process developed in the USA, and the CCD-PEG process jointly developed in the USA and Russia for the partitioning of cesium and strontium. In the third separation, phosphine oxides (CMPOs), malonamides, and diglycolamides are used in the TRUEX, DIAMEX and ARTIST processes, respectively developed in the US, Europe and Japan. Trialkylphosphine oxide (TRPO), developed in China, or UNEX (a mixture of several extractants), jointly developed in Russia and the USA, allow all actinides to be co-extracted from acidic radioactive liquid waste. For the final separation, soft-donor-atom-containing ligands such as the bistriazinylbipyridines (BTBPs) or dithiophosphinic acids have been developed in Europe and China to selectively extract the trivalent actinides. However, in the TALSPEAK process developed in the USA, the separation is based on the relatively high affinity of aminopolycarboxylic acid complexants such as DTPA for trivalent actinides over lanthanides. In the DIDPA, SETFICS and GANEX processes, developed in Japan and France, the group separation is accomplished in a reverse-TALSPEAK mode. A typical scenario is shown in Figure 1 for the UREX1a (Uranium Extraction version 1a) process. The initial step is the TBP extraction for the separation of recyclable uranium. The second step partitions the short-lived, highly radioactive cesium and strontium to minimize heat loading in the high-level waste repository. The third step is a group separation of the trivalent actinides and lanthanides, with the last step being partitioning of the trivalent lanthanides from the actinides.
NASA Astrophysics Data System (ADS)
Hendrik; Sebleku, P.; Siswayanti, B.; Pramono, A. W.
2017-05-01
The manufacture of high-critical-temperature (Tc) Bi,Pb-Sr-Ca-Cu-O (HTS BPSCCO) superconductor wire fabricated by powder-in-tube (PIT) is a multi-step process. The main difficulty is that the Tc of the superconductor wire is determined by various factors at each step. The objective of this research is to investigate the effect of sintering parameters on the properties of the final rolled material. The fabrication of 1 m of rolled, silver-sheathed monofilament BPSCCO superconductor wire using mechanical deformation processes, including rolling and drawing, has been carried out. Pure silver powders were melted and formed into a pure silver (Ag) tube. The tube was 10 mm in diameter, with a sheath-material-to-superconductor-powder ratio of about 6:1. Starting powders, with the nominal composition Bi2Sr2Ca(m-1)CumOy, were inserted into the pure silver tube, which was rolled until it reached a diameter of 4 mm. A typical area reduction ratio of about 5% per step has been proposed to prevent microcracking during the cold-drawing process. The rolling of the silver tube was repeated to obtain three samples, which were then heat-treated at 820 °C, 840 °C, and 860 °C, respectively. The surface morphology was analyzed using SEM; the crystal structure was studied using XRD, and the superconductivity was investigated through temperature-dependent resistivity measurements using the four-point probe technique. SEM images showed porosity on the cross-sectional surfaces of the samples. The sample heated at low temperature showed more porosity than the one heated at high temperature. The critical temperature (Tc) of the sample with a heating dwell time of 8 hours is 70 K; above 70 K it behaves as a normal conductor. However, the porosity increased as the heating time increased up to 24 hours, and the critical temperature was difficult to identify due to this porosity. According to the XRD results, the Bi-2212 phase is prominent in all samples.
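As a quick check of the deformation schedule quoted above: if each pass removes about 5% of the cross-sectional area, the 10 mm to 4 mm diameter reduction (area ratio (4/10)² = 0.16) takes roughly 36 passes.

```python
# Quick check: number of ~5% area-reduction passes to go from 10 mm to
# 4 mm diameter. Area scales with diameter squared.
import math
area_ratio = (4.0 / 10.0) ** 2          # 0.16
n = math.log(area_ratio) / math.log(1 - 0.05)
print(round(n))                          # ~36 passes
```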
NASA Astrophysics Data System (ADS)
Uunk, Bertram; Brouwer, Fraukje; ter Voorde, Marlies; Wijbrans, Jan
2018-02-01
The preservation of 40Ar/39Ar ages of high pressure (HP) metamorphic white mica reflects an interplay of processes that mobilise 40Ar, either through mica recrystallisation or by diffusive 40Ar loss. The applicability of resulting ages for dating tectonic processes is critically dependent on whether either of these processes can be proven to be efficient and exclusively active in removing 40Ar from mica. If not, preservation of an inherited or mixed age signal in a sample must be considered for interpretation. The Cycladic Blueschist Unit on Syros has become a new focal area in the discussion of the geological significance of argon age results from multi-grain step heating experiments. While some argue that age results can directly be linked to deformation or metamorphic growth events, others interpret age results to reflect the interplay of protracted recrystallisation and partial resetting, preserving a mixed age signal. Here, we demonstrate the potential of a new approach of multiple single grain fusion dating. Using the distribution of ages at the sample, section and regional scale, we show that in Northern Syros mica ages display systematic trends that can be understood as the result of three competing processes: 1) crystallisation along the prograde to peak metamorphic path, 2) a southward trend of increasing 40Ar loss by diffusion and 3) localised and rock type dependent deformation or metamorphic reactions leading to an observed age spread typically limited to ∼10 Myr at the section scale. None of the sections yielded the anomalously old age results that would be diagnostic for significant excess 40Ar. The recorded trends in ages for each of the studied sections reflect a range of P-T conditions and duration of metamorphism. Diffusion modelling shows that in a typical subduction metamorphic loop, subtle variations in P-T-t history can explain that age contrasts occur on a regional scale but are limited on the outcrop scale. Our new approach provides a comprehensive inventory of the range of ages present in different rocks and at different scales, which results in a more refined understanding of argon retention and isotopic closure of phengite and the geological significance of the ages. We verify the added value of our new approach by comparison with multi-grain step heating experiments on selected samples from the same sections.
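The diffusive-loss argument can be illustrated with the textbook fractional-loss series for a sphere; the geometry and the Dt/a² values below are illustrative only, not measured phengite diffusion parameters (phengite is often modeled with cylindrical geometry and temperature-dependent diffusivity).

```python
# Illustrative fractional 40Ar loss from a sphere after isothermal heating:
# f = 1 - (6/pi^2) * sum_{n>=1} exp(-n^2 pi^2 Dt/a^2) / n^2   (Crank-type series).
# Geometry and Dt/a^2 values are illustrative, not measured phengite data.
import math

def fractional_loss(Dt_a2, terms=2000):
    s = sum(math.exp(-n**2 * math.pi**2 * Dt_a2) / n**2 for n in range(1, terms))
    return 1.0 - (6.0 / math.pi**2) * s

for Dt_a2 in (1e-4, 1e-3, 1e-2, 1e-1):
    print(f"Dt/a^2 = {Dt_a2:.0e}  ->  loss = {fractional_loss(Dt_a2):.2f}")
```

The steep dependence of loss on Dt/a² is what lets subtle variations in P-T-t history produce region-scale age contrasts while leaving outcrop-scale spreads small.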
Process, including PSA and membrane separation, for separating hydrogen from hydrocarbons
Baker, Richard W.; Lokhandwala, Kaaeid A.; He, Zhenjie; Pinnau, Ingo
2001-01-01
An improved process for separating hydrogen from hydrocarbons. The process includes a pressure swing adsorption step, a compression/cooling step and a membrane separation step. The membrane step relies on achieving a methane/hydrogen selectivity of at least about 2.5 under the conditions of the process.
NASA Astrophysics Data System (ADS)
Yan, Li; Liao, Lei; Huang, Wei; Li, Lang-quan
2018-04-01
The analysis of nonlinear characteristics and control of the mode transition process is the crucial issue in enhancing the stability and reliability of the dual-mode scramjet engine. In the current study, the mode transition processes in both a strut-based combustor and a cavity-strut based combustor are numerically studied, and the influence of the cavity on the transition process is analyzed in detail. The simulations are conducted by means of the Reynolds-averaged Navier-Stokes (RANS) equations coupled with the renormalization group (RNG) k-ε turbulence model and a single-step chemical reaction mechanism, and this numerical approach is validated by comparing the predicted results with the available experimental shadowgraphs in the open literature. During the mode transition process, an obvious nonlinear property is observed, namely the uneven variation of pressure along the combustor. The hysteresis phenomenon is more obvious upstream in the flow field. For the cavity-strut configuration, the whole flow field is more inclined to the supersonic state during the transition process, and conversion to the ramjet mode is harder; the scram-to-ram transition process would be more stable, and the hysteresis effect in the ram-to-scram transition process would be reduced.
Click It or Ticket Evaluation, 2011
DOT National Transportation Integrated Search
2013-05-01
The 2011 Click It or Ticket (CIOT) mobilization followed a typical selective traffic enforcement program (STEP) sequence, involving paid media, earned media, and enforcement. A nationally representative telephone survey indicated that the mobilizatio...
Janneck, Robby; Pilet, Nicolas; Bommanaboyena, Satya Prakash; Watts, Benjamin; Heremans, Paul; Genoe, Jan; Rolin, Cedric
2017-11-01
Highly crystalline thin films of organic semiconductors offer great potential for fundamental material studies as well as for realizing high-performance, low-cost flexible electronics. The fabrication of these films directly on inert substrates is typically done by meniscus-guided coating techniques. The resulting layers show morphological defects that hinder charge transport and induce large device-to-device variability. Here, a double-step method for organic semiconductor layers is reported, combining a solution-processed templating layer with lateral homo-epitaxial growth in a thermal evaporation step. The epitaxial regrowth repairs most of the morphological defects inherent to meniscus-guided coatings. The resulting film is highly crystalline and features a mobility increased by a factor of three and a relative spread in device characteristics improved by almost half an order of magnitude. This method is easily adaptable to other coating techniques and offers a route toward the fabrication of high-performance, large-area electronics based on highly crystalline thin films of organic semiconductors. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Weak partitioning chromatography for anion exchange purification of monoclonal antibodies.
Kelley, Brian D; Tobler, Scott A; Brown, Paul; Coffman, Jonathan L; Godavarti, Ranga; Iskra, Timothy; Switzer, Mary; Vunnum, Suresh
2008-10-15
Weak partitioning chromatography (WPC) is an isocratic chromatographic protein separation method performed under mobile phase conditions where a significant amount of the product protein binds to the resin, well in excess of typical flowthrough operations. The more stringent load and wash conditions lead to improved removal of more tightly binding impurities, although at the cost of a reduction in step yield. The step yield can be restored by extending the column load and incorporating a short wash at the end of the load stage. The use of WPC with anion exchange resins enables a two-column cGMP purification platform to be used for many different mAbs. The operating window for WPC can be easily established using high throughput batch-binding screens. Under conditions that favor very strong product binding, competitive effects from product binding can give rise to a reduction in column loading capacity. Robust performance of WPC anion exchange chromatography has been demonstrated in multiple cGMP mAb purification processes. Excellent clearance of host cell proteins, leached Protein A, DNA, high molecular weight species, and model virus has been achieved. (c) 2008 Wiley Periodicals, Inc.
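The batch-binding screens mentioned above yield the partition coefficient Kp = Q/C, the ratio of bound protein per litre of resin to free protein per litre of supernatant; weak-partitioning operation corresponds to Kp values well above the flowthrough regime. A sketch of the per-well calculation, with invented volumes and concentrations:

```python
# Sketch: partition coefficient from one high-throughput batch-binding well.
# Kp = Q / C, with Q = bound protein per litre of resin and C = free protein
# per litre of supernatant. Volumes/concentrations below are invented.
def Kp(c0, c, v_liq, v_resin):
    """c0, c: initial/final free concentration (g/L); volumes in the same units."""
    Q = (c0 - c) * v_liq / v_resin     # bound protein per litre of resin (g/L)
    return Q / c

# Example well: 200 uL liquid at 1.0 g/L mAb over 10 uL resin,
# 0.77 g/L remaining free after equilibration.
print(Kp(1.0, 0.77, 200.0, 10.0))      # ~6, a weak-partitioning condition
```

Mapping Kp over a grid of pH and counterion concentration in a filter plate is what defines the operating window before any column runs are performed.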
Rhudy, Matthew B; Mahoney, Joseph M
2018-04-01
The goal of this work is to compare the differences between various step counting algorithms using both accelerometer and gyroscope measurements from wrist and ankle-mounted sensors. Participants completed four different conditions on a treadmill while wearing an accelerometer and gyroscope on the wrist and the ankle. Three different step counting techniques were applied to the data from each sensor type and mounting location. It was determined that using gyroscope measurements allowed for better performance than the typically used accelerometers, and that ankle-mounted sensors provided better performance than those mounted on the wrist.
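A minimal version of the peak-detection style of step counting compared in such studies (band-pass to the gait band, then peak picking on the signal magnitude) can be sketched as follows; the thresholds and rates are invented, not the paper's algorithms.

```python
# Minimal peak-detection step counter over a gyroscope (or accelerometer)
# magnitude signal. Thresholds/rates invented; not the paper's algorithms.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_steps(signal_xyz, fs=100.0, min_step_hz=0.5, max_step_hz=3.0):
    mag = np.linalg.norm(signal_xyz, axis=1)          # orientation-free magnitude
    b, a = butter(2, [min_step_hz, max_step_hz], btype="band", fs=fs)
    filt = filtfilt(b, a, mag)                        # keep gait-band content only
    peaks, _ = find_peaks(filt, height=0.5 * filt.std(),
                          distance=fs / max_step_hz)  # refractory period
    return len(peaks)

# Synthetic 60 s walk at ~1.8 steps/s (gravity-like offset keeps the
# magnitude signal at the gait frequency rather than its double).
t = np.arange(0, 60, 0.01)
sig = np.c_[9.8 + np.sin(2 * np.pi * 1.8 * t),
            0.1 * np.random.randn(len(t)),
            np.zeros(len(t))]
print(count_steps(sig))    # ~108 steps expected
```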
Muncy, Nathan M; Hedges-Muncy, Ariana M; Kirwan, C Brock
2017-01-01
Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that the steps have an effect on the volumetric output. To date, studies have compared between and not within pipelines, and so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. It was our hypothesis that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, where each participant contributed three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline 5 times. Repeated-measures analyses tested for a main effects of pipeline step, scan-rescan (for MRI scanner consistency) and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected, and interestingly an interaction between pipeline step and ROI exists. No effect for either scan-rescan or repeated pipeline run was detected. We then supply a correction for noise in the data resulting from pre-processing.
NASA Astrophysics Data System (ADS)
Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo
2014-09-01
We have developed stereo matching image processing based on synthesized color and corresponding synthesized-color areas for ranging objects and image recognition. The typical images from a pair of stereo imagers may show disagreements due to size changes, displacement, appearance changes, and deformation of characteristic areas. We construct the synthesized color and the corresponding color areas with the same synthesized color, to make the stereo matching distinct, in three steps. The first step builds a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency distribution, from which the binarization threshold is found. We use the Daubechies wavelet transform for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternately in the horizontal and vertical directions; the averaging is repeated until the fluctuation of the averaged color becomes negligible relative to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting pixels of the same synthesized color and grouping them under 4-directional connectivity. The matching areas for the stereo matching are determined from the synthesized-color areas; the matching point is the center of gravity of each synthesized-color area, from which the parallax between the image pair is readily derived. The method was tested on a toy soccer ball, showing that stereo matching by the synthesized-color technique is simple and effective.
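A loose reconstruction of the three steps is sketched below (PyWavelets' Daubechies transform for the edge step, iterated local averaging with quantization for color synthesis, and 4-connected labeling with centroid matching for parallax). The global averaging simplifies the edge-bounded averaging of the paper, and all parameters are invented.

```python
# Loose sketch of the three-step synthesized-color matching described above:
# (1) Daubechies-wavelet binary edges, (2) iterated brightness averaging with
# quantization into "synthesized colors", (3) 4-connected same-color areas
# matched by centroid to obtain parallax. Parameters are invented.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter, label, center_of_mass

def binary_edges(gray):
    """Step 1: binary edge map from Daubechies detail coefficients."""
    _, (h, v, d) = pywt.dwt2(gray.astype(float), "db2")
    detail = np.abs(h) + np.abs(v) + np.abs(d)
    return detail > detail.mean() + detail.std()   # threshold from the distribution

def synthesized_color(gray, iters=50, levels=32):
    """Step 2 (simplified): global local averaging instead of edge-bounded runs."""
    out = gray.astype(float)
    for _ in range(iters):                         # repeat until quasi-stable
        out = uniform_filter(out, size=3)
    return (out * levels / 256).astype(int)        # quantized synthesized color

def centroid_of_color(q, color):
    """Step 3: centroid of the largest 4-connected area of one color."""
    lbl, n = label(q == color)                     # default structure = 4-connectivity
    if n == 0:
        return None
    sizes = np.bincount(lbl.ravel())[1:]
    return center_of_mass(lbl == (1 + sizes.argmax()))

# Toy pair: a bright patch shifted 7 px between the two views.
left = np.zeros((120, 160)); left[40:80, 60:100] = 200.0
right = np.roll(left, -7, axis=1)
qL, qR = synthesized_color(left), synthesized_color(right)
color = qL.max()                                   # the bright synthesized color
dx = centroid_of_color(qL, color)[1] - centroid_of_color(qR, color)[1]
print(f"edges: {binary_edges(left).sum()} px, estimated parallax ~ {dx:.1f} px")
```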
Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Chadaram, Sudha; Mande, Sharmila S
2011-11-30
Obtaining accurate estimates of microbial diversity using rDNA profiling is the first step in most metagenomics projects. Consequently, most metagenomic projects spend considerable amounts of time, money and manpower for experimentally cloning, amplifying and sequencing the rDNA content in a metagenomic sample. In the second step, the entire genomic content of the metagenome is extracted, sequenced and analyzed. Since DNA sequences obtained in this second step also contain rDNA fragments, rapid in silico identification of these rDNA fragments would drastically reduce the cost, time and effort of current metagenomic projects by entirely bypassing the experimental steps of primer based rDNA amplification, cloning and sequencing. In this study, we present an algorithm called i-rDNA that can facilitate the rapid detection of 16S rDNA fragments from amongst millions of sequences in metagenomic data sets with high detection sensitivity. Performance evaluation with data sets/database variants simulating typical metagenomic scenarios indicates the significantly high detection sensitivity of i-rDNA. Moreover, i-rDNA can process a million sequences in less than an hour on a simple desktop with modest hardware specifications. In addition to the speed of execution, high sensitivity and low false positive rate, the utility of the algorithmic approach discussed in this paper is immense given that it would help in bypassing the entire experimental step of primer-based rDNA amplification, cloning and sequencing. Application of this algorithmic approach would thus drastically reduce the cost, time and human efforts invested in all metagenomic projects. A web-server for the i-rDNA algorithm is available at http://metagenomics.atc.tcs.com/i-rDNA/
Towards numerical prediction of cavitation erosion
Fivel, Marc; Franc, Jean-Pierre; Chandra Roy, Samir
2015-01-01
This paper is intended to provide a potential basis for a numerical prediction of cavitation erosion damage. The proposed method can be divided into two steps. The first step consists in determining the loading conditions due to cavitation bubble collapses. It is shown that individual pits observed on highly polished metallic samples exposed to cavitation for a relatively small time can be considered as the signature of bubble collapse. By combining pitting tests with an inverse finite-element modelling (FEM) of the material response to a representative impact load, loading conditions can be derived for each individual bubble collapse in terms of stress amplitude (in gigapascals) and radial extent (in micrometres). This step requires characterizing as accurately as possible the properties of the material exposed to cavitation. This characterization should include the effect of strain rate, which is known to be high in cavitation erosion (typically of the order of several thousands s−1). Nanoindentation techniques as well as compressive tests at high strain rate using, for example, a split Hopkinson pressure bar test system may be used. The second step consists in developing an FEM approach to simulate the material response to the repetitive impact loads determined in step 1. This includes a detailed analysis of the hardening process (isotropic versus kinematic) in order to properly account for fatigue as well as the development of a suitable model of material damage and failure to account for mass loss. Although the whole method is not yet fully operational, promising results are presented that show that such a numerical method might be, in the long term, an alternative to correlative techniques used so far for cavitation erosion prediction. PMID:26442139
Managing quality and compliance.
McNeil, Alice; Koppel, Carl
2015-01-01
Critical care nurses assume vital roles in maintaining patient care quality. There are distinct facets to the process including standard setting, regulatory compliance, and completion of reports associated with these endeavors. Typically, multiple niche software applications are required and user interfaces are varied and complex. Although there are distinct quality indicators that must be tracked as well as a list of serious or sentinel events that must be documented and reported, nurses may not know the precise steps to ensure that information is properly documented and actually reaches the proper authorities for further investigation and follow-up actions. Technology advances have permitted the evolution of a singular software platform, capable of monitoring quality indicators and managing all facets of reporting associated with regulatory compliance.
Seismic instrumentation of buildings
Çelebi, Mehmet
2000-01-01
The purpose of this report is to provide information on how and why we deploy seismic instruments in and around building structures. The recorded response data from buildings and other instrumented structures are primarily used to facilitate studies that improve building codes and thereby reduce losses of life and property during damaging earthquakes. Such data can also be used in emergency response situations in large urban environments. The report discusses typical instrumentation schemes, existing instrumentation programs, the steps generally followed in instrumenting a structure, selection and type of instruments, installation and maintenance requirements, and data retrieval and processing issues. In addition, a summary section on how recorded response data have been utilized is included. The benefits from instrumentation of structural systems are discussed.
Linear or Rotary Actuator Using Electromagnetic Driven Hammer as Prime Mover
NASA Technical Reports Server (NTRS)
McMahan, Bert K. (Inventor); Sesler, Joshua J. (Inventor); Paine, Matthew T. (Inventor); McMahan, Mark C. (Inventor); Paine, Jeffrey S. N. (Inventor); Smith, Byron F. (Inventor)
2018-01-01
We claim a hammer driven actuator that uses the fast-motion, low-force characteristics of an electro-magnetic or similar prime mover to develop kinetic energy that can be transformed via a friction interface to produce a higher-force, lower-speed linear or rotary actuator by using a hammering process to produce a series of individual steps. Such a system can be implemented using a voice-coil, electro-mechanical solenoid or similar prime mover. Where a typical actuator provides limited range of motion or low force, the range of motion of a linear or rotary impact driven motor can be configured to provide large displacements which are not limited by the characteristic dimensions of the prime mover.
Granovsky, Yelena; Yarnitsky, David
2013-01-01
Experimental pain stimuli can be used to simulate patients’ pain experience. We review recent developments in psychophysical pain testing, focusing on the application of the dynamic tests—conditioned pain modulation (CPM) and temporal summation (TS). Typically, patients with clinical pain of various types express either less efficient CPM or enhanced TS, or both. These tests can be used in prediction of incidence of acquiring pain and of its intensity, as well as in assisting the correct choice of analgesic agents for individual patients. This can help to shorten the commonly occurring long and frustrating process of adjusting analgesic agents to the individual patients. We propose that evaluating pain modulation can serve as a step forward in individualizing pain medicine. PMID:24228167
A Continuum of Progress: Applications of N-Hetereocyclic Carbene Catalysis in Total Synthesis
Izquierdo, Javier; Hutson, Gerri E.; Cohen, Daniel T.; Scheidt, Karl A.
2013-01-01
N-Heterocyclic carbene (NHC) catalyzed transformations have emerged as powerful tactics for the construction of complex molecules. Since Stetter’s report in 1975 of the total synthesis of cis-jasmon and dihydrojasmon by using carbene catalysis, the use of NHCs in total synthesis has grown rapidly, particularly over the last decade. This renaissance is undoubtedly due to the recent developments in NHC-catalyzed reactions, including new benzoin, Stetter, homoenolate, and aroylation processes. These transformations employ typical as well as Umpolung types of bond disconnections and have served as the key step in several new total syntheses. This Minireview highlights these reports and captures the excitement and emerging synthetic utility of carbene catalysis in total synthesis. PMID:23074146
The longevity of habitable planets and the development of intelligent life
NASA Astrophysics Data System (ADS)
Simpson, Fergus
2017-07-01
Why did the emergence of our species require a timescale similar to the entire habitable period of our planet? Our late appearance has previously been interpreted by Carter (2008) as evidence that observers typically require a very long development time, implying that intelligent life is a rare occurrence. Here we present an alternative explanation, which simply asserts that many planets possess brief periods of habitability. We also propose that the rate-limiting step for the formation of observers is the enlargement of species from an initially microbial state. In this scenario, the development of intelligent life is a slow but almost inevitable process, greatly enhancing the prospects of future search for extra-terrestrial intelligence (SETI) experiments such as the Breakthrough Listen project.
Boosting pitch encoding with audiovisual interactions in congenital amusia.
Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne
2015-01-01
The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute the first step towards the perspective to exploit multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate range of unimodal performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbee, T. W.; Schena, D.
This was a collaborative effort between Lawrence Livermore National Security, LLC, as manager and operator of Lawrence Livermore National Laboratory (LLNL), and TroyCap LLC, to develop manufacturing steps for commercial production of nano-structure capacitors. The technical objective of this project was to demonstrate deposition rates of selected dielectric materials that are 2 to 5 times higher than those achievable with current technology.
NASA Astrophysics Data System (ADS)
McMahon, Ann P.
Educating K-12 students in the processes of design engineering is gaining popularity in public schools. Several states have adopted standards for engineering design despite the fact that no common agreement exists on what should be included in the K-12 engineering design process. Furthermore, little pre-service and in-service professional development exists that will prepare teachers to teach a design process that is fundamentally different from the science teaching process found in typical public schools. This study provides a glimpse into what teachers think happens in engineering design compared to articulated best practices in engineering design. Wenger's communities of practice work and van Dijk's multidisciplinary theory of mental models provide the theoretical bases for comparing the mental models of two groups of elementary teachers (one group that teaches engineering and one that does not) to the mental models of design engineers (including this engineer/researcher/educator and professionals described elsewhere). The elementary school teachers and this engineer/researcher/educator observed the design engineering process enacted by professionals, then answered questions designed to elicit their mental models of the process they saw in terms of how they would teach it to elementary students. The key finding is this: Both groups of teachers embedded the cognitive steps of the design process into the matrix of the social and emotional roles and skills of students. Conversely, the engineers embedded the social and emotional aspects of the design process into the matrix of the cognitive steps of the design process. In other words, teachers' mental models show that they perceive that students' social and emotional communicative roles and skills in the classroom drive their cognitive understandings of the engineering process, while the mental models of this engineer/researcher/educator and the engineers in the video show that we perceive that cognitive understandings of the engineering process drive the social and emotional roles and skills used in that process. This comparison of mental models with the process that professional designers use defines a problem space for future studies that investigate how to incorporate engineering practices into elementary classrooms. Recommendations for engineering curriculum development and teacher professional development based on this study are presented.
How many steps/day are enough? for children and adolescents
2011-01-01
Worldwide, public health physical activity guidelines include special emphasis on populations of children (typically 6-11 years) and adolescents (typically 12-19 years). Existing guidelines are commonly expressed in terms of frequency, time, and intensity of behaviour. However, the simple step output from both accelerometers and pedometers is gaining increased credibility in research and practice as a reasonable approximation of daily ambulatory physical activity volume. Therefore, the purpose of this article is to review existing child and adolescent objectively monitored step-defined physical activity literature to provide researchers, practitioners, and lay people who use accelerometers and pedometers with evidence-based translations of these public health guidelines in terms of steps/day. In terms of normative data (i.e., expected values), the updated international literature indicates that we can expect 1) among children, boys to average 12,000 to 16,000 steps/day and girls to average 10,000 to 13,000 steps/day; and, 2) adolescents to steadily decrease steps/day until approximately 8,000-9,000 steps/day are observed in 18-year olds. Controlled studies of cadence show that continuous MVPA walking produces 3,300-3,500 steps in 30 minutes or 6,600-7,000 steps in 60 minutes in 10-15 year olds. Limited evidence suggests that a total daily physical activity volume of 10,000-14,000 steps/day is associated with 60-100 minutes of MVPA in preschool children (approximately 4-6 years of age). Across studies, 60 minutes of MVPA in primary/elementary school children appears to be achieved, on average, within a total volume of 13,000 to 15,000 steps/day in boys and 11,000 to 12,000 steps/day in girls. For adolescents (both boys and girls), 10,000 to 11,700 may be associated with 60 minutes of MVPA. Translations of time- and intensity-based guidelines may be higher than existing normative data (e.g., in adolescents) and therefore will be more difficult to achieve (but not impossible nor contraindicated). Recommendations are preliminary and further research is needed to confirm and extend values for measured cadences, associated speeds, and MET values in young people; continue to accumulate normative data (expected values) for both steps/day and MVPA across ages and populations; and, conduct longitudinal and intervention studies in children and adolescents required to inform the shape of step-defined physical activity dose-response curves associated with various health parameters. PMID:21798014
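Where step-count translations like those above are applied in practice, the conversion is simple linear cadence arithmetic. A minimal sketch, assuming the 3,300-3,500 steps per 30 minutes of continuous MVPA walking cited for 10-15 year olds scales linearly with time (the function and parameter names are illustrative, not part of the original guidelines):

```python
# Illustrative only: linear cadence scaling based on the 3,300-3,500 steps
# per 30 min of continuous MVPA walking reported above for 10-15 year olds.
def mvpa_step_range(mvpa_minutes, steps_per_30min=(3300, 3500)):
    lo, hi = steps_per_30min
    return (mvpa_minutes / 30.0) * lo, (mvpa_minutes / 30.0) * hi

print(mvpa_step_range(60))  # -> (6600.0, 7000.0), matching the 60-min figures cited
```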
A novel pre-processing technique for improving image quality in digital breast tomosynthesis.
Kim, Hyeongseok; Lee, Taewon; Hong, Joonpyo; Sabir, Sohail; Lee, Jung-Ryun; Choi, Young Wook; Kim, Hak Hee; Chae, Eun Young; Cho, Seungryong
2017-02-01
Nonlinear pre-reconstruction processing of the projection data in computed tomography (CT), where accurate recovery of the CT numbers is important for diagnosis, is usually discouraged, since such processing would violate the physics of image formation in CT. However, one can devise a pre-processing step to enhance the detectability of lesions in digital breast tomosynthesis (DBT), where accurate recovery of the CT numbers is fundamentally impossible due to the incompleteness of the scanned data. Since detecting lesions such as micro-calcifications and masses in breasts is the purpose of DBT, a technique that produces higher lesion detectability is justified. A histogram modification technique was developed in the projection data domain. The histogram of the raw projection data was first divided into two parts: one for the breast projection data and the other for the background. Background pixel values were set to a single value that represents the boundary between breast and background. After that, both histogram parts were shifted by an appropriate offset, and the histogram-modified projection data were log-transformed. A filtered-backprojection (FBP) algorithm was used for image reconstruction of DBT. To evaluate the performance of the proposed method, we computed the detectability index for images reconstructed from clinically acquired data. Typical breast border enhancement artifacts were greatly suppressed, and the detectability of calcifications and masses was increased by use of the proposed method. Compared to a global threshold-based post-reconstruction processing technique, the proposed method produced images of higher contrast without invoking additional image artifacts. In this work, we report a novel pre-processing technique that improves the detectability of lesions in DBT and has potential advantages over the global threshold-based post-reconstruction processing technique. The proposed method not only increased the lesion detectability but also reduced typical image artifacts pronounced in conventional FBP-based DBT. © 2016 American Association of Physicists in Medicine.
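The histogram modification described above lends itself to a short sketch. The following is a minimal illustration in Python, assuming a raw transmission projection and a precomputed breast mask; the choice of boundary value and the offset handling are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def modify_and_log(proj, breast_mask, offset):
    """Sketch of the two-part histogram modification: background pixels are
    collapsed to an assumed breast/background boundary value, both histogram
    parts are shifted by an offset, and the result is log-transformed
    before FBP reconstruction."""
    p = proj.astype(np.float64).copy()
    boundary = p[breast_mask].max()   # assumption: brightest breast pixel marks the boundary
    p[~breast_mask] = boundary        # flatten the background part of the histogram
    p += offset                       # shift both histogram parts
    return -np.log(p / p.max())       # conventional log transform of transmission data
```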
Particle Bonding Mechanism in Cold Gas Dynamic Spray: A Three-Dimensional Approach
NASA Astrophysics Data System (ADS)
Zhu, Lin; Jen, Tien-Chien; Pan, Yen-Ting; Chen, Hong-Sheng
2017-12-01
Cold gas dynamic spray (CGDS) is a surface coating process that uses highly accelerated particles to form the coating. In the CGDS process, metal particles with a diameter of 1-50 µm are carried by a gas stream at high pressure (typically 20-30 atm) through a de Laval-type nozzle to achieve supersonic velocity upon impact onto the substrate. Typically, the impact velocity ranges between 300 and 1200 m/s. When a particle is accelerated to its critical velocity, defined as the minimum in-flight velocity at which it can deposit on the substrate, adiabatic shear instabilities occur. Herein, to ascertain how particle size affects the critical velocity and bonding efficiency in the CGDS process, three-dimensional numerical simulations of the single-particle deposition process were performed. In the CGDS process, one of the most important parameters determining the bonding strength with the substrate is the particle impact temperature. It is hypothesized that the particle will bond to the substrate when its impact velocity surpasses the critical velocity, at which the interface can reach 60% of the melting temperature of the particle material (Ref 1, 2). Therefore, critical velocity should be a key parameter for coating quality. Note that the particle critical velocity is determined not only by its size but also by its material properties. This study numerically investigates the critical velocity for the particle deposition process in CGDS. In the present numerical analysis, copper (Cu) was chosen as the particle material and aluminum (Al) as the substrate material. The impact velocities were selected between 300 and 800 m/s, increasing in steps of 100 m/s. The simulation results reveal the temporal and spatial interfacial temperature distribution and deformation between particle(s) and substrate. Finally, a comparison is carried out between the computed results and experimental data.
3D superwide-angle one-way propagator and its application in seismic modeling and imaging
NASA Astrophysics Data System (ADS)
Jia, Xiaofeng; Jiang, Yunong; Wu, Ru-Shan
2018-07-01
Traditional one-way wave-equation-based propagators have been widely used in past decades. Compared to two-way propagators, one-way methods have higher efficiency and lower memory demands. These two features are especially important in solving large-scale 3D problems. However, regular one-way propagators cannot simulate waves that propagate at large angles approaching 90° because of their inherent wide-angle limitation. Traditional one-way propagators can only march the wavefield along a predetermined direction (e.g., the z-direction), so simulation of turning waves is beyond the ability of one-way methods. We develop a 3D superwide-angle one-way propagator to overcome the angle limitation and to simulate turning waves with propagation angles beyond 90° for modeling and imaging complex geological structures. Wavefields propagating along the vertical and horizontal directions are combined using a stacking scheme. A weight function related to the propagation angle is used for combining and updating the wavefields in each propagation step. In the implementation, we use graphics processing units (GPU) to accelerate the process. A typical workflow is designed to exploit the advantages of the GPU architecture. Numerical examples show that the method achieves higher accuracy in modeling and imaging steep structures than regular one-way propagators. In fact, the superwide-angle one-way propagator can be built on any one-way method to improve seismic modeling and imaging.
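The abstract describes the weight function only qualitatively. A minimal sketch of how an angle-dependent weight might blend the two one-way wavefields is given below; the cosine-squared taper is an assumption for illustration, not the authors' actual weight function:

```python
import numpy as np

def combine_wavefields(u_vert, u_horiz, theta):
    """Blend the vertically and horizontally propagated one-way wavefields
    with a weight tied to propagation angle theta (radians from vertical):
    favor the vertical propagator near 0, the horizontal one near pi/2."""
    w = np.cos(theta) ** 2          # illustrative taper, ~1 near vertical propagation
    return w * u_vert + (1.0 - w) * u_horiz
```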
NASA Astrophysics Data System (ADS)
Maier, A.; Schledjewski, R.
2016-07-01
In continuous manufacturing processes, mechanical preloading of the fibers occurs during delivery of the fibers from the spool creel to the actual manufacturing process step. Moreover, preloading of the dry roving bundles may be mandatory, e.g. during winding, to produce high-quality components. On the one hand, excessive tensile loads within dry roving bundles can result in catastrophic failure; on the other hand, a part produced under too low a pre-tension may have poor quality and mechanical properties. In this work, load conditions influencing the mechanical properties of dry glass fiber bundles during continuous composite manufacturing processes were analyzed. Load conditions, i.e. fiber delivery speed, necessary pre-tension, and other effects of the delivery system during continuous fiber winding, were chosen in process-typical ranges. First, the strain-rate dependency under static tensile load conditions was investigated. Furthermore, different free gauge lengths up to 1.2 m, interactions between fiber points of contact regarding the influence of sizing as well as impregnation, and the effect of twisting on the mechanical behavior of dry glass fiber bundles during fiber delivery were studied.
2017-01-01
Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that the steps have an effect on the volumetric output. To date, studies have compared between and not within pipelines, and so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. It was our hypothesis that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, where each participant contributed three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline 5 times. Repeated-measures analyses tested for a main effects of pipeline step, scan-rescan (for MRI scanner consistency) and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected, and interestingly an interaction between pipeline step and ROI exists. No effect for either scan-rescan or repeated pipeline run was detected. We then supply a correction for noise in the data resulting from pre-processing. PMID:29023597
A quick response four decade logarithmic high-voltage stepping supply
NASA Technical Reports Server (NTRS)
Doong, H.
1978-01-01
An improved high-voltage stepping supply for space instrumentation is described, for applications where low power consumption and fast settling time between steps are required. The high-voltage stepping supply, consuming an average power of 750 milliwatts, delivers a pair of mirror-image outputs with 64 logarithmic levels. It covers a four-decade range of ±2500 to ±0.29 volts with an output stability of ±0.5 percent or ±20 millivolts over all line, load, and temperature variations. The supply provides a typical step settling time of 1 millisecond, falling to 100 microseconds for the lower two decades. The versatile design of the high-voltage stepping supply provides a quick-response staircase generator, as described, or a fixed voltage with the option to change levels as required over large dynamic ranges without circuit modifications. The concept can be implemented up to ±5000 volts. With these design features, the high-voltage stepping supply should find numerous applications in charged-particle detection, electro-optical systems, and high-voltage scientific instruments.
Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri
2014-01-01
In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
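As a schematic of the kind of server-based Python driver described above, the sketch below chains pipeline steps per dataset, logs each step, and processes datasets concurrently. The step names and the farsight_* command-line wrappers are hypothetical stand-ins, not the real FARSIGHT interfaces:

```python
import logging
import subprocess
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(filename="pipeline.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

STEPS = ["mosaic", "preprocess", "segment", "extract_features"]  # hypothetical step names

def run_step(step, dataset):
    logging.info("start %s on %s", step, dataset)              # log every processing step
    subprocess.run([f"farsight_{step}", dataset], check=True)  # hypothetical CLI wrapper
    logging.info("done %s on %s", step, dataset)

def process(dataset):
    for step in STEPS:              # steps run sequentially for a given dataset...
        run_step(step, dataset)

if __name__ == "__main__":
    datasets = ["brain_01", "brain_02", "brain_03"]   # placeholder dataset IDs
    with ThreadPoolExecutor(max_workers=4) as pool:   # ...while datasets run concurrently
        list(pool.map(process, datasets))
```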
Aerospace Fuels From Nonpetroleum Raw Materials
NASA Technical Reports Server (NTRS)
Palaszewski, Bryan A.; Hepp, Aloysius F.; Kulis, Michael J.; Jaworske, Donald A.
2013-01-01
Recycling human metabolic and plastic wastes minimizes cost and increases efficiency by reducing the need to transport consumables and return trash, respectively, from orbit to support a space station crew. If the much larger costs of transporting consumables to the Moon and beyond are taken into account, developing waste recycling technologies becomes imperative and possibly mission enabling. Reduction of terrestrial waste streams while producing energy and/or valuable raw materials is an opportunity being realized by a new generation of visionary entrepreneurs; several relevant technologies are briefly compared, contrasted and assessed for space applications. A two-step approach to nonpetroleum raw materials utilization is presented; the first step involves production of supply or producer gas. This is akin to synthesis gas containing carbon oxides, hydrogen, and simple hydrocarbons. The second step involves production of fuel via the Sabatier process, a methanation reaction, or another gas-to-liquid technology, typically Fischer-Tropsch processing. Optimization to enhance the fraction of the product stream relevant to transportation fuels via catalytic (process) development at NASA Glenn Research Center is described. Energy utilization is a concern for production of fuels whether for operation on the lunar or Martian surface, or beyond. The term "green" relates not only to mitigating excess carbon release but also to the efficiency of energy usage. For space, energy usage can be an essential concern. Another issue of great concern is minimizing impurities in the product stream(s), especially those that are potential health risks and/or could degrade operations through catalyst poisoning or equipment damage; technologies being developed to remove heteroatom impurities are discussed. Alternative technologies to utilize waste fluids, such as a propulsion option called the resistojet, are discussed. The resistojet is an electric propulsion technology with a powered thruster that vaporizes and heats a propellant to high temperature; the hot gases are subsequently passed through a converging-diverging nozzle, expanding to supersonic velocities. A resistojet can accommodate many different fluids, including various reaction chamber (by-)products.
Aerospace Fuels from Nonpetroleum Raw Materials
NASA Technical Reports Server (NTRS)
Palaszewski, B. A.; Hepp, A. F.; Kulis, M. J.; Jaworske, D. A.
2013-01-01
Recycling human metabolic and plastic wastes minimizes cost and increases efficiency by reducing the need to transport consumables and return trash, respectively, from orbit to support a space station crew. If the much larger costs of transporting consumables to the Moon and beyond are taken into account, developing waste recycling technologies becomes imperative and possibly mission enabling. Reduction of terrestrial waste streams while producing energy and/or valuable raw materials is an opportunity being realized by a new generation of visionary entrepreneurs; several relevant technologies are briefly compared, contrasted and assessed for space applications. A two-step approach to nonpetroleum raw materials utilization is presented; the first step involves production of supply or producer gas. This is akin to synthesis gas containing carbon oxides, hydrogen, and simple hydrocarbons. The second step involves production of fuel via the Sabatier process, a methanation reaction, or another gas-to-liquid technology, typically Fischer-Tropsch processing. Optimization to enhance the fraction of the product stream relevant to transportation fuels via catalytic (process) development at NASA GRC is described. Energy utilization is a concern for production of fuels whether for operation on the lunar or Martian surface, or beyond. The term "green" relates not only to mitigating excess carbon release but also to the efficiency of energy usage. For space, energy usage can be an essential concern. Other issues of great concern include minimizing impurities in the product stream(s), especially those that are potential health risks and/or could degrade operations through catalyst poisoning or equipment damage; technologies being developed to remove heteroatom impurities are discussed. Alternative technologies to utilize waste fluids, such as a propulsion option called the resistojet, are discussed. The resistojet is an electric propulsion technology with a powered thruster that vaporizes and heats a propellant to high temperature; the hot gases are subsequently passed through a converging-diverging nozzle, expanding to supersonic velocities. A resistojet can accommodate many different fluids, including various reaction chamber (by-)products.
Transportation Impact Evaluation System
DOT National Transportation Integrated Search
1979-11-01
This report specifies a framework for spatial analysis and the general modelling steps required. It also suggests available urban and regional data sources, along with some typical existing urban and regional models. The goal is to develop a computer...
A forestry application simulation of man-machine techniques for analyzing remotely sensed data
NASA Technical Reports Server (NTRS)
Berkebile, J.; Russell, J.; Lube, B.
1976-01-01
The typical steps in the analysis of remotely sensed data for a forestry applications example are simulated. The example uses numerically-oriented pattern recognition techniques and emphasizes man-machine interaction.
NASA Astrophysics Data System (ADS)
Xue, Xiaochun; Yu, Yonggang; Mang, Shanshan
2017-07-01
Gas-liquid interaction instability is an important problem in the combustion and projectile motion processes of a bulk-loaded liquid propellant gun (BLPG), and data on this subject are presented. The instabilities arise from several sources: fluid motion that forms a combustion gas cavity (the Taylor cavity), fluid turbulence and breakup caused by liquid motion relative to the combustion chamber walls, and liquid surface breakup arising from a velocity mismatch at the gas-liquid interface. Typically, small disturbances that arise early in the BLPG interior ballistic cycle can become amplified in the absence of burn-rate-limiting characteristics. Herein, significant attention has been given to developing and emphasizing the need for better combustion repeatability in the BLPG. Toward this goal, the concept of using different combustion chamber geometries is introduced, and the use of a stepped-wall structure in the combustion chamber as a means of exerting boundary control on the combustion evolution, and thus restraining the combustion instability, has been verified experimentally in this work. Moreover, against this background, the numerical simulation addresses a special combustion issue under transient high-pressure and high-temperature conditions, namely the combustion mechanism in a stepped-wall combustion chamber filled with monopropellant, with one end stationary and the other end able to move at high speed. The numerical results also show that the burning surface of the liquid propellant can be defined geometrically and that combustion is well behaved, as ignition and combustion progressivity remain in a suitable range during each stage in this combustion chamber with a stepped-wall structure.
Method for localizing and isolating an errant process step
Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.
2003-01-01
A method for localizing and isolating an errant process includes the steps of retrieving from a defect image database a selection of images each image having image content similar to image content extracted from a query image depicting a defect, each image in the selection having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. A process step as a highest probable source of the defect according to the derived conditional probability distribution is then identified. A method for process step defect identification includes the steps of characterizing anomalies in a product, the anomalies detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.
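As a toy illustration of the probabilistic step in this method, the sketch below derives a conditional distribution over process steps from the characterization data of the retrieved images, weighting each retrieved image by its similarity to the query. The labels and the similarity weighting are illustrative assumptions, not the patented procedure itself:

```python
from collections import defaultdict

def errant_step_posterior(retrieved):
    """retrieved: (process_step_label, similarity) pairs for database images
    whose content matched the query defect image. Returns a similarity-weighted
    estimate of P(step | defect)."""
    weights = defaultdict(float)
    for step, sim in retrieved:
        weights[step] += sim
    total = sum(weights.values())
    return {step: w / total for step, w in weights.items()}

posterior = errant_step_posterior([("etch", 0.9), ("litho", 0.4), ("etch", 0.7)])
print(max(posterior, key=posterior.get))  # step flagged as the most probable source
```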
NASA Astrophysics Data System (ADS)
Dos Santos Ferreira, Olavio; Sadat Gousheh, Reza; Visser, Bart; Lie, Kenrick; Teuwen, Rachel; Izikson, Pavel; Grzela, Grzegorz; Mokaberi, Babak; Zhou, Steve; Smith, Justin; Husain, Danish; Mandoy, Ram S.; Olvera, Raul
2018-03-01
The ever-increasing need for tighter on-product overlay (OPO), as well as enhanced accuracy in overlay metrology and methodology, is driving the semiconductor industry's technologists to innovate new approaches to OPO measurements. In High Volume Manufacturing (HVM) fabs, it is often critical to strive for both accuracy and robustness. Robustness, in particular, can be challenging in metrology since overlay targets can be impacted by the proximity of other structures next to the overlay target (asymmetric effects), as well as by symmetric stack changes such as photoresist height variations. Both symmetric and asymmetric contributors have an impact on robustness. Furthermore, tweaking or optimizing wafer processing parameters for maximum yield may have an adverse effect on physical target integrity. As a result, measuring and monitoring physical changes or process abnormalities/artefacts in terms of new Key Performance Indicators (KPIs) is crucial for the end goal of minimizing true in-die overlay of the integrated circuits (ICs). IC manufacturing fabs often relied on CD-SEM in the past to capture true in-die overlay. Due to the destructive and intrusive nature of CD-SEMs on certain materials, it is desirable to characterize asymmetry effects for overlay targets via inline KPIs utilizing YieldStar (YS) metrology tools. These KPIs can also be integrated as part of (μDBO) target evaluation and selection for the final recipe flow. In this publication, the Holistic Metrology Qualification (HMQ) flow was extended to account for process-induced (asymmetric) effects such as Grating Imbalance (GI) and Bottom Grating Asymmetry (BGA). Local GI typically contributes to the intrafield OPO, whereas BGA typically impacts the interfield OPO, predominantly at the wafer edge. Stack height variations strongly impact overlay metrology accuracy, in particular in the case of a multi-layer Litho-Etch Litho-Etch (LELE) overlay control scheme. Introducing a KPI check for the GI impact on overlay (in nm) quantifies the grating imbalance impact on overlay, whereas optimizing for accuracy using self-reference captures the bottom grating asymmetry effect. Measuring BGA after each process step before exposure of the top grating helps to identify which specific step introduces the asymmetry in the bottom grating. By applying this set of KPIs to a BEOL LELE overlay scheme, we can enhance the robustness of recipe and target selection. Furthermore, these KPIs can be utilized to highlight process and equipment abnormalities. In this work, we also quantified OPO results with a self-contained methodology called the Triangle Method. This method can be utilized for LELE layers with a common target and reference. This allows validating general μDBO accuracy, hence reducing the need for CD-SEM verification.
Fernandes, Ricardo; Koudelka, Tomas; Tholey, Andreas; Dreves, Alexander
2017-07-15
AMS-radiocarbon measurements of amino acids can potentially provide more reliable radiocarbon dates than bulk collagen analysis. Nonetheless, the applicability of such an approach is often limited by the low throughput of existing isolation methods and difficulties in determining the contamination introduced during the separation process. A novel tertiary prep-HPLC amino acid isolation method was developed that relies on the combustion of eluted material without requiring any additional chemical steps. Amino acid separation was carried out using a gradient mix of pure water and phosphoric acid, with an acetonitrile step in between runs to remove hydrophobic molecules from the separation column. The amount of contaminant carbon and its 14C content were determined from two-point measurements of collagen samples of known 14C content. The amount of foreign carbon due to the isolation process was estimated at 4±1 μg, and its 14C content was 0.43±0.01 F14C. Radiocarbon values corrected for carbon contamination have only a minor increase in uncertainties. For Holocene samples, this corresponds to an added uncertainty typically smaller than 10 14C years. The developed method can be added to routine AMS measurements without implying significant operational changes and offers a level of measurement uncertainty that is suitable for many archaeological, ecological, environmental, and biological applications. Copyright © 2017. Published by Elsevier B.V.
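The correction implied above is the standard mass-balance blank correction. A minimal sketch using the abstract's estimates (4 ± 1 μg of contaminant carbon at 0.43 F14C); uncertainty propagation is omitted and the function name is illustrative:

```python
def blank_correct(f_measured, sample_mass_ug, f_blank=0.43, blank_mass_ug=4.0):
    """Mass-balance blank correction:
    F_true = (F_meas * m_total - F_blank * m_blank) / (m_total - m_blank),
    where m_total is the total combusted carbon mass including the contaminant."""
    return ((f_measured * sample_mass_ug - f_blank * blank_mass_ug)
            / (sample_mass_ug - blank_mass_ug))

# e.g. a 1 mg carbon sample measured at 0.5000 F14C shifts by less than 0.001 F14C
print(blank_correct(0.5000, 1000.0))  # -> ~0.5003
```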
Towards a seascape typology. I. Zipf versus Pareto laws
NASA Astrophysics Data System (ADS)
Seuront, Laurent; Mitchell, James G.
Two data analysis methods, referred to as the Zipf and Pareto methods, initially introduced in economics and linguistics two centuries ago and subsequently used in a wide range of fields (word frequency in languages and literature, human demographics, finance, city formation, genomics and physics), are described and proposed here as a potential tool to classify space-time patterns in marine ecology. The aim of this paper is, first, to present the theoretical bases of Zipf and Pareto laws, and to demonstrate that they are strictly equivalent. In that way, we provide a one-to-one correspondence between their characteristic exponents and argue that the choice of technique is a matter of convenience. Second, we argue that the appeal of this technique is that it is assumption-free for the distribution of the data and regularity of sampling interval, as well as being extremely easy to implement. Finally, in order to allow marine ecologists to identify and classify any structure in their data sets, we provide a step by step overview of the characteristic shapes expected for Zipf's law for the cases of randomness, power law behavior, power law behavior contaminated by internal and external noise, and competing power laws illustrated on the basis of typical ecological situations such as mixing processes involving non-interacting and interacting species, phytoplankton growth processes and differential grazing by zooplankton.
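As a concrete entry point to the step-by-step classification the authors outline, the sketch below builds the rank-ordered (Zipf) representation of a data series and fits its log-log slope; a straight line indicates power-law behavior, while curvature or flattening points to the noise-contaminated cases catalogued in the paper. The routine is a generic illustration, not the authors' code:

```python
import numpy as np

def zipf_slope(values):
    """Sort values in decreasing order, then fit log(value) against log(rank).
    A straight line indicates power-law (Zipf/Pareto) behavior; the slope is
    the characteristic exponent."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    ranks = np.arange(1, v.size + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(v), 1)
    return slope

rng = np.random.default_rng(0)
print(zipf_slope(rng.pareto(1.5, 10_000) + 1.0))  # roughly -1/1.5 for Pareto-law data
```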
Preparation of nanowire specimens for laser-assisted atom probe tomography
NASA Astrophysics Data System (ADS)
Blumtritt, H.; Isheim, D.; Senz, S.; Seidman, D. N.; Moutanabbir, O.
2014-10-01
The availability of reliable and well-engineered commercial instruments and data analysis software has led to development in recent years of robust and ergonomic atom-probe tomographs. Indeed, atom-probe tomography (APT) is now being applied to a broader range of materials classes that involve highly important scientific and technological problems in materials science and engineering. Dual-beam focused-ion beam microscopy and its application to the fabrication of APT microtip specimens have dramatically improved the ability to probe a variety of systems. However, the sample preparation is still challenging especially for emerging nanomaterials such as epitaxial nanowires which typically grow vertically on a substrate through metal-catalyzed vapor phase epitaxy. The size, morphology, density, and sensitivity to radiation damage are the most influential parameters in the preparation of nanowire specimens for APT. In this paper, we describe a step-by-step process methodology to allow a precisely controlled, damage-free transfer of individual, short silicon nanowires onto atom probe microposts. Starting with a dense array of tiny nanowires and using focused ion beam, we employed a sequence of protective layers and markers to identify the nanowire to be transferred and probed while protecting it against Ga ions during lift-off processing and tip sharpening. Based on this approach, high-quality three-dimensional atom-by-atom maps of single aluminum-catalyzed silicon nanowires are obtained using a highly focused ultraviolet laser-assisted local electrode atom probe tomograph.
Phase holograms in silver halide emulsions without a bleaching step
NASA Astrophysics Data System (ADS)
Belendez, Augusto; Madrigal, Roque F.; Pascual, Inmaculada V.; Fimia, Antonio
2000-03-01
Phase holograms in holographic emulsions are usually obtained by two-bath processes (developing and bleaching). In this work we present a one-step method to obtain phase holograms with silver-halide emulsions, based on varying the conditions of the typical developing processes used for amplitude holograms. For this, we used the well-known chemical developer AAC, composed of ascorbic acid as developing agent and anhydrous sodium carbonate as accelerator. Agfa 8E75 HD and BB-640 plates were used to obtain these phase gratings, whose colors range between yellow and brown. The resulting diffraction efficiency and optical density of the diffraction gratings were studied as a function of the parameters of this developing method. One of the parameters studied is the influence of grain size. In the case of Agfa plates, a diffraction efficiency around 18% with density < 1 has been reached, whilst with the BB-640 emulsion, whose grain is smaller than that of the Agfa, a diffraction efficiency near 30% has been obtained. The resulting gratings were analyzed through X-ray spectroscopy, showing the differences in the structure of the developed silver when amplitude and phase gratings are obtained. The angular response of both phase and amplitude gratings was studied: a minimal transmission appears at the Bragg angle in phase holograms, whilst a maximal value is obtained in amplitude gratings.
25 CFR 15.11 - What are the basic steps of the probate process?
Code of Federal Regulations, 2010 CFR
2010-04-01
The basic steps of the probate process are: (a) We learn...
Ötes, Ozan; Flato, Hendrik; Winderl, Johannes; Hubbuch, Jürgen; Capito, Florian
2017-10-10
The protein A capture step is the main cost driver in downstream processing, with high attrition costs especially when protein A resin is not used to the end of its lifetime. Here we describe a feasibility study transferring a batch downstream process to a hybrid process, aimed at replacing batch protein A capture chromatography with a continuous capture step while leaving the polishing steps unchanged, to minimize the required process adaptations compared to a batch process. 35 g of antibody were purified using the hybrid approach, resulting in comparable product quality and step yield compared to the batch process. Productivity for the protein A step could be increased by up to 420%, reducing buffer amounts by 30-40% and showing robustness for at least 48 h of continuous run time. Additionally, to enable its potential application in a clinical trial manufacturing environment, the cost of goods was compared for the protein A step between the hybrid process and the batch process, showing a 300% cost reduction, depending on processed volumes and batch cycles. Copyright © 2017 Elsevier B.V. All rights reserved.
48 CFR 15.202 - Advisory multi-step process.
Code of Federal Regulations, 2010 CFR
2010-10-01
(a) The agency may publish a presolicitation notice (see 5.204...) ... participate in the acquisition. This process should not be used for multi-step acquisitions where it would...
An adaptive scale factor based MPPT algorithm for changing solar irradiation levels in outer space
NASA Astrophysics Data System (ADS)
Kwan, Trevor Hocksun; Wu, Xiaofeng
2017-03-01
Maximum power point tracking (MPPT) techniques are popularly used for maximizing the output of solar panels by continuously tracking the maximum power point (MPP) of their P-V curves, which depends both on the panel temperature and on the input insolation. Various MPPT algorithms have been studied in the literature, including perturb and observe (P&O), hill climbing, incremental conductance, fuzzy logic control, and neural networks. This paper presents an algorithm which improves the MPP tracking performance by adaptively scaling the DC-DC converter duty cycle. The principle of the proposed algorithm is to detect oscillation by checking the sign (i.e. direction) of the duty cycle perturbation between the current and previous time steps. If the signs differ, an oscillation is clearly present, and the DC-DC converter duty cycle perturbation is subsequently scaled down by a constant factor. By repeating this process, the steady-state oscillations become negligibly small, which allows for a smooth steady-state MPP response. To verify the proposed MPPT algorithm, a simulation involving irradiance levels that are typically encountered in outer space is conducted. Simulation and experimental results prove that the proposed algorithm is fast and stable in comparison not only to its conventional fixed-step counterparts, but also to previous variable-step-size algorithms.
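A minimal sketch of the adaptive scale factor idea, grafted onto a plain perturb-and-observe loop: when the sign of the duty-cycle perturbation flips between consecutive time steps, the perturbation is scaled down by a constant factor so steady-state oscillations shrink. Variable names and the 0.5 shrink factor are illustrative assumptions, not the paper's exact implementation:

```python
def adaptive_pando_step(state, v_pv, i_pv, shrink=0.5, d_min=0.05, d_max=0.95):
    """One MPPT update of a DC-DC converter duty cycle.
    state = {"duty", "step", "dir", "p_prev"} is carried between calls."""
    p = v_pv * i_pv
    # Classic P&O: keep the perturbation direction if power rose, reverse otherwise.
    new_dir = state["dir"] if p >= state["p_prev"] else -state["dir"]
    # A sign change between consecutive steps signals oscillation about the MPP:
    if new_dir != state["dir"]:
        state["step"] *= shrink   # scale the perturbation down by a constant factor
    state["dir"] = new_dir
    state["duty"] = min(d_max, max(d_min, state["duty"] + new_dir * state["step"]))
    state["p_prev"] = p
    return state["duty"]

state = {"duty": 0.5, "step": 0.02, "dir": 1, "p_prev": 0.0}
# duty = adaptive_pando_step(state, v_meas, i_meas)  # called once per control period
```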
Jones, Gavin O.; Yuen, Alexander; Wojtecki, Rudy J.; Hedrick, James L.; García, Jeannette M.
2016-01-01
It is estimated that ∼2.7 million tons poly(carbonate)s (PCs) are produced annually worldwide. In 2008, retailers pulled products from store shelves after reports of bisphenol A (BPA) leaching from baby bottles, reusable drink bottles, and other retail products. Since PCs are not typically recycled, a need for the repurposing of the PC waste has arisen. We report the one-step synthesis of poly(aryl ether sulfone)s (PSUs) from the depolymerization of PCs and in situ polycondensation with bis(aryl fluorides) in the presence of carbonate salts. PSUs are high-performance engineering thermoplastics that are commonly used for reverse osmosis and water purification membranes, medical equipment, as well as high temperature applications. PSUs generated through this cascade approach were isolated in high purity and yield with the expected thermal properties and represent a procedure for direct conversion of one class of polymer to another in a single step. Computational investigations performed with density functional theory predict that the carbonate salt plays two important catalytic roles in this reaction: it decomposes the PCs by nucleophilic attack, and in the subsequent polyether formation process, it promotes the reaction of phenolate dimers formed in situ with the aryl fluorides present. We envision repurposing poly(BPA carbonate) for the production of value-added polymers. PMID:27354514
NASA Astrophysics Data System (ADS)
Ha, Sanghyun; Park, Junshin; You, Donghyun
2018-01-01
The utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solutions of the incompressible Navier-Stokes equations, which are integrated using a semi-implicit fractional-step method. The Alternating Direction Implicit (ADI) and Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method take advantage of multiple tridiagonal matrices whose inversion is known as the major bottleneck for acceleration on a typical multi-core machine. A novel implementation of the semi-implicit fractional-step method designed for GPU acceleration of the incompressible Navier-Stokes equations is presented. Aspects of the programming model of Compute Unified Device Architecture (CUDA) which are critical to the bandwidth-bound nature of the present method are discussed in detail. A data layout for efficient use of CUDA libraries is proposed for acceleration of tridiagonal matrix inversion and fast Fourier transform. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Performance of the present method using CUDA is assessed by comparing the speed of solving three tridiagonal matrices using ADI with the speed of solving one heptadiagonal matrix using a conjugate gradient method. An overall speedup of 20 times is achieved using a Tesla K40 GPU in comparison with a single-core Xeon E5-2660 v3 CPU in simulations of turbulent boundary-layer flow over a flat plate conducted on over 134 million grid points. An enhanced performance of 48 times speedup is reached for the same problem using a Tesla P100 GPU.
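The tridiagonal inversions identified as the bottleneck are classically solved with the Thomas algorithm; a single-system reference version is sketched below for orientation (the GPU implementation instead batches many such systems through CUDA libraries). This is a generic textbook routine, not the authors' code:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): a = sub-diagonal (a[0] unused),
    b = main diagonal, c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```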
NASA Astrophysics Data System (ADS)
Hegde, Ananda; Sharma, Sathyashankara
2018-05-01
Austempered Ductile Iron (ADI) is a revolutionary material offering high strength and hardness combined with optimum ductility and toughness. The discovery of the two-step austempering process has led to a superior combination of all the mechanical properties. However, because of the high strength and hardness of ADI, there is concern regarding its machinability. In the present study, the machinability of ADI produced using conventional and two-step heat treatment processes is assessed using tool life and surface roughness. Speed, feed, and depth of cut are considered as the machining parameters in the dry turning operation. The machinability results, along with the mechanical properties, are compared for ADI produced using both conventional and two-step austempering processes. The results show that the two-step austempering process produced better toughness with good hardness and strength without sacrificing ductility. Addition of 0.64 wt% manganese did not cause any detrimental effect on the machinability of ADI in either the conventional or the two-step process. Marginal improvements in tool life and surface roughness were observed in the two-step process compared with the conventional process.
Using Resin-Based 3D Printing to Build Geometrically Accurate Proxies of Porous Sedimentary Rocks.
Ishutov, Sergey; Hasiuk, Franciszek J; Jobe, Dawn; Agar, Susan
2018-05-01
Three-dimensional (3D) printing is capable of transforming intricate digital models into tangible objects, allowing geoscientists to replicate the geometry of 3D pore networks of sedimentary rocks. We provide a refined method for building scalable pore-network models ("proxies") using stereolithography 3D printing that can be used in repeated flow experiments (e.g., core flooding, permeametry, porosimetry). Typically, this workflow involves two steps, model design and 3D printing. In this study, we explore how the addition of post-processing and validation can reduce uncertainty in the 3D-printed proxy accuracy (difference of proxy geometry from the digital model). Post-processing is a multi-step cleaning of porous proxies involving pressurized ethanol flushing and oven drying. Proxies are validated by: (1) helium porosimetry and (2) digital measurements of porosity from thin-section images of 3D-printed proxies. 3D printer resolution was determined by measuring the smallest open channel in 3D-printed "gap test" wafers. This resolution (400 µm) was insufficient to build porosity of Fontainebleau sandstone (∼13%) from computed tomography data at the sample's natural scale, so proxies were printed at 15-, 23-, and 30-fold magnifications to validate the workflow. Helium porosities of the 3D-printed proxies differed from digital calculations by up to 7% points. Results improved after pressurized flushing with ethanol (e.g., porosity difference reduced to ∼1% point), though uncertainties remain regarding the nature of sub-micron "artifact" pores imparted by the 3D printing process. This study shows the benefits of including post-processing and validation in any workflow to produce porous rock proxies. © 2017, National Ground Water Association.
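The validation step reduces to comparing measured and digitally computed porosity in percentage points. A trivial helper is sketched below; the input values in the example are illustrative choices picked to reproduce the gaps quoted above, not measurements from the paper:

```python
def porosity_gap(helium_phi, digital_phi):
    """Proxy-accuracy metric used in validation: difference between the
    helium-measured and digitally computed porosity fractions, expressed
    in percentage points."""
    return abs(helium_phi - digital_phi) * 100.0

print(porosity_gap(0.20, 0.13))  # 7.0 points, the worst pre-flushing gap cited
print(porosity_gap(0.14, 0.13))  # ~1.0 point, the post-flushing gap cited
```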
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jang, Gyoung Gug; Song, Bo; Li, Liyi
2016-12-14
This paper reports a novel two-step process to fabricate high-performance supercapacitor films that contain microscale domains of nano-interspaced, re-stacked graphene sheets oriented perpendicular to the surface of the current collector substrate, i.e., carbon fiber paper. In the two-step process, we first used ligand molecules to modify the surface of graphene oxide (GO) sheets and manipulate the interspacing between the re-stacked GO sheets. The ligand-modified GOs, i.e., m-GOs, were then reduced to obtain more conductive graphene (m-rGO), where X-ray diffraction measurements indicated well-controlled interlayer spacing between the restacked m-rGO sheets up to 1 nm. The typical lateral dimension of the restacked m-rGO sheets was ~40 µm. Then, an electric field was introduced during the m-rGO slurry deposition process to induce vertical orientation of the m-rGO sheets/stacks in the film deposit. The direct-current electric field induced orientation of the domains of m-rGO stacks along the direction perpendicular to the surface of the deposit film, i.e., the direction of the electric field. The applied electric field also further increased the interlayer spacing, which should enhance the diffusion and accessibility of electrolyte ions. Compared with the traditionally deposited "control" films, the field-processed film deposits containing an oriented structure of graphene sheets/stacks showed up to ~1.6 times higher capacitance (430 F/g at 0.5 A/g) and a ~67% reduction in equivalent series resistance. The approach of using an electric field to tailor the microscopic architecture of graphene-based deposit films is thus effective for fabricating film electrodes for high-performance supercapacitors.
Jang, Gyoung Gug; Song, Bo; Li, Liyi; ...
2016-12-14
This paper reported a novel two-step process to fabricate high-performance supercapacitor films that contain microscale domains of nano-interspaced, re-stacked graphene sheets oriented perpendicular to the surface of the current collector substrate, i.e., carbon fiber paper. In the two-step process, we first used ligand molecules to modify the surface of graphene oxide (GO) sheets and manipulate the interspacing between the re-stacked GO sheets. The ligand-modified GOs, i.e., m-GOs, were then reduced to obtain more conductive graphene (m-rGO), where X-ray diffraction measurements indicated well-controlled interlayer spacing between the restacked m-rGO sheets up to 1 nm. The typical lateral dimension of the restacked m-rGO sheets was ~40 µm. Then, an electric field was introduced during the m-rGO slurry deposition process to induce vertical orientation of the m-rGO sheets/stacks in the film deposit. The direct-current electric field induced orientation of the domains of m-rGO stacks along the direction perpendicular to the surface of the deposited film, i.e., the direction of the electric field. The applied electric field also increased the interlayer spacing further, which should enhance the diffusion and accessibility of electrolyte ions. Compared with the traditionally deposited "control" films, the field-processed films containing an oriented structure of graphene sheets/stacks showed up to ~1.6 times higher capacitance (430 F/g at 0.5 A/g) and a ~67% reduction in equivalent series resistance. Finally, the approach of using an electric field to tailor the microscopic architecture of graphene-based deposit films is effective for fabricating film electrodes for high-performance supercapacitors.
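The headline electrochemical figures can be reproduced from the standard galvanostatic relations. A sketch (the two-electrode ESR convention and the 1 V window are assumptions for illustration, not values from the paper):

```python
def gravimetric_capacitance(i_per_g, dt_s, dv_V):
    """C (F/g) from a galvanostatic discharge: C = I*dt / (m*dV),
    with the current already normalized per gram (A/g)."""
    return i_per_g * dt_s / dv_V

def esr_from_ir_drop(v_drop_V, current_A):
    """Equivalent series resistance from the voltage drop at the start
    of discharge (common two-electrode convention: R = dV / (2I))."""
    return v_drop_V / (2.0 * current_A)

# Consistency check against the reported 430 F/g at 0.5 A/g:
# a 1 V window would then imply a discharge time of C*dV/I = 860 s.
print(gravimetric_capacitance(0.5, 860.0, 1.0))  # 430.0 F/g
```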
The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.
The role of moist processes and the possibility of error cascade from cloud scale processes affecting the intrinsic predictable time scale of a high resolution convection permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth, which may ultimately limit the intrinsic predictability of the TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller scales to the larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in the convection-permitting simulation at 3.3 km resolution than in simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales typically longer than the model time step allow the effects of microphysical tendencies to average out, so convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and the stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection-permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of the TCs, which further supports the need for probabilistic forecasts of these events, even at the mesoscales.
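Twin-perturbation error growth of the kind described is usually quantified as a domain-mean RMSE between the control and perturbed runs at each lead time. A toy sketch with synthetic fields (shapes and the imposed growth curve are illustrative only):

```python
import numpy as np

def error_growth(control, perturbed):
    """Domain-mean RMSE between twin simulations at each output time.
    Arrays are shaped (time, y, x) for a single field, e.g. temperature."""
    diff = perturbed - control
    return np.sqrt((diff ** 2).mean(axis=(1, 2)))

# Toy fields: identical runs plus small initial noise that we let grow
rng = np.random.default_rng(1)
t, ny, nx = 48, 40, 40
control = rng.standard_normal((t, ny, nx))
growth = np.exp(np.linspace(0, 3, t))[:, None, None]  # mimic upscale growth
perturbed = control + 1e-3 * growth * rng.standard_normal((t, ny, nx))
print(error_growth(control, perturbed)[::12])  # RMSE rising with lead time
```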
Walker, Lindsay; Chang, Lin-Ching; Nayak, Amritha; Irfanoglu, M Okan; Botteron, Kelly N; McCracken, James; McKinstry, Robert C; Rivkin, Michael J; Wang, Dah-Jyuu; Rumsey, Judith; Pierpaoli, Carlo
2016-01-01
The NIH MRI Study of normal brain development sought to characterize typical brain development in a population of infants, toddlers, children and adolescents/young adults, covering the socio-economic and ethnic diversity of the population of the United States. The study began in 1999 with data collection commencing in 2001 and concluding in 2007. The study was designed with the final goal of providing a controlled-access database; open to qualified researchers and clinicians, which could serve as a powerful tool for elucidating typical brain development and identifying deviations associated with brain-based disorders and diseases, and as a resource for developing computational methods and image processing tools. This paper focuses on the DTI component of the NIH MRI study of normal brain development. In this work, we describe the DTI data acquisition protocols, data processing steps, quality assessment procedures, and data included in the database, along with database access requirements. For more details, visit http://www.pediatricmri.nih.gov. This longitudinal DTI dataset includes raw and processed diffusion data from 498 low resolution (3 mm) DTI datasets from 274 unique subjects, and 193 high resolution (2.5 mm) DTI datasets from 152 unique subjects. Subjects range in age from 10 days (from date of birth) through 22 years. Additionally, a set of age-specific DTI templates are included. This forms one component of the larger NIH MRI study of normal brain development which also includes T1-, T2-, proton density-weighted, and proton magnetic resonance spectroscopy (MRS) imaging data, and demographic, clinical and behavioral data. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
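A semi-Markov process generalizes a Markov chain by letting the sojourn time in each state follow an arbitrary state-dependent distribution. A minimal Monte Carlo sketch (states, rates, and jump probabilities are hypothetical, not from the NASA-JSC model):

```python
import random

# 4-state reliability model: each visit draws a sojourn time from a
# state-specific distribution, then jumps per the embedded transition matrix.
P = {  # embedded jump probabilities (rows sum to 1)
    "nominal":  {"degraded": 1.0},
    "degraded": {"repair": 0.7, "failed": 0.3},
    "repair":   {"nominal": 1.0},
    "failed":   {},                      # absorbing failure state
}
SOJOURN = {  # mean sojourn (hours) for an exponential draw
    "nominal": 1000.0, "degraded": 100.0, "repair": 24.0,
}

def time_to_failure(seed):
    rng = random.Random(seed)
    state, clock = "nominal", 0.0
    while state != "failed":
        clock += rng.expovariate(1.0 / SOJOURN[state])
        state = rng.choices(list(P[state]), weights=P[state].values())[0]
    return clock

samples = [time_to_failure(s) for s in range(2000)]
print(sum(samples) / len(samples))  # Monte Carlo mean time to failure
```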
Surface conductance of graphene from non-contact resonant cavity.
Obrzut, Jan; Emiroglu, Caglar; Kirillov, Oleg; Yang, Yanfei; Elmquist, Randolph E
2016-06-01
A method is established to reliably determine the surface conductance of single-layer or multi-layer atomically thin nano-carbon graphene structures. The measurements are made in an air-filled standard R100 rectangular waveguide configuration at one of the resonant frequency modes, typically the TE103 mode at 7.4543 GHz. The surface conductance measurement involves monitoring the change in the quality factor of the cavity as the specimen is progressively inserted into the cavity, in quantitative correlation with the specimen surface area. The specimen consists of a nano-carbon layer supported on a low-loss dielectric substrate. The thickness of the conducting nano-carbon layer does not need to be explicitly known, but it is assumed that the lateral dimension is uniform over the specimen area. The non-contact surface conductance measurements are illustrated for a typical graphene grown by a chemical vapor deposition process, and for a high-quality monolayer epitaxial graphene grown on silicon carbide wafers, for which we performed non-gated quantum Hall resistance measurements. The sequence of quantized transverse Hall resistance at the Landau filling factors ν = ±6 and ±2, and the absence of the Hall plateau at ν = 4, indicate that the epitaxially grown graphene is a high-quality monolayer. The resonant microwave cavity measurement is sensitive to the surface and bulk conductivity, and since no additional processing is required, it preserves the integrity of the conductive graphene layer. It allows characterization with high speed, precision and efficiency, compared to transport measurements where sample contacts must be defined and applied in multiple processing steps.
Delivery of high intensity beams with large clad step-index fibers for engine ignition
NASA Astrophysics Data System (ADS)
Joshi, Sachin; Wilvert, Nick; Yalin, Azer P.
2012-09-01
We show, for the first time, that step-index silica fibers with a large clad (400 μm core and 720 μm clad) can be used to transmit nanosecond duration pulses in a way that allows reliable (consistent) spark formation in atmospheric pressure air by the focused output light from the fiber. The high intensity (>100 GW/cm2) of the focused output light is due to the combination of high output power (typical of fibers of this core size) with high output beam quality (better than that typical of fibers of this core size). The high output beam quality, which enables tight focusing, is due to the large clad which suppresses microbending-induced diffusion of modal power to higher order modes owing to the increased rigidity of the core-clad interface. We also show that extending the pulse duration provides a means to increase the delivered pulse energy (>20 mJ delivered for 50 ns pulses) without causing fiber damage. Based on this ability to deliver high energy sparks, we report the first reliable laser ignition of a natural gas engine including startup under typical procedures using silica fiber optics for pulse delivery.
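The quoted >100 GW/cm2 is consistent with simple top-hat focusing arithmetic. A sketch (the spot radius is an assumed value chosen to illustrate the scale, not a measured one):

```python
import math

def focused_intensity(pulse_energy_J, pulse_dur_s, spot_radius_cm):
    """Peak intensity (W/cm^2) of a top-hat focal spot:
    I = E / (tau * pi * r^2)."""
    power_W = pulse_energy_J / pulse_dur_s
    area_cm2 = math.pi * spot_radius_cm ** 2
    return power_W / area_cm2

# Illustrative numbers from the abstract: 20 mJ in 50 ns focused to an
# (assumed) ~11 um spot radius gives ~1e11 W/cm^2, i.e. ~100 GW/cm^2.
print(f"{focused_intensity(20e-3, 50e-9, 11e-4):.2e} W/cm^2")
```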
Bjornson, Kristie F; Belza, Basia; Kartin, Deborah; Logsdon, Rebecca; McLaughlin, John F
2007-01-01
Background and Purpose Assessment of walking activity in youth with cerebral palsy (CP) has traditionally been “capacity-based.” The purpose of this study was to describe the day-to-day ambulatory activity “performance” of youth with CP compared with youth who were developing typically. Subjects Eighty-one youth with CP, aged 10 to 13 years, who were categorized as being in Gross Motor Function Classification System (GMFCS) levels I to III and 30 age-matched youth who were developing typically were recruited. Methods Using a cross-sectional design, participants wore the StepWatch monitor for 7 days while documenting average daily total step counts, percentage of all time active, ratio of medium to low activity levels, and percentage of time at high activity levels. Results The youth with CP demonstrated significantly lower levels of all outcomes than the comparison group. Discussion and Conclusion Daily walking activity and variability decreased as functional walking level (GMFCS level) decreased. Ambulatory activity performance within the context of the daily life for youth with CP appears valid and feasible as an outcome for mobility interventions in CP. PMID:17244693
High throughput nanoimprint lithography for semiconductor memory applications
NASA Astrophysics Data System (ADS)
Ye, Zhengmao; Zhang, Wei; Khusnatdinov, Niyaz; Stachowiak, Tim; Irving, J. W.; Longsine, Whitney; Traub, Matthew; Fletcher, Brian; Liu, Weijun
2017-03-01
Imprint lithography is a promising technology for replication of nano-scale features. For semiconductor device applications, Canon deposits a low viscosity resist on a field by field basis using jetting technology. A patterned mask is lowered into the resist fluid, which then quickly flows into the relief patterns in the mask by capillary action. Following this filling step, the resist is crosslinked under UV radiation, and then the mask is removed, leaving a patterned resist on the substrate. There are two critical components to meeting throughput requirements for imprint lithography. Using a similar approach to what is already done for many deposition and etch processes, imprint stations can be clustered to enhance throughput. The FPA-1200NZ2C is a four station cluster system designed for high volume manufacturing. For a single station, throughput includes overhead, resist dispense, resist fill time (or spread time), exposure and separation. Resist exposure time and mask/wafer separation are well understood processing steps with typical durations on the order of 0.10 to 0.20 seconds. To achieve a total process throughput of 17 wafers per hour (wph) for a single station, it is necessary to complete the fluid fill step in 1.2 seconds. For a throughput of 20 wph, fill time must be reduced to only 1.1 seconds. There are several parameters that can impact resist filling. Key parameters include resist drop volume (smaller is better), system controls (which address drop spreading after jetting), Design for Imprint or DFI (to accelerate drop spreading) and material engineering (to promote wetting between the resist and underlying adhesion layer). In addition, it is mandatory to maintain fast filling, even for edge field imprinting. In this paper, we address the improvements made in all of these parameters to first enable a 1.20 second filling process for a device-like pattern and have demonstrated this capability for both full fields and edge fields. Non-fill defectivity is well under 1.0 defects/cm2 for both field types. Next, by further reducing drop volume and optimizing drop patterns, a fill time of 1.1 seconds was demonstrated.
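The throughput arithmetic works out as a per-field time budget multiplied over the wafer. A sketch with assumed values (fields per wafer, dispense time, and per-wafer overhead are illustrative, not from the paper, so the wph figures it prints are indicative only):

```python
def wafers_per_hour(fields_per_wafer, dispense_s, fill_s, expose_s,
                    separate_s, wafer_overhead_s):
    """Single-station throughput from per-field step times plus
    per-wafer overhead (all inputs in seconds)."""
    per_field = dispense_s + fill_s + expose_s + separate_s
    per_wafer = fields_per_wafer * per_field + wafer_overhead_s
    return 3600.0 / per_wafer

# Hypothetical budget: 84 fields/wafer, 0.15 s expose, 0.15 s separate,
# 0.5 s dispense, 40 s wafer overhead. Under these assumptions a 0.1 s
# fill reduction is worth roughly 0.7 wph.
for fill in (1.2, 1.1):
    print(fill, round(wafers_per_hour(84, 0.5, fill, 0.15, 0.15, 40.0), 1))
```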
The Impact of ARM on Climate Modeling. Chapter 26
NASA Technical Reports Server (NTRS)
Randall, David A.; Del Genio, Anthony D.; Donner, Leo J.; Collins, William D.; Klein, Stephen A.
2016-01-01
Climate models are among humanity's most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability, and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of the Earth down to one hundred kilometers or smaller, and implicitly include the effects of processes on even smaller scales down to a micron or so. The atmospheric component of a climate model can be called an atmospheric general circulation model (AGCM). In an AGCM, calculations are done on a three-dimensional grid, which in some of today's climate models consists of several million grid cells. For each grid cell, about a dozen variables are time-stepped as the model integrates forward from its initial conditions. These so-called prognostic variables have special importance because they are the only things that a model remembers from one time step to the next; everything else is recreated on each time step by starting from the prognostic variables and the boundary conditions. The prognostic variables typically include information about the mass of dry air, the temperature, the wind components, water vapor, various condensed-water species, and at least a few chemical species such as ozone. A good way to understand how climate models work is to consider the lengthy and complex process used to develop one. Let's imagine that a new AGCM is to be created, starting from a blank piece of paper. The model may be intended for a particular class of applications, e.g., high-resolution simulations on time scales of a few decades. Before a single line of code is written, the conceptual foundation of the model must be designed through a creative envisioning that starts from the intended application and is based on current understanding of how the atmosphere works and the inventory of mathematical methods available.
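The prognostic-variable bookkeeping described above can be caricatured in a few lines. A toy sketch (the grid, variable names, and the single relaxation "physics" term are illustrative, not from any real AGCM):

```python
import numpy as np

# The prognostic variables are the model's only memory between steps;
# everything else would be re-diagnosed from them each time step.
state = {
    "T": np.full((4, 8), 300.0),   # temperature per grid cell (K)
    "u": np.zeros((4, 8)),         # zonal wind (m/s)
    "q": np.full((4, 8), 0.008),   # specific humidity (kg/kg)
}

def step(state, dt=600.0):
    """Advance the prognostic variables by one time step (dt seconds)."""
    tend_T = -(state["T"] - 288.0) / 86400.0  # relax toward 288 K over a day
    new = dict(state)
    new["T"] = state["T"] + dt * tend_T
    # ... dynamics and other physics tendencies would be added here ...
    return new

for _ in range(144):               # one simulated day at a 10-minute step
    state = step(state)
print(state["T"].mean())           # ~292.4 K: cooled partway toward 288 K
```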
Comprehensive analysis of statistical and model-based overlay lot disposition methods
NASA Astrophysics Data System (ADS)
Crow, David A.; Flugaur, Ken; Pellegrini, Joseph C.; Joubert, Etienne L.
2001-08-01
Overlay lot disposition algorithms in lithography occupy some of the highest leverage decision points in the microelectronic manufacturing process. In a typical large volume sub-0.18 µm fab the lithography lot disposition decision is made about 500 times per day. Each decision will send a lot of wafers either to the next irreversible process step or back to rework in an attempt to improve unacceptable overlay performance. In the case of rework, the intention is that the reworked lot will represent better yield (and thus more value) than the original lot and that the enhanced lot value will exceed the cost of rework. Given that the estimated cost of reworking a critical-level lot is around $10,000 (based upon the opportunity cost of consuming time on a state-of-the-art DUV scanner), we are faced with the implication that the lithography lot disposition decision process impacts up to $5 million per day in decisions. That means that a 1% error rate in this decision process represents over $18 million per year lost in profit for a representative site. Remarkably, despite this huge leverage, the lithography lot disposition decision algorithm usually receives minimal attention. In many cases, this lack of attention has resulted in the retention of sub-optimal algorithms from earlier process generations and a significant negative impact on the economic output of many high-volume manufacturing sites. An ideal lot-dispositioning algorithm would be one that results in the best economic decision being made every time: lots would only be reworked where the expected value (EV) of the reworked lot minus the expected value of the original lot exceeds the cost of the rework: EV(reworked lot) − EV(original lot) > COST(rework process). Calculating the above expected values in real-time has generally been deemed too complicated and maintenance-intensive to be practical for fab operations, so a simplified rule is typically used.
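The dispositioning rule in the abstract is a one-line comparison once the expected values are estimated. A toy sketch (all dollar figures and yields are hypothetical):

```python
def should_rework(yield_now, yield_after_rework, lot_value_at_full_yield,
                  rework_cost):
    """Dispatch rule from the abstract: rework only when the expected
    value gained exceeds the cost of the rework pass."""
    ev_gain = (yield_after_rework - yield_now) * lot_value_at_full_yield
    return ev_gain > rework_cost

# Hypothetical lot: overlay model predicts 92% -> 97% yield if reworked,
# a lot worth $120k at full yield, and a rework pass costing $10k.
print(should_rework(0.92, 0.97, 120_000, 10_000))  # False: $6k gain < $10k
```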
Localization of congenital tegmen tympani defects.
Tóth, Miklós; Helling, Kai; Baksa, Gábor; Mann, Wolf
2007-12-01
This study sets out to demonstrate the normal developmental steps of the tegmen tympani and thus explains the typical localization of congenital tegmental defects. For this study, 79 macerated and formalin-fixed human temporal bones from the 14th fetal week to adulthood were observed and prepared. Macroscopic and microscopic examinations of the prenatal and postnatal changes of the tegmen tympani during its development were performed. Temporal bones from the 14th fetal week to adulthood underwent descriptive anatomic studies to understand the normal development of the tegmen tympani and to find a possible cause of its congenital defects. The medial part of the tegmen tympani develops from the otic capsule during chondral ossification, thus forming the tegmental process of the petrous part. The lateral part shows membranous ossification. The tegmental process causes a temporary bony dehiscence lateral to the geniculate ganglion between the 23rd and 25th fetal week. Congenital defects develop near the geniculate ganglion and seem to be due to incomplete development of the tegmental process of the otic capsule. Because of that, a congenital lesion of the tegmen tympani can be defined as an inner ear defect.
Martín-Ruiz, María-Luisa; Máximo-Bocanegra, Nuria; Luna-Oliva, Laura
2016-03-26
The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools such as rehabilitation systems based on interactive technologies have appeared for rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, the facial expression through the management of muscles in the face, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games to improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is created both for typically developed children and children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE to monitor children's oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of oral-facial musculature in children with cerebral palsy.
Utilizing Stable Isotopes and Isotopic Anomalies to Study Early Solar System Formation Processes
NASA Technical Reports Server (NTRS)
Simon, Justin
2017-01-01
Chondritic meteorites contain a diversity of particle components, i.e., chondrules and calcium-, aluminum-rich refractory inclusions (CAIs), that have survived since the formation of the Solar System. The chemical and isotopic compositions of these materials provide a record of the conditions present in the protoplanetary disk where they formed and can aid our understanding of the processes and reservoirs in which solids formed in the solar nebula, an important step leading to the accretion of planetesimals. Isotopic anomalies associated with nucleosynthetic processes are observed in these discrete materials, and can be compared to astronomical observations and astrophysical formation models of stars and, more recently, proplyds. The existence and size of these isotopic anomalies are typically thought to reflect a significant state of isotopic heterogeneity in the earliest Solar System, likely left over from molecular cloud heterogeneities on the grain scale, but some could also be due to late stellar injection. The homogenization of these isotopic anomalies towards planetary values can be used to track the efficiency and timescales of disk-wide mixing.
Flight test of a synthetic aperture radar antenna using STEP
NASA Technical Reports Server (NTRS)
Zimcik, D. G.; Vigeron, F. R.; Ahmed, S.
1984-01-01
To establish confidence in its overall performance, credible information on the synthetic aperture radar antenna's mechanical properties in orbit must be obtained. However, the antenna's size, design, and operating environment make it difficult to simulate operating conditions under 1-g Earth conditions. The Space Technology Experiments Platform (STEP) offers a timely opportunity to mechanically qualify and characterize the antenna design in a representative environment. The proposed experimental configuration would employ a half-system of the full-scale RADARSAT antenna, mounted on the STEP platform in the orbiter cargo bay such that it could be deployed and retracted in orbit. The antenna would be subjected to typical environmental exposures while an array of targets and sensors on the antenna support structure and reflecting surface are observed and monitored. In particular, the typical environments would include deployment and retraction, dynamic response to vehicle thruster or base exciter inputs, and thermal soak and transient effects upon entering or exiting Earth eclipse. The proposed experiment would also provide generic information on the properties of large space structures in space and on techniques to obtain the desired information.
Wind energy development: methods for assessing risks to birds and bats pre-construction
Katzner, Todd E.; Bennett, Victoria; Miller, Tricia A.; Duerr, Adam E.; Braham, Melissa A.; Hale, Amanda
2016-01-01
Wind power generation is rapidly expanding. Although wind power is a low-carbon source of energy, it can negatively impact birds and bats, either directly through fatality or indirectly by displacement or habitat loss. Pre-construction risk assessment at wind facilities within the United States is usually required only on public lands. When conducted, it generally involves a 3-tier process, with each step leading to more detailed and rigorous surveys. Preliminary site assessment (U.S. Fish and Wildlife Service, Tier 1) is usually conducted remotely and involves evaluation of existing databases and published materials. If potentially at-risk wildlife are present and the developer wishes to continue the development process, then on-site surveys are conducted (Tier 2) to verify the presence of those species and to assess site-specific features (e.g., topography, land cover) that may influence risk from turbines. The next step in the process (Tier 3) involves quantitative or scientific studies to assess the potential risk of the proposed project to wildlife. Typical Tier-3 research may involve acoustic, aural, observational, radar, capture, tracking, or modeling studies, all designed to understand details of risk to specific species or groups of species at the given site. Our review highlights several features lacking from many risk assessments, particularly the paucity of before-after-control-impact (BACI) studies involving modeling and a lack of understanding of the cumulative effects of wind facilities on wildlife. Both are essential to understand effective designs for pre-construction monitoring, and both would help expand risk assessment beyond eagles.
Norrelgen, Fritjof; Lilja, Anders; Ingvar, Martin; Gisselgård, Jens; Fransson, Peter
2012-01-01
Objective The aims of this study were to develop and assess a method to map language networks in children with two auditory fMRI protocols in combination with a dichotic listening task (DL). The method is intended for pediatric patients prior to epilepsy surgery. To evaluate the potential clinical usefulness of the method we first wanted to assess data from a group of healthy children. Methods In a first step language test materials were developed, intended for subsequent implementation in fMRI protocols. An evaluation of this material was done in 30 children with typical development, 10 from the 1st, 4th and the 7th grade, respectively. The language test material was then adapted and implemented in two fMRI protocols intended to target frontal and posterior language networks. In a second step language lateralization was assessed in 17 typical 10–11 year olds with fMRI and DL. To reach a conclusion about language lateralization, firstly, quantitative analyses of the index data from the two fMRI tasks and the index data from the DL task were done separately. In a second step a set of criteria were applied to these results to reach a conclusion about language lateralization. The steps of these analyses are described in detail. Results The behavioral assessment of the language test material showed that it was well suited for typical children. The results of the language lateralization assessments, based on fMRI data and DL data, showed that for 15 of the 17 subjects (88%) a conclusion could be reached about hemispheric language dominance. In 2 cases (12%) DL provided critical data. Conclusions The employment of DL combined with language mapping using fMRI for assessing hemispheric language dominance is novel and it was deemed valuable since it provided additional information compared to the results gained from each method individually. PMID:23284796
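fMRI lateralization indices of the kind combined here are conventionally computed as a normalized left-right difference over homologous regions. A sketch (the voxel counts and the ±0.2 bilateral cutoff are illustrative assumptions, not the study's criteria):

```python
def laterality_index(left, right):
    """Standard laterality index: LI = (L - R) / (L + R), where L and R
    are activation measures (e.g., suprathreshold voxel counts) in
    homologous left/right regions of interest."""
    return (left - right) / (left + right)

def classify(li, cutoff=0.2):
    """A common (assumed) convention: |LI| <= cutoff means 'bilateral'."""
    if li > cutoff:
        return "left-dominant"
    if li < -cutoff:
        return "right-dominant"
    return "bilateral"

li = laterality_index(left=820, right=310)  # hypothetical voxel counts
print(round(li, 2), classify(li))           # 0.45 left-dominant
```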
Jiang, Canping; Flansburg, Lisa; Ghose, Sanchayita; Jorjorian, Paul; Shukla, Abhinav A
2010-12-15
The concept of design space has been taking root under the quality by design paradigm as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. This paper outlines the development of a design space for a hydrophobic interaction chromatography (HIC) process step. The design space included the impact of raw material lot-to-lot variability and variations in the feed stream from cell culture. A failure modes and effects analysis was employed as the basis for the process characterization exercise. During mapping of the process design space, the multi-dimensional combination of operational variables were studied to quantify the impact on process performance in terms of yield and product quality. Variability in resin hydrophobicity was found to have a significant influence on step yield and high-molecular weight aggregate clearance through the HIC step. A robust operating window was identified for this process step that enabled a higher step yield while ensuring acceptable product quality. © 2010 Wiley Periodicals, Inc.
Simplified signal processing for impedance spectroscopy with spectrally sparse sequences
NASA Astrophysics Data System (ADS)
Annus, P.; Land, R.; Reidla, M.; Ojarand, J.; Mughal, Y.; Min, M.
2013-04-01
The classical method for measurement of the electrical bio-impedance involves excitation with a sinusoidal waveform. Sinusoidal excitation at fixed frequency points enables a wide variety of signal processing options, the most general of them being the Fourier transform. Multiplication with two quadrature waveforms at the desired frequency can easily be accomplished both in the analogue and in the digital domain; even the simplest quadrature square waves can be considered, which reduces the signal processing task in the analogue domain to synchronous switching followed by a low pass filter, and in the digital domain requires only additions. So-called spectrally sparse excitation sequences (SSS), which have recently been introduced into the bio-impedance measurement domain, are a very reasonable choice when simultaneous multifrequency excitation is required. They have many good properties, such as ease of generation and a good crest factor compared to similar multisinusoids. So far, the use of the discrete or fast Fourier transform has typically been considered for the signal processing step. Simplified methods would nevertheless reduce the computational burden, and enable simpler, less costly and less energy-hungry signal processing platforms. The accuracy of the measurement with SSS excitation when using different waveforms for quadrature demodulation will be compared in order to evaluate the feasibility of the simplified signal processing. A sigma-delta modulated sinusoid (binary signal) is considered to be a good alternative for synchronous demodulation.
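Square-wave demodulation as described reduces the multiplication to sign flips. A small numeric sketch (synthetic signal; the 4/π factor rescales the square wave's fundamental so the recovered amplitude comes out in signal units):

```python
import numpy as np

fs, f0, n = 100_000.0, 1_000.0, 100_000   # sample rate, bin freq, samples
t = np.arange(n) / fs

# Synthetic "measured" response at f0 with known amplitude and phase
sig = 2.3 * np.cos(2 * np.pi * f0 * t - 0.4) + 0.1 * np.random.randn(n)

# Square-wave quadrature references: demodulation needs only the sign,
# so the multiply degenerates to add/subtract (cheap in hardware).
ref_i = np.sign(np.cos(2 * np.pi * f0 * t))
ref_q = np.sign(np.sin(2 * np.pi * f0 * t))

i_out = (sig * ref_i).mean()   # in-phase channel after "low-pass" (mean)
q_out = (sig * ref_q).mean()   # quadrature channel

scale = 4 / np.pi              # amplitude of a square wave's fundamental
amp = 2 * np.hypot(i_out, q_out) / scale
phase = np.arctan2(q_out, i_out)
print(amp, phase)  # ~2.3 and ~0.4 (square-wave harmonics add small bias)
```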
RoboPIV: how robotics enable PIV on a large industrial scale
NASA Astrophysics Data System (ADS)
Michaux, F.; Mattern, P.; Kallweit, S.
2018-07-01
This work demonstrates how the interaction between particle image velocimetry (PIV) and robotics can massively increase measurement efficiency. The interdisciplinary approach is shown using the complex example of an automated, large scale, industrial environment: a typical automotive wind tunnel application. Both the high degree of flexibility in choosing the measurement region and the complete automation of stereo PIV measurements are presented. The setup consists of a combination of three robots, individually used as a 6D traversing unit for the laser illumination system as well as for each of the two cameras. Synchronised movements in the same reference frame are realised through a master-slave setup with a single interface to the user. By integrating the interface into the standard wind tunnel management system, a single measurement plane or a predefined sequence of several planes can be requested through a single trigger event, providing the resulting vector fields within minutes. In this paper, a brief overview on the demands of large scale industrial PIV and the existing solutions is given. Afterwards, the concept of RoboPIV is introduced as a new approach. In a first step, the usability of a selection of commercially available robot arms is analysed. The challenges of pose uncertainty and importance of absolute accuracy are demonstrated through comparative measurements, explaining the individual pros and cons of the analysed systems. Subsequently, the advantage of integrating RoboPIV directly into the existing wind tunnel management system is shown on basis of a typical measurement sequence. In a final step, a practical measurement procedure, including post-processing, is given by using real data and results. Ultimately, the benefits of high automation are demonstrated, leading to a drastic reduction in necessary measurement time compared to non-automated systems, thus massively increasing the efficiency of PIV measurements.
Evaluating the benefits of digital pathology implementation: Time savings in laboratory logistics.
Baidoshvili, Alexi; Bucur, Anca; van Leeuwen, Jasper; van der Laak, Jeroen; Kluin, Philip; van Diest, Paul J
2018-06-20
The benefits of digital pathology for workflow improvement and thereby cost savings in pathology, at least partly outweighing investment costs, are increasingly recognized. Successful implementations in a variety of scenarios start to demonstrate cost benefits of digital pathology for both research and routine diagnostics, contributing to a sound business case encouraging further adoption. To further support new adopters, there is still a need for detailed assessment of the impact this technology has on the relevant pathology workflows with emphasis on time saving. To assess the impact of digital pathology adoption on logistic laboratory tasks (i.e. not including pathologists' time for diagnosis making) in LabPON, a large regional pathology laboratory in The Netherlands. To quantify the benefits of digitization we analyzed the differences between the traditional analog and new digital workflows, carried out detailed measurements of all relevant steps in key analog and digital processes, and compared time spent. We modeled and assessed the logistic savings in five workflows: (1) Routine diagnosis, (2) Multi-disciplinary meeting, (3) External revision requests, (4) Extra stainings and (5) External consultation. On average over 19 working hours were saved on a typical day by working digitally, with the highest savings in routine diagnosis and multi-disciplinary meeting workflows. By working digitally, a significant amount of time could be saved in a large regional pathology lab with a typical case mix. We also present the data in each workflow per task and concrete logistic steps to allow extrapolation to the context and case mix of other laboratories. This article is protected by copyright. All rights reserved. This article is protected by copyright. All rights reserved.
Reducing the computational footprint for real-time BCPNN learning
Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian
2015-01-01
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved by first rewriting the model which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More important, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. PMID:25657618
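The analytic event-driven update derived for the exponentially decaying traces can be illustrated in one dimension. A minimal sketch (a single decay stage with an assumed time constant; the full BCPNN rule tracks eight such state variables):

```python
import math

tau = 50.0   # time constant of one low-pass filtering stage (ms), assumed

# Event-driven analytic update: decay across a whole inter-event interval
# in one jump, using the closed form z(t+dt) = z(t) * exp(-dt/tau)
def decay_analytic(z, dt):
    return z * math.exp(-dt / tau)

# Fixed-step explicit Euler needs dt/h small steps of z -= h*z/tau
def decay_euler(z, dt, h=1.0):
    for _ in range(int(dt / h)):
        z -= h * z / tau
    return z

dt = 200.0   # ms between two events: one update vs. 200 Euler steps
print(decay_analytic(1.0, dt), decay_euler(1.0, dt))  # ~0.0183 vs ~0.0176
```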
Mechanisms for regulating step length while running towards and over an obstacle.
Larsen, Roxanne J; Jackson, William H; Schmitt, Daniel
2016-10-01
The ability to run across uneven terrain with continuous stable movement is critical to the safety and efficiency of a runner. Successful step-to-step stabilization while running may be mediated by minor adjustments to a few key parameters (e.g., leg stiffness, step length, foot strike pattern). However, it is not known to what degree runners in relatively natural settings (e.g., trails, paved road, curbs) use the same strategies across multiple steps. This study investigates how three readily measurable running parameters - step length, foot placement, and foot strike pattern - are adjusted in response to encountering a typical urban obstacle - a sidewalk curb. Thirteen subjects were video-recorded as they ran at self-selected slow and fast paces. Runners targeted a specific distance before the curb for foot placement, and lengthened their step over the curb (p<0.0001) regardless of where the step over the curb was initiated. These strategies of adaptive locomotion disrupt step cycles temporarily, and may increase locomotor cost and muscle loading, but in the end assure dynamic stability and minimize the risk of injury over the duration of a run. Copyright © 2016 Elsevier B.V. All rights reserved.
Validation of the 1/12 degrees Arctic Cap Nowcast/Forecast System (ACNFS)
2010-11-04
IBM Power 6 (Davinci) at NAVOCEANO with a 2 hr time step for the ice model and a 30 min time step for the ocean model. All model boundaries are...run using 320 processors on the Navy DSRC IBM Power 6 (Davinci) at NAVOCEANO. A typical one-day hindcast takes approximately 1.0 wall clock hour...meter. As more observations become available, further studies of ice draft will be used as a validation tool. The IABP program archived 102 Argos
The Operator Guide: An Ambient Persuasive Interface in the Factory
NASA Astrophysics Data System (ADS)
Meschtscherjakov, Alexander; Reitberger, Wolfgang; Pöhr, Florian; Tscheligi, Manfred
In this paper we introduce the context of a semiconductor factory as a promising area for the application of innovative interaction approaches. In order to increase efficiency, ambient persuasive interfaces, which influence the operators' behaviour to perform in an optimized way, could constitute a potential strategy. We present insights gained from qualitative studies conducted in a specific semiconductor factory and provide a description of typical work processes and already deployed interfaces in this context. These findings informed the design of a prototype of an ambient persuasive interface within this realm - the "Operator Guide". Its overall aim is to improve work efficiency, while still maintaining a minimal error rate. We provide a detailed description of the Operator Guide along with an outlook of the next steps within a user-centered design approach.
Segmentation of human brain using structural MRI.
Helms, Gunther
2016-04-01
Segmentation of human brain using structural MRI is a key step of processing in imaging neuroscience. The methods have undergone a rapid development in the past two decades and are now widely available. This non-technical review aims at providing an overview and basic understanding of the most common software. Starting with the basis of structural MRI contrast in brain and imaging protocols, the concepts of voxel-based and surface-based segmentation are discussed. Special emphasis is given to the typical contrast features and morphological constraints of cortical and sub-cortical grey matter. In addition to the use for voxel-based morphometry, basic applications in quantitative MRI, cortical thickness estimations, and atrophy measurements as well as assignment of cortical regions and deep brain nuclei are briefly discussed. Finally, some fields for clinical applications are given.
Py4CAtS - Python tools for line-by-line modelling of infrared atmospheric radiative transfer
NASA Astrophysics Data System (ADS)
Schreier, Franz; García, Sebastián Gimeno
2013-05-01
Py4CAtS — Python scripts for Computational ATmospheric Spectroscopy is a Python re-implementation of the Fortran infrared radiative transfer code GARLIC, where compute-intensive code sections utilize the Numeric/Scientific Python modules for highly optimized array-processing. The individual steps of an infrared or microwave radiative transfer computation are implemented in separate scripts to extract lines of relevant molecules in the spectral range of interest, to compute line-by-line cross sections for given pressure(s) and temperature(s), to combine cross sections to absorption coefficients and optical depths, and to integrate along the line-of-sight to transmission and radiance/intensity. The basic design of the package, numerical and computational aspects relevant for optimization, and a sketch of the typical workflow are presented.
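The last two workflow steps, combining absorption coefficients into optical depth and integrating to transmission, reduce to Beer-Lambert arithmetic. A generic sketch (hypothetical line positions, strengths, and path; this illustrates the underlying physics, not the Py4CAtS API):

```python
import numpy as np

# Toy version of the final workflow steps: sum per-line absorption
# coefficients, form optical depth along a homogeneous path, then
# Beer-Lambert transmission T(nu) = exp(-tau(nu)).
nu = np.linspace(780.0, 820.0, 2000)           # wavenumber grid (cm^-1)

def lorentz_abs(nu, nu0, strength, gamma):
    """Pressure-broadened (Lorentzian) absorption coefficient (1/cm)."""
    return strength * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)

# Two hypothetical lines; 1 km (= 1e5 cm) of homogeneous air
k = lorentz_abs(nu, 795.0, 2e-6, 0.07) + lorentz_abs(nu, 808.0, 5e-7, 0.07)
tau = k * 1.0e5                                # optical depth
transmission = np.exp(-tau)
print(transmission.min(), transmission.max())  # deepest line vs. continuum
```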
Development of functional nano-particle layer for highly efficient OLED
NASA Astrophysics Data System (ADS)
Lee, Jae-Hyun; Kim, Min-Hoi; Choi, Haechul; Choi, Yoonseuk
2015-12-01
Organic light emitting diodes (OLEDs) are now widely commercialized due to advantages such as the possibility of making thin or flexible devices. Nevertheless, several issues must still be addressed to obtain high-quality flexible OLEDs, and one of the most important is the light extraction efficiency of the device. OLEDs are known to suffer characteristic light losses such as waveguide loss, plasmon absorption loss, and total internal reflection. In this paper, we demonstrate one-step-processed light-scattering films of aluminum oxide nano-particles in a polystyrene matrix composite to achieve highly efficient OLEDs. The optical characteristics and surface roughness of the light-scattering film were optimized by changing the mixing concentration of Al2O3 nano-particles and investigated with atomic force microscopy and a hazemeter, respectively.
Volgushev, Maxim; Malyshev, Aleksey; Balaban, Pavel; Chistiakova, Marina; Volgushev, Stanislav; Wolf, Fred
2008-04-09
The generation of action potentials (APs) is a key process in the operation of nerve cells and the communication between neurons. Action potentials in mammalian central neurons are characterized by an exceptionally fast onset dynamics, which differs from the typically slow and gradual onset dynamics seen in identified snail neurons. Here we describe a novel method of analysis which provides a quantitative measure of the onset dynamics of action potentials. This method captures the difference between the fast, step-like onset of APs in rat neocortical neurons and the gradual, exponential-like AP onset in identified snail neurons. The quantitative measure of the AP onset dynamics, provided by the method, allows us to perform quantitative analyses of factors influencing the dynamics. PMID:18398478
Organic electronics with polymer dielectrics on plastic substrates fabricated via transfer printing
NASA Astrophysics Data System (ADS)
Hines, Daniel R.
Printing methods are fast becoming important processing techniques for the fabrication of flexible electronics. Some goals for flexible electronics are to produce cheap, lightweight, disposable radio frequency identification (RFID) tags, very large flexible displays that can be produced in a roll-to-roll process and wearable electronics for both the clothing and medical industries. Such applications will require fabrication processes for the assembly of dissimilar materials onto a common substrate in ways that are compatible with organic and polymeric materials as well as traditional solid-state electronic materials. A transfer printing method has been developed with these goals and application in mind. This printing method relies primarily on differential adhesion where no chemical processing is performed on the device substrate. It is compatible with a wide variety of materials with each component printed in exactly the same way, thus avoiding any mixed processing steps on the device substrate. The adhesion requirements of one material printed onto a second are studied by measuring the surface energy of both materials and by surface treatments such as plasma exposure or the application of self-assembled monolayers (SAM). Transfer printing has been developed within the context of fabricating organic electronics onto plastic substrates because these materials introduce unique opportunities associated with processing conditions not typically required for traditional semiconducting materials. Compared to silicon, organic semiconductors are soft materials that require low temperature processing and are extremely sensitive to chemical processing and environmental contamination. The transfer printing process has been developed for the important and commonly used organic semiconducting materials, pentacene (Pn) and poly(3-hexylthiophene) (P3HT). A three-step printing process has been developed by which these materials are printed onto an electrode subassembly consisting of previously printed electrodes separated by a polymer dielectric layer all on a plastic substrate. These bottom contact, flexible organic thin-film transistors (OTFT) have been compared to unprinted (reference) devices consisting of top contact electrodes and a silicon dioxide dielectric layer on a silicon substrate. Printed Pn and P3HT TFTs have been shown to out-perform the reference devices. This enhancement has been attributed to an annealing under pressure of the organic semiconducting material.
Cherry Vogt, Kimberly S
2008-01-01
Many colonial organisms encrust surfaces with feeding and reproductive polyps connected by vascular stolons. Such colonies often show a dichotomy between runner-like forms, with widely spaced polyps and long stolon connections, and sheet-like forms, with closely spaced polyps and short stolon connections. Generative processes, such as rates of polyp initiation relative to rates of stolon elongation, are typically thought to underlie this dichotomy. Regressive processes, such as tissue regression and cell death, may also be relevant. In this context, we have recently characterized the process of stolon regression in a colonial cnidarian, Podocoryna carnea. Stolon regression occurs naturally in these colonies. To characterize this process in detail, high levels of stolon regression were induced in experimental colonies by treatment with reactive oxygen and reactive nitrogen species (ROS and RNS). Either treatment results in stolon regression and is accompanied by high levels of endogenous ROS and RNS as well as morphological indications of cell death in the regressing stolon. The initiating step in regression appears to be a perturbation of normal colony-wide gastrovascular flow. This suggests more general connections between stolon regression and a wide variety of environmental effects. Here we summarize our results and further discuss such connections. PMID:19704785
Mn-doping-induced photocatalytic activity enhancement of ZnO nanorods prepared on glass substrates
NASA Astrophysics Data System (ADS)
Putri, Nur Ajrina; Fauzia, Vivi; Iwan, S.; Roza, Liszulfah; Umar, Akrajas Ali; Budi, Setia
2018-05-01
Mn-doped ZnO nanorods were synthesized on glass substrates via a two-step process of ultrasonic spray pyrolysis and hydrothermal methods with four different Mn-doping concentrations (0, 1, 3, and 7 mol%). Introducing Mn into ZnO is known to enhance photocatalytic activity owing to an increase in defect sites that effectively suppress the recombination of free electrons and holes. In this study, results show that Mn-doping effectively modified the nucleation and crystal growth of ZnO, as evidenced by the increase in the diameter, height, and number of nanorods per unit area, besides slightly reducing the band gap and increasing the oxygen vacancy concentration in the ZnO lattice. This condition successfully multiplied the photocatalytic performance of the ZnO nanorods in the degradation of methylene blue (MB) compared to the undoped-ZnO sample: in a typical run, approximately 77% of the MB was degraded within only 35 min under UV light irradiation.
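If the degradation follows pseudo-first-order kinetics (an assumption; the abstract reports only the endpoint), the reported 77% in 35 min fixes the apparent rate constant:

```python
import math

def first_order_k(fraction_remaining, t_min):
    """Apparent pseudo-first-order rate constant from ln(C0/C) = k*t."""
    return -math.log(fraction_remaining) / t_min

# From the abstract: ~77% of the methylene blue degraded in 35 min,
# i.e. C/C0 = 0.23, giving an apparent k of ~0.042 min^-1.
k = first_order_k(0.23, 35.0)
print(f"k = {k:.3f} 1/min, half-life = {math.log(2) / k:.1f} min")
```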
DNA-Templated Pd Conductive Metallic Nanowires
NASA Astrophysics Data System (ADS)
Nguyen, K.; Monteverde, M.; Lyonnais, S.; Campidelli, S.; Bourgoin, J.-Ph.; Filoramo, A.
2008-10-01
Because of its unique recognition properties, its size and its sub-nanometric resolution, DNA is of particular interest for positioning and organizing nanomaterials. However, in DNA-directed nanoelectronics it can be envisioned to use DNA not only as a positioning scaffold, but also as a support for the conducting element. To ensure this function a metallization process is necessary, and among the various DNA metallization methods the Pd-based ones are of particular interest for carbon nanotube transistor connections. In this field, the major drawback of the existing methods is the fast kinetics of the process, which leads to stochastic growth. Here, we present a novel approach to DNA Pd metallization where the DNA molecule is first deposited on a dry substrate in a typical nanodevice configuration. In our approach the progressive growth of nanowires is achieved by the slow and selective precipitation of PdO, followed by a subsequent reduction step. Thanks to this strategy we fabricated homogeneous, continuous and conductive Pd nanowires of very thin diameter (20-25 nm) on the DNA scaffolds.
Electrodeposition of Zn and Cu-Zn alloy from ZnO/CuO precursors in deep eutectic solvent
NASA Astrophysics Data System (ADS)
Xie, Xueliang; Zou, Xingli; Lu, Xionggang; Lu, Changyuan; Cheng, Hongwei; Xu, Qian; Zhou, Zhongfu
2016-11-01
The electrodeposition of Zn and Cu-Zn alloy has been investigated in choline chloride (ChCl)/urea (1:2 molar ratio) based deep eutectic solvent (DES). Cyclic voltammetry study demonstrates that the reduction of Zn(II) to Zn is a diffusion-controlled, quasi-reversible, one-step, two-electron transfer process. Chronoamperometric investigation indicates that the electrodeposition of Zn on a Cu electrode typically involves three-dimensional instantaneous nucleation with a diffusion-controlled growth process. Micro/nanostructured Zn films can be obtained by controlling the electrodeposition potential and temperature. The electrodeposited Zn crystals preferentially orient parallel to the (101) plane. The Zn films electrodeposited under more positive potentials and low temperatures exhibit improved corrosion resistance in 3 wt% NaCl solution. In addition, Cu-Zn alloy films have also been electrodeposited directly from CuO-ZnO precursors in ChCl/urea-based DES. The XRD analysis indicates that the phase composition of the electrodeposited Cu-Zn alloy depends on the electrodeposition potential.
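The instantaneous-nucleation diagnosis from chronoamperometry is conventionally made by comparing the normalized transient against the Scharifker-Hills theoretical curve. A sketch of that curve (a measured transient, normalized by its peak current Im at time tm, would be overlaid for the comparison):

```python
import numpy as np

def sh_instantaneous(t_over_tm):
    """Scharifker-Hills dimensionless transient for instantaneous 3D
    nucleation with diffusion-controlled growth:
    (I/Im)^2 = 1.9542/(t/tm) * (1 - exp(-1.2564*(t/tm)))^2"""
    x = t_over_tm
    return 1.9542 / x * (1.0 - np.exp(-1.2564 * x)) ** 2

# Theoretical curve on a normalized time axis; it peaks at exactly
# (I/Im)^2 = 1 when t/tm = 1, which anchors the comparison.
x = np.linspace(0.2, 3.0, 15)
print(np.round(sh_instantaneous(x), 3))
```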
DOE Office of Scientific and Technical Information (OSTI.GOV)
Due to the increase in the use of Coordinate Measuring Machines (CMMs) to measure fine details and complex geometries in manufacturing, many programs have been made to compile and analyze the data. These programs typically require extensive setup to determine the expected results in order to not only track the pass/fail of a dimension, but also to use statistical process control (SPC). These extra steps and setup times have been addressed through the CMM Data Analysis Tool, which only requires the output of the CMM to provide both pass/fail analysis on all parts run to the same inspection program as well as graphs which help visualize where the part measures within the allowed tolerances. This provides feedback not only to the customer for approval of a part during development, but also to machining process engineers to identify when any dimension is drifting towards an out-of-tolerance condition during production. This program can handle hundreds of parts with complex dimensions and will provide an analysis within minutes.
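The tool's two outputs, pass/fail and drift detection, can be sketched as follows (the dimension, tolerances, and warning fraction are hypothetical; a production SPC implementation would use proper control-chart rules rather than this crude mean test):

```python
import statistics

def disposition(measured, nominal, tol_lo, tol_hi):
    """Pass/fail for one dimension from the CMM report values."""
    dev = measured - nominal
    return tol_lo <= dev <= tol_hi

def drifting(history, nominal, tol_hi, warn_fraction=0.5):
    """Crude SPC-style flag: mean of recent deviations has consumed
    more than warn_fraction of the upper tolerance band."""
    mean_dev = statistics.mean(m - nominal for m in history)
    return mean_dev > warn_fraction * tol_hi

# Hypothetical bore diameter: nominal 10.000 mm, +/-0.020 mm tolerance
runs = [10.004, 10.008, 10.011, 10.013, 10.016]
print([disposition(m, 10.0, -0.020, 0.020) for m in runs])  # all pass...
print(drifting(runs, 10.0, 0.020))  # ...but the drift flag trips: True
```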
Lithography-free glass surface modification by self-masking during dry etching
NASA Astrophysics Data System (ADS)
Hein, Eric; Fox, Dennis; Fouckhardt, Henning
2011-01-01
Glass surface morphologies with defined shapes and roughness are realized by a two-step lithography-free process: deposition of an ~10-nm-thin, lithographically unstructured metallic layer onto the surface, followed by reactive ion etching in an Ar/CF4 high-density plasma. Because of nucleation and coalescence, the metallic layer is laterally structured during its deposition; its morphology exhibits islands with dimensions of several tens of nanometers. These metal spots cause a locally varying etch velocity in the glass substrate, which results in surface structuring. The glass surface becomes increasingly rough with further etching. The self-masking mechanism results in the formation of surface structures with typical heights and lateral dimensions of several hundred nanometers. Several metals, such as Ag, Al, Au, Cu, In, and Ni, can be employed as the sacrificial layer in this technology. The choice of process parameters allows for a multitude of different glass roughness morphologies with individually defined and dosed optical scattering.
Physicochemical structural changes of cellulosic substrates during enzymatic saccharification
Meng, Xianzhi; Yoo, Chang Geun; Li, Mi; ...
2016-12-30
Enzymatic hydrolysis represents one of the major steps, and barriers, in the commercialization of processes converting cellulosic substrates into biofuels and other value-added products. It is usually achieved by the synergistic action of an enzyme mixture, typically consisting of multiple enzymes such as glucanase, cellobiohydrolase, and β-glucosidase, with different modes of action. Due to innate biomass recalcitrance, enzymatic hydrolysis normally starts with a fast initial rate followed by a rapid decrease in rate toward the end of hydrolysis. With the majority of literature studies focusing on the effect of key substrate characteristics on the initial rate or final yield of enzymatic hydrolysis, information about the physicochemical structural changes of cellulosic substrates during enzymatic hydrolysis is still quite limited. Consequently, what slows down the reaction rate toward the end of hydrolysis is not well understood. This review highlights recent advances in understanding the structural changes of cellulosic substrates during the hydrolysis process, to better understand the fundamental mechanisms of enzymatic hydrolysis.
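As a concrete illustration of the declining-rate behavior described above, one common choice in the hydrolysis literature is Chrastil's diffusion-limited model, C(t) = C_inf * (1 - exp(-k*E0*t))^n, where n < 1 signals strong diffusion hindrance. The Python sketch below fits it to a synthetic time course; the model choice and all numbers are illustrative, not from the review.

    # Illustrative sketch: fit the Chrastil diffusion-limited kinetic model
    # to a hydrolysis time course; captures the fast initial rate and the
    # slowdown toward the end of hydrolysis. All numbers are made up.
    import numpy as np
    from scipy.optimize import curve_fit

    def chrastil(t, c_inf, k, n, e0=10.0):  # e0: assumed enzyme load, FPU/g
        return c_inf * (1.0 - np.exp(-k * e0 * t)) ** n

    t = np.array([2, 4, 8, 12, 24, 48, 72.0])    # h
    c = np.array([8, 14, 22, 27, 35, 41, 43.0])  # g/L glucose (synthetic)
    p, _ = curve_fit(chrastil, t, c, p0=[45.0, 0.01, 0.8])
    print(dict(zip(["C_inf", "k", "n"], p.round(3))))  # n < 1: hindered substrate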
A novel application of artificial neural network for wind speed estimation
NASA Astrophysics Data System (ADS)
Fang, Da; Wang, Jianzhou
2017-05-01
Accurate multi-step wind speed estimation models have increasing significance because of the important technical and economic impacts of wind speed on power grid security and environmental benefits. In this study, combined strategies for wind speed forecasting are proposed based on an intelligent data processing system using artificial neural networks (ANNs). A generalized regression neural network and an Elman neural network are employed to form two hybrid models. The approach uses one ANN to model the samples, achieving data denoising and assimilation, and applies the other to predict wind speed from the pre-processed samples. The proposed method is assessed in terms of the predictive improvements of the hybrid models over a single ANN and a typical forecasting method. To provide sufficient cases for the study, four observation sites with monthly average wind speeds over four given years in Western China were used to test the models. Multiple evaluation methods demonstrate that the proposed approach provides a promising alternative technique for monthly average wind speed estimation.
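As an assumed, minimal interpretation of the two-stage hybrid idea (not the authors' exact pipeline), the Python sketch below implements a generalized regression neural network (equivalently, Nadaraya-Watson kernel regression), uses it once to smooth a synthetic monthly series, and then again to forecast a held-out month from lagged inputs.

    # Two-stage sketch: GRNN smoothing followed by GRNN one-step forecasting.
    import numpy as np

    def grnn(x_train, y_train, x_query, sigma=1.0):
        """GRNN prediction: Gaussian-kernel-weighted average of training targets."""
        d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        return (w @ y_train) / w.sum(axis=1)

    rng = np.random.default_rng(0)
    months = np.arange(48.0)
    speed = 5 + 2 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.5, 48)

    # Stage 1: denoise the series (regress speed on time index).
    smooth = grnn(months[:, None], speed, months[:, None], sigma=1.5)

    # Stage 2: forecast held-out month 47 from the 3 prior smoothed months.
    lags = np.stack([smooth[i:i + 3] for i in range(44)])  # inputs
    target = smooth[3:47]                                  # next-month values
    forecast = grnn(lags, target, smooth[None, 44:47], sigma=0.8)
    print(float(forecast[0]), float(smooth[47]))           # prediction vs. actual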
Glass-based integrated optical splitters: engineering oriented research
NASA Astrophysics Data System (ADS)
Hao, Yinlei; Zheng, Weiwei; Yang, Jianyi; Jiang, Xiaoqing; Wang, Minghua
2010-10-01
The optical splitter is one of the most typical devices in heavy demand for the implementation of Fiber To The Home (FTTH) systems. Due to their compatibility with optical fibers, low propagation loss, flexibility, and, most distinctively, potential cost-effectiveness, glass-based integrated optical splitters made by ion-exchange technology promise to be very attractive for optical communication networks. Aiming at integrated optical splitters for optical communication networks, a glass ion-exchange waveguide process was developed that includes two steps: thermal salt ion exchange and field-assisted ion diffusion. With this process, high-performance optical splitters were fabricated in specially melted glass substrates. The main performance parameters of these splitters, including maximum insertion loss (IL), polarization-dependent loss (PDL), and IL uniformity, all meet the corresponding specifications in the generic requirements for optical branching components (GR-1209-CORE). In this paper, the manufacturing of glass-based integrated optical splitters is demonstrated, after which engineering-oriented research results on these splitters are presented.
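For reference, the figures of merit named above follow directly from per-port power measurements: IL = -10 log10(Pout/Pin) per output port, IL uniformity is the max-min spread across ports, and PDL is the largest per-port IL difference between polarizations. A small Python sketch with invented port powers:

    # Splitter figures of merit from per-port powers (values are invented).
    import math

    def il_db(p_out_mw, p_in_mw):
        """Insertion loss in dB for one output port."""
        return -10.0 * math.log10(p_out_mw / p_in_mw)

    p_in = 1.0                               # mW launched
    ports_te = [0.110, 0.105, 0.112, 0.108]  # 1x4 splitter, TE polarization
    ports_tm = [0.108, 0.104, 0.109, 0.107]  # TM polarization

    il_te = [il_db(p, p_in) for p in ports_te]
    il_tm = [il_db(p, p_in) for p in ports_tm]
    uniformity = max(il_te) - min(il_te)                 # dB
    pdl = max(abs(a - b) for a, b in zip(il_te, il_tm))  # dB
    print(f"max IL {max(il_te):.2f} dB, uniformity {uniformity:.2f} dB, "
          f"PDL {pdl:.2f} dB")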
Rheology of corn stover slurries during fermentation to ethanol
NASA Astrophysics Data System (ADS)
Ghosh, Sanchari; Epps, Brenden; Lynd, Lee
2017-11-01
In typical processes that convert cellulosic biomass into ethanol fuel, solubilization of the biomass is carried out by saccharolytic enzymes; however, these enzymes require an expensive pretreatment step to make the biomass accessible for solubilization (and subsequent fermentation). We have proposed a potentially less expensive approach using the bacterium Clostridium thermocellum, which can initiate fermentation without pretreatment. Moreover, we have proposed a "cotreatment" process, in which fermentation and mechanical milling occur alternately so as to achieve the highest ethanol yield for the least milling energy input. To inform the energetic requirements of cotreatment, we experimentally characterized the rheological properties of corn stover slurries at various stages of fermentation. The results show that a corn stover slurry is a yield-stress fluid with shear-thinning behavior well described by a power-law model. Viscosity decreases dramatically upon fermentation, controlling for variables such as solids concentration and particle size distribution. To the authors' knowledge, this is the first study to characterize the changes in the physical properties of biomass during fermentation by a thermophilic bacterium.
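A yield-stress fluid with power-law shear thinning is conventionally described by the Herschel-Bulkley model, tau = tau_y + K * gamma_dot^n with n < 1; the abstract does not name the exact fit used, so the following Python sketch (with synthetic data) only illustrates how such parameters are extracted.

    # Herschel-Bulkley fit to a synthetic flow curve (all numbers invented).
    import numpy as np
    from scipy.optimize import curve_fit

    def herschel_bulkley(gamma_dot, tau_y, K, n):
        """Shear stress (Pa) vs. shear rate (1/s)."""
        return tau_y + K * gamma_dot ** n

    gamma_dot = np.array([0.1, 0.3, 1, 3, 10, 30, 100.0])       # 1/s
    tau = np.array([12.5, 13.4, 15.1, 18.0, 23.6, 33.0, 52.0])  # Pa (synthetic)
    (tau_y, K, n), _ = curve_fit(herschel_bulkley, gamma_dot, tau,
                                 p0=[10.0, 4.0, 0.5], bounds=(0, np.inf))
    print(f"yield stress {tau_y:.1f} Pa, consistency {K:.1f} Pa.s^n, n = {n:.2f}")
    # Refitting at successive fermentation time points would show tau_y and K
    # dropping as the slurry viscosity falls.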
Electrical Properties of Reactive Liquid Crystal Semiconductors
NASA Astrophysics Data System (ADS)
McCulloch, Iain; Coelle, Michael; Genevicius, Kristijonas; Hamilton, Rick; Heckmeier, Michael; Heeney, Martin; Kreouzis, Theo; Shkunov, Maxim; Zhang, Weimin
2008-01-01
Fabrication of display products by low-cost printing technologies such as ink jet, gravure offset lithography, and flexography requires solution-processable semiconductors for the backplane electronics. The resulting products will typically be of lower performance than polysilicon transistors, but comparable to amorphous silicon. A range of prototypes is under development, including rollable electrophoretic displays, active matrix liquid crystal displays (AMLCDs), and flexible organic light-emitting diode (OLED) displays. Organic semiconductors are required that offer both electrical performance and stability with respect to storage and operation under ambient conditions. This work describes the initial evaluation of reactive mesogen semiconductors, which can polymerise within mesophase temperatures, "freezing in" the order in crosslinked domains. These crosslinked domains offer mechanical stability and are inert to solvent exposure in further processing steps. Reactive mesogens containing conjugated aromatic cores, designed to facilitate charge transport and provide good oxidative stability, were prepared and their liquid-crystalline properties evaluated. Both time-of-flight and field-effect transistor devices were prepared and their electrical characterisation is reported.
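For context, the time-of-flight measurement mentioned above yields the carrier mobility through the standard relation mu = d^2 / (V * t_tr), with d the film thickness, V the applied bias, and t_tr the transit time read off the photocurrent transient. A minimal Python sketch with invented values:

    # Time-of-flight mobility estimate (all values are assumed, for illustration).
    d = 2.0e-6     # film thickness, m
    V = 40.0       # applied bias, V
    t_tr = 2.0e-6  # transit time from the photocurrent kink, s

    mu = d ** 2 / (V * t_tr)               # m^2 / (V s)
    print(f"mu = {mu * 1e4:.2e} cm^2/Vs")  # -> 5.00e-04 cm^2/Vs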
Wang, Tieyu; Zhou, Yunqiao; Bi, Cencen; Lu, Yonglong; He, Guizhen; Giesy, John P
2017-07-01
There is a need to formulate water environment standards (WESs) from the current water quality criteria (WQC) in China. To this end, we briefly summarize typical mechanisms applied in several countries with longer histories of developing WESs and identify three limitations to formulating WESs in China. After analyzing feasibility factors including economic development, scientific support capability, and environmental policies, we conclude that China is not yet ready for a complete change from its current nationwide unified WES system to a local-standard-based system. We therefore propose a framework for the transformation from WQC to WESs in China. The framework consists of three parts: responsibilities, processes, and policies. The responsibilities include research authorization, development of guidelines, and collection of information at both national and local levels; the processes include four steps and an impact-factor system for establishing water quality standards; and the policies include seven specific proposals.
NASA Astrophysics Data System (ADS)
Jarujareet, Ungkarn; Amarit, Rattasart; Sumriddetchkajorn, Sarun
2016-11-01
Recognizing that current microfluidic chip fabrication techniques are time-consuming and labor-intensive, and always leave leftover material after chip fabrication, this work proposes an innovative approach for rapid microfluidic chip production. The key idea relies on combining a widely used inkjet printing method with a heat-based polymer curing technique under electronic-mechanical control, thus eliminating the masks and molds required by typical microfluidic fabrication processes. In addition, as only the appropriate amount of polymer is deposited during printing, far less material is wasted. Our inkjet-based microfluidic printer prints the desired microfluidic chip pattern directly onto a heated glass surface, where the printed polymer is immediately cured. Our proof-of-concept demonstration of widely used single-flow-channel, Y-junction, and T-junction microfluidic chips shows that the whole fabrication process requires only 3 steps and a fabrication time of 6 minutes.
Processing of zero-derived words in English: an fMRI investigation.
Pliatsikas, Christos; Wheeldon, Linda; Lahiri, Aditi; Hansen, Peter C
2014-01-01
Derivational morphological processes allow us to create new words from base forms (e.g., the noun punishment from the verb punish). The number of steps from the basic units to derived words often varies (e.g., nationality
Multilayer Composite Pressure Vessels
NASA Technical Reports Server (NTRS)
DeLay, Tom
2005-01-01
A method has been devised to enable the fabrication of lightweight pressure vessels from multilayer composite materials. This method is related to, but not the same as, the method described in "Making a Metal-Lined Composite-Overwrapped Pressure Vessel" (MFS-31814), NASA Tech Briefs, Vol. 29, No. 3 (March 2005), page 59. The method is flexible in that it poses no major impediment to changes in tank design and is applicable to a wide range of tank sizes. The figure depicts a finished tank fabricated by this method, showing layers added at various stages of the fabrication process. In the first step of the process, a mandrel that defines the size and shape of the interior of the tank is machined from polyurethane foam or another suitable lightweight tooling material. The mandrel is outfitted with metallic end fittings on a shaft. Each end fitting includes an outer flange with a small step to accommodate a thin layer of graphite/epoxy or another suitable composite material. The outer surface of the mandrel (but not the fittings) is covered with a suitable release material. The composite material is filament-wound so as to cover the entire surface of the mandrel from the step on one end fitting to the step on the other, and is then cured in place. The entire workpiece is cut in half in a plane perpendicular to the axis of symmetry at its mid-length point, yielding two composite half shells, each containing half of the foam mandrel. The halves of the mandrel are removed from within the composite shells, and the shells are then reassembled and bonded together with a belly band of cured composite material. The resulting composite shell becomes a mandrel for the subsequent steps of the fabrication process and remains inside the final tank. The outer surface of the composite shell is covered with a layer of material designed to be impermeable by the pressurized fluid to be contained in the tank; a second step on the outer flange of each end fitting accommodates this layer. Depending on the application, this layer could be, for example, a layer of rubber, a polymer film, or an electrodeposited layer of metal. If the fluid to be contained in the tank is a gas, then the best permeation barrier is electrodeposited metal (typically copper or nickel), which can be effective at a thickness of as little as 0.005 in. (≈0.13 mm). The electrodeposited metal becomes molecularly bonded to the second step on each metallic end fitting. The permeation-barrier layer is covered with many layers of filament-wound composite material, which could be the same as, or different from, the composite material of the inner shell. Finally, the filament-wound composite material is cured in an oven.
NASA Astrophysics Data System (ADS)
Wang, Minhuan; Feng, Yulin; Bian, Jiming; Liu, Hongzhu; Shi, Yantao
2018-01-01
Mesoscopic perovskite solar cells (M-PSCs) were synthesized with MAPbI3 perovskite layers as light harvesters, grown by one-step and two-step solution processes, respectively. A comparative study was performed by quantitatively correlating the resulting device performance with the crystalline quality of the perovskite layers. Compared with the one-step counterpart, a pronounced improvement of 56.86% in the steady-state power conversion efficiency (PCE) was achieved with the two-step process, resulting mainly from a significant enhancement in fill factor (FF) from 48% to 77% without sacrificing the open-circuit voltage (Voc) or short-circuit current (Jsc). The enhanced FF was attributed to reduced non-radiative recombination channels, owing to the better crystalline quality and larger grain size of the two-step-processed perovskite layer. Moreover, the superiority of the two-step over the one-step process was demonstrated with rather good reproducibility.
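The fill factor and PCE quoted above follow from the measured J-V curve via FF = P_max / (Voc * Jsc) and PCE = P_max / P_in, with P_in = 100 mW/cm^2 under AM1.5G illumination. The Python sketch below applies these definitions to a toy diode curve; the numbers are illustrative, not the paper's data.

    # FF and PCE from a J-V curve (toy diode-like curve, invented numbers).
    import numpy as np

    v = np.linspace(0.0, 1.05, 200)  # V
    j = 22.0 * (1.0 - np.expm1(v / 0.06) / np.expm1(1.05 / 0.06))  # mA/cm^2

    p = v * j                        # mW/cm^2
    p_max = p.max()
    voc, jsc = 1.05, j[0]            # j = 0 at v = 1.05; jsc at v = 0
    ff = p_max / (voc * jsc)
    pce = p_max / 100.0              # P_in = 100 mW/cm^2
    print(f"FF = {ff:.2f}, PCE = {pce * 100:.1f}%")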
Real-time dynamics of typical and untypical states in nonintegrable systems
NASA Astrophysics Data System (ADS)
Richter, Jonas; Jin, Fengping; De Raedt, Hans; Michielsen, Kristel; Gemmer, Jochen; Steinigeweg, Robin
2018-05-01
Understanding (i) the emergence of diffusion from truly microscopic principles continues to be a major challenge in experimental and theoretical physics. At the same time, isolated quantum many-body systems have experienced an upsurge of interest in recent years. Since in such systems the realization of a proper initial state is the only possibility to induce a nonequilibrium process, understanding (ii) the largely unexplored role of the specific realization is vitally important. Our work reports a substantial step forward and tackles the two issues (i) and (ii) in the context of typicality, entanglement as well as integrability and nonintegrability. Specifically, we consider the spin-1/2 XXZ chain, where integrability can be broken due to an additional next-nearest neighbor interaction, and study the real-time and real-space dynamics of nonequilibrium magnetization profiles for a class of pure states. Summarizing our main results, we show that signatures of diffusion for strong interactions are equally pronounced for the integrable and nonintegrable case. In both cases, we further find a clear difference between the dynamics of states with and without internal randomness. We provide an explanation of this difference by a detailed analysis of the local density of states.
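For readers who want a concrete, if drastically scaled-down, picture of the setup: the Python sketch below builds a spin-1/2 XXZ chain with an integrability-breaking next-nearest-neighbor coupling by exact diagonalization and evolves the local magnetization profile of a domain-wall pure state. The paper's typicality-based pure-state propagation reaches far larger chains; the sizes, couplings, and initial state here are illustrative only.

    # Toy exact-diagonalization sketch: XXZ chain + NNN perturbation,
    # unitary evolution of a domain-wall magnetization profile.
    import numpy as np
    from scipy.linalg import expm

    L, delta, delta2 = 8, 1.0, 0.5  # sites; NN anisotropy; NNN strength
    sx = np.array([[0, 1], [1, 0]]) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    sz = np.array([[1, 0], [0, -1]]) / 2

    def site_op(op, i):
        """Embed a single-site operator at site i of the L-site chain."""
        m = np.eye(1)
        for j in range(L):
            m = np.kron(m, op if j == i else np.eye(2))
        return m

    Sx = [site_op(sx, i) for i in range(L)]
    Sy = [site_op(sy, i) for i in range(L)]
    Sz = [site_op(sz, i) for i in range(L)]

    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for r, J in ((1, 1.0), (2, delta2)):  # NN bonds, then NNN bonds
        for i in range(L - r):
            H += J * (Sx[i] @ Sx[i + r] + Sy[i] @ Sy[i + r]
                      + delta * Sz[i] @ Sz[i + r])

    # Domain-wall pure state: sites 0-3 up (bit 0), sites 4-7 down (bit 1).
    psi = np.zeros(2 ** L, dtype=complex)
    psi[int("00001111", 2)] = 1.0
    U = expm(-1j * H * 1.0)  # one time step, t = 1 (hbar = 1)
    for step in range(5):
        profile = [np.real(psi.conj() @ (Szi @ psi)) for Szi in Sz]
        print([f"{m:+.2f}" for m in profile])  # melting magnetization profile
        psi = U @ psi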