Effects of Vibrations on Metal Forming Process: Analytical Approach and Finite Element Simulations
NASA Astrophysics Data System (ADS)
Armaghan, Khan; Christophe, Giraud-Audine; Gabriel, Abba; Régis, Bigot
2011-01-01
Vibration assisted forming is one of the most recent and beneficial techniques used to improve forming processes. The effects of vibration on metal forming processes can be attributed to two causes. First, the volume effect links the lowering of yield stress to the influence of vibration on dislocation movement. Second, the surface effect explains the lowering of the effective coefficient of friction through periodic reduction of the contact area. This work concerns vibration assisted forming in the viscoplastic domain. The impact of a change in vibration waveform has been analyzed. For this purpose, two analytical models have been developed for two different vibration waveforms (sinusoidal and triangular). These models were developed on the basis of the slice method, which is used to determine the required forming force for the process. The final relationships show that applying a triangular waveform in the forming process is more beneficial than sinusoidal vibrations in terms of reduced forming force. Finite Element Method (FEM) based simulations were performed using Forge2008® and these confirmed the results of the analytical models. The ratio of vibration speed to upper die speed is a critical factor in the reduction of the forming force.
Processing plutonium-contaminated soil on Johnston Atoll
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moroney, K.; Moroney, J. III; Turney, J.
1994-07-01
This article describes a cleanup project to process plutonium- and americium-contaminated soil on Johnston Atoll for volume reduction. Thermo Analytical's (TMA's) segmented gate system (SGS) for this remedial operation has been in successful on-site operation since 1992. Topics covered include the basis for development; a description of the Johnston Atoll site; the significance of results; the benefits of the technology; and applicability to other radiologically contaminated sites. 7 figs., 1 tab.
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method using Bayesian analysis to classify time series data from an agent-based simulation of the international emissions trading market, and compares it with a discrete Fourier transform based analytical method. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express the time series data as distances in a mapping, which are easier to understand and draw inferences from than the raw time series; (2) the methods can analyze uncertain time series data, including stationary and non-stationary processes, using the distances obtained via agent-based simulation; and (3) the Bayesian analytical method can resolve a 1% difference in the agents' emission reduction targets.
NASA Astrophysics Data System (ADS)
Jayamani, E.; Perera, D. S.; Soon, K. H.; Bakri, M. K. B.
2017-04-01
A systematic method of material analysis aimed at improving fuel efficiency through the use of natural fiber reinforced polymer matrix composites in the automobile industry is proposed. Multi-factor decision criteria based on the Analytical Hierarchy Process (AHP) were applied, executed through MATLAB, to achieve improved fuel efficiency through the weight reduction of vehicular components by comparing two engine hood designs. The weight reduction was simulated by using natural fiber polymer composites with thermoplastic polypropylene (PP) as the matrix polymer, benchmarked against a synthetic-based composite component. Results showed that PP with 35% flax fiber loading achieved a 0.4% improvement in fuel efficiency, the highest among the 27 candidate fibers.
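The abstract applies AHP through MATLAB; as an illustration of the core AHP computation only (not the paper's code), here is a minimal Python sketch of Saaty-style priority weighting with a consistency check, using a hypothetical 3x3 criteria matrix:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for three
# illustrative criteria: weight reduction, cost, stiffness.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)             # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # normalized priority weights

n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)            # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
CR = CI / RI                            # consistency ratio; accept if <= 0.1
print(w, CR)
```

The same eigenvector step applies unchanged to larger matrices of candidate materials scored against each criterion.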
On the analysis of the double Hopf bifurcation in machining processes via centre manifold reduction
NASA Astrophysics Data System (ADS)
Molnar, T. G.; Dombovari, Z.; Insperger, T.; Stepan, G.
2017-11-01
The single-degree-of-freedom model of orthogonal cutting is investigated to study machine tool vibrations in the vicinity of a double Hopf bifurcation point. Centre manifold reduction and normal form calculations are performed to investigate the long-term dynamics of the cutting process. The normal form of the four-dimensional centre subsystem is derived analytically, and the possible topologies in the infinite-dimensional phase space of the system are revealed. It is shown that bistable parameter regions exist where unstable periodic and, in certain cases, unstable quasi-periodic motions coexist with the equilibrium. Taking into account the non-smoothness caused by loss of contact between the tool and the workpiece, the boundary of the bistable region is also derived analytically. The results are verified by numerical continuation. The possibility of (transient) chaotic motions in the global non-smooth dynamics is shown.
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analysing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology for error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors across the pre-analytical, analytical and post-analytical phases was analysed. Improvement strategies were proposed in the monthly intradepartmental meetings, and the units with high error rates were placed under control. Fifty-six (52.4%) of the 107 recorded errors occurred in the pre-analytical phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 analytical errors were major, irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided a reduction in error rates mainly in the pre-analytical and analytical phases.
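To make the sigma arithmetic concrete, a minimal Python sketch converting an error count into defects per million opportunities (DPMO) and the conventional short-term sigma level; the 1,000,000-opportunity figure below is hypothetical, not the study's denominator:

```python
from scipy.stats import norm

def sigma_level(defects, opportunities):
    """Convert an error count to DPMO and a short-term sigma level
    (using the conventional 1.5-sigma shift)."""
    dpmo = defects / opportunities * 1e6
    return dpmo, norm.ppf(1 - dpmo / 1e6) + 1.5

# e.g. 107 recorded errors against a hypothetical 1,000,000 opportunities
print(sigma_level(107, 1_000_000))
```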
Risk-based evaluation of commercial motor vehicle roadside violations: process and results
DOT National Transportation Integrated Search
2010-09-01
This report provides an analytic framework for evaluating the Atlanta Congestion Reduction Demonstration (CRD) under the United States Department of Transportation (U.S. DOT) Urban Partnership Agreement (UPA) and CRD Programs. The Atlanta CRD project...
Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A
2007-01-01
The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.
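Generative topographic mapping is not available in common Python libraries; as a rough stand-in for the same screening idea (project high-dimensional patient records to two dimensions and flag points far from the bulk), here is a sketch using PCA instead of GTM on synthetic data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 14))    # stand-in for 14-analyte patient records
X[:5] += 6                         # a few synthetic anomalous records

Z = PCA(n_components=2).fit_transform(X)     # 14-D -> 2-D projection
d = np.linalg.norm(Z - Z.mean(axis=0), axis=1)
flags = np.argsort(d)[-5:]         # records farthest from the bulk of the plot
print(sorted(flags))
```

PCA is a linear substitute for GTM's nonlinear mapping; the screening logic (plot, then inspect outlying records) is the same.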
Marine geodetic control for geoidal profile mapping across the Puerto Rican Trench
NASA Technical Reports Server (NTRS)
Fubara, D. M.; Mourad, A. G.
1975-01-01
A marine geodetic control was established for the northern end of the geoidal profile mapping experiment across the Puerto Rican Trench by determining the three-dimensional geodetic coordinates of the four ocean-bottom mounted acoustic transponders. The data reduction techniques employed and the analytical processes involved are described. Before being applied to the field data, the analytical techniques were tested with simulated data and proved effective in theory as well as in practice.
Zganiacz, Felicity; Wills, Ron B. H.; Mukhopadhyay, Soumi Paul; Arcot, Jayashree; Greenfield, Heather
2017-01-01
The objective of this study was to obtain analytical data on the sodium content of a range of processed foods and compare the levels obtained with their label claims and with published data for the same or equivalent processed foods in the 1980s and 1990s, to investigate the extent of any change in sodium content in relation to reformulation targets. The sodium contents of 130 Australian processed foods were obtained by inductively coupled plasma optical emission spectrometry (ICP-OES) analysis and compared with previously published data. Sodium contents between 1980 and 2013 were compared across all products and by product category. There was a significant overall sodium reduction of 23%, or 181 mg/100 g (p < 0.001; 95% CI (confidence interval), 90 to 272 mg/100 g), in Australian processed foods since 1980, with a 12% (83 mg/100 g) reduction over the last 18 years. The sodium content of convenience foods (p < 0.001; 95% CI, 94 to 291 mg/100 g) and snack foods (p = 0.017; 95% CI, 44 to 398 mg/100 g) had declined significantly since 1980. Meanwhile, the sodium contents of processed meats (p = 0.655; 95% CI, −121 to 190 mg/100 g) and bread and other bakery products (p = 0.115; 95% CI, −22 to 192 mg/100 g) had decreased, though not significantly. Conversely, the sodium content of cheese (p = 0.781; 95% CI, −484 to 369 mg/100 g) had increased, though also not significantly. Of the 130 products analysed, 62% met Australian reformulation targets. The sodium contents of the processed foods and the overall changes in comparison with previous data indicate a decrease over the 33-year period and suggest that the Australian recommended reformulation targets have been effective. Further sodium reduction in processed foods is still required, and continuous monitoring of sodium levels in processed foods is needed. PMID:28505147
Research on the use of space resources
NASA Technical Reports Server (NTRS)
Carroll, W. F. (Editor)
1983-01-01
The second year of a multiyear research program on the processing and use of extraterrestrial resources is covered. The research tasks included: (1) silicate processing, (2) magma electrolysis, (3) vapor phase reduction, and (4) metals separation. Concomitant studies included: (1) energy systems, (2) transportation systems, (3) utilization analysis, and (4) resource exploration missions. Emphasis in fiscal year 1982 was placed on the magma electrolysis and vapor phase reduction processes (both analytical and experimental) for separation of oxygen and metals from lunar regolith. The early experimental work on magma electrolysis resulted in gram quantities of iron (mixed metals) and the identification of significant anode, cathode, and container problems. In the vapor phase reduction tasks a detailed analysis of various process concepts led to the selection of two specific processes designated as "Vapor Separation" and "Selective Ionization." Experimental work was deferred to fiscal year 1983. In the Silicate Processing task a thermophysical model of the casting process was developed and used to study the effect of variations in material properties on the cooling behavior of lunar basalt.
Risk analysis by FMEA as an element of analytical validation.
van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M
2009-12-05
We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs for authenticity to a Failure Mode and Effects Analysis (FMEA), including technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D) and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated as Risk Priority Numbers (RPNs) = O × D × S. Failure modes with the highest RPN scores were subjected to corrective actions and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.
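A minimal sketch of the RPN ranking described above; the failure modes and scores below are hypothetical, not the study's data:

```python
# Each failure mode is scored 1-10 on occurrence (O), detectability (D),
# and severity (S); RPN = O * D * S as in the abstract.
failure_modes = {
    "wrong sample placed in holder": (4, 6, 8),   # hypothetical scores
    "spectral library outdated":     (2, 7, 9),
    "operator skips blank scan":     (5, 3, 4),
}
rpn = {m: o * d * s for m, (o, d, s) in failure_modes.items()}
for mode, score in sorted(rpn.items(), key=lambda kv: -kv[1]):
    print(f"{score:4d}  {mode}")   # highest-RPN modes get corrective action first
```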
Evaluation of analytical errors in a clinical chemistry laboratory: a 3 year experience.
Sakyi, As; Laing, Ef; Ephraim, Rk; Asibey, Of; Sadique, Ok
2015-01-01
Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. We retrospectively analyzed data on analytical errors observed in our laboratory over a period of 3 years. The data covered errors over the whole testing cycle, including the pre-, intra-, and post-analytical phases, and we discuss strategies pertinent to our setting to minimize their occurrence. We describe the pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January 2010 to December 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). A total of 589,510 tests were performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests decreased significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90) over the years. Errors are embedded within our total testing process, especially the pre-analytical and post-analytical phases. Strategic measures, including quality assessment programs for staff involved in pre-analytical processes, should be intensified.
Zimpl, Milan; Skopalova, Jana; Jirovsky, David; Bartak, Petr; Navratil, Tomas; Sedonikova, Jana; Kotoucek, Milan
2012-01-01
Derivatives of quinoxalin-2-one are interesting compounds with potential pharmacological activity. From this point of view, understanding their electrochemical behavior is of great importance. In the present paper, a mechanism of electrochemical reduction of quinoxalin-2-one derivatives at a dropping mercury electrode is proposed. The pyrazine ring was found to be the main electroactive center, undergoing a pH-dependent two-electron reduction process. Protonation of the nitrogen in position 4 precedes electron acceptance, forming a semiquinone radical intermediate that is relatively stable in acidic solutions. Its further reduction is manifested by a separate current signal. A positive mesomeric effect of the nonprotonized amino group in position 7 of derivative III accelerates the semiquinone reduction, yielding a single current wave. The suggested reaction mechanism was verified by means of direct current polarography, differential pulse, cyclic and elimination voltammetry, and coulometry with subsequent GC/MS analysis. This understanding of the mechanism was applied in the development of an analytical method for the determination of the studied compounds. PMID:22666117
Sampling and sample processing in pesticide residue analysis.
Lehotay, Steven J; Cook, Jo Marie
2015-05-13
Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and provide meaningful results, then all costs, time, and efforts involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.
Guise, Andy; Horyniak, Danielle; Melo, Jason; McNeil, Ryan; Werb, Dan
2017-12-01
Understanding the experience of initiating injection drug use and its social contexts is crucial to inform efforts to prevent transitions into this mode of drug consumption and support harm reduction. We reviewed and synthesized existing qualitative scientific literature systematically to identify the socio-structural contexts for, and experiences of, the initiation of injection drug use. We searched six databases (Medline, Embase, PsychINFO, CINAHL, IBSS and SSCI) systematically, along with a manual search, including key journals and subject experts. Peer-reviewed studies were included if they qualitatively explored experiences of or socio-structural contexts for injection drug use initiation. A thematic synthesis approach was used to identify descriptive and analytical themes throughout studies. From 1731 initial results, 41 studies reporting data from 1996 participants were included. We developed eight descriptive themes and two analytical (higher-order) themes. The first analytical theme focused on injecting initiation resulting from a social process enabled and constrained by socio-structural factors: social networks and individual interactions, socialization into drug-using identities and choices enabled and constrained by social context all combine to produce processes of injection initiation. The second analytical theme addressed pathways that explore varying meanings attached to injection initiation and how they link to social context: seeking pleasure, responses to increasing tolerance to drugs, securing belonging and identity and coping with pain and trauma. Qualitative research shows that injection drug use initiation has varying and distinct meanings for individuals involved and is a dynamic process shaped by social and structural factors. Interventions should therefore respond to the socio-structural influences on injecting drug use initiation by seeking to modify the contexts for initiation, rather than solely prioritizing the reduction of individual harms through behavior change. © 2017 Society for the Study of Addiction.
Van Ham, Rita; Van Vaeck, Luc; Adams, Freddy C; Adriaens, Annemie
2004-05-01
The analytical use of mass spectra from static secondary ion mass spectrometry for the molecular identification of inorganic analytes in real-life surface layers and micro-objects requires empirical insight into the signals to be expected from a given compound. A comprehensive database comprising over 50 salts has been assembled to complement prior data on oxides. The present study allows the systematic trends in the relationship between the detected signals and the molecular composition of the analyte to be delineated. The mass spectra provide diagnostic information by means of atomic ions, structural fragments, molecular ions, and adduct ions of the analyte neutrals. The prediction of mass spectra for a given analyte must account for the charge state of the ions in the salt, the formation of oxide-type neutrals from oxy salts, and the occurrence of oxidation-reduction processes.
Deriving Earth Science Data Analytics Tools/Techniques Requirements
NASA Astrophysics Data System (ADS)
Kempler, S. J.
2015-12-01
Data analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry. In fact, the literature is nearly absent of discussion about Earth science data analytics. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of an increasing number and volume of Earth science data has become more prevalent, ushered in by the plethora of Earth science data sources generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals. These goals are very different from those in business, requiring different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide early analysis of data analytics tools/techniques requirements that would support specific ESDA type goals. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.
Ammar, Hafedh Belhadj; Brahim, Mabrouk Ben; Abdelhédi, Ridha; Samet, Youssef
2016-02-01
The performance of a boron-doped diamond (BDD) electrode for the detection of metronidazole (MTZ), the most important drug of the 5-nitroimidazole group, was demonstrated using cyclic voltammetry (CV) and square wave voltammetry (SWV) techniques. A comparative study of the electrochemical response at BDD, glassy carbon and silver electrodes was carried out. The process is pH-dependent. In neutral and alkaline media, one irreversible reduction peak related to hydroxylamine derivative formation was registered, involving a total of four electrons. In acidic medium, a prepeak appears, probably related to the adsorption affinity of hydroxylamine at the electrode surface. The BDD electrode showed a more sensitive and reproducible analytical response than the other electrodes. The highest reduction peak current was registered at pH 11. Under optimal conditions, a linear analytical curve was obtained for MTZ concentrations in the range of 0.2-4.2 μmol L(-1), with a detection limit of 0.065 μmol L(-1). Copyright © 2015 Elsevier B.V. All rights reserved.
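For illustration of the calibration arithmetic only: a least-squares calibration line with a 3.3·sd/slope detection limit estimate, on invented data (the paper's exact LOD criterion is not stated here):

```python
import numpy as np

# Hypothetical SWV calibration data: concentration (umol/L) vs. peak current (uA)
c = np.array([0.2, 1.0, 2.0, 3.0, 4.2])
i = np.array([0.9, 4.1, 8.2, 12.1, 16.9])

slope, intercept = np.polyfit(c, i, 1)
resid = i - (slope * c + intercept)
s_y = np.sqrt(np.sum(resid**2) / (len(c) - 2))   # residual standard deviation
lod = 3.3 * s_y / slope                           # common 3.3*sd/slope LOD estimate
print(slope, intercept, lod)
```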
Lambertus, Gordon; Shi, Zhenqi; Forbes, Robert; Kramer, Timothy T; Doherty, Steven; Hermiller, James; Scully, Norma; Wong, Sze Wing; LaPack, Mark
2014-01-01
An on-line analytical method based on transmission near-infrared spectroscopy (NIRS) for the quantitative determination of water concentrations (in parts per million) was developed and applied to the manufacture of a pharmaceutical intermediate. Calibration models for water analysis, built at the development site and applied at the manufacturing site, were successfully demonstrated during six manufacturing runs at a 250-gallon scale. The water measurements will be used as a forward-processing control point following distillation of a toluene product solution prior to use in a Grignard reaction. The most significant impact of using this NIRS-based process analytical technology (PAT) to replace off-line measurements is the significant reduction in the risk of operator exposure through the elimination of sampling of a severely lachrymatory and mutagenic compound. The work described in this report illustrates the development effort from proof-of-concept phase to manufacturing implementation.
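The report does not state which regression underlies the calibration models; partial least squares (PLS) is a typical choice for transmission NIR calibration, so here is a hedged sketch on synthetic spectra standing in for the real calibration set:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))     # stand-in NIR spectra: 60 samples x 200 wavelengths
y = X[:, 50] * 40 + rng.normal(scale=2, size=60) + 300   # synthetic water content (ppm)

pls = PLSRegression(n_components=5)
scores = cross_val_score(pls, X, y, cv=5,
                         scoring="neg_root_mean_squared_error")
print(-scores.mean())              # cross-validated prediction error in ppm
```

In practice the number of latent variables would be chosen by cross-validation on real spectra, and the model transferred between sites only after verification, as the report's development-to-manufacturing transfer implies.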
Investigation of the reduction process of dopamine using paired pulse voltammetry
Kim, Do Hyoung; Oh, Yoonbae; Shin, Hojin; Blaha, Charles D.; Bennet, Kevin E.; Lee, Kendall H.; Kim, In Young; Jang, Dong Pyo
2014-01-01
The oxidation of dopamine (DA) around +0.6 V in the anodic sweep and its reduction around −0.1 V in the cathodic sweep, at relatively fast scan rates (300 V/s or greater), have been used to identify DA in fast-scan cyclic voltammetry (FSCV). However, compared to the oxidation peak of DA, the reduction peak has not been fully examined in analytical studies, although it has been used as one of the representative features to identify DA. In this study, the reduction process of DA was investigated using paired pulse voltammetry (PPV), which consists of two identical triangle-shaped waveforms separated by a short interval at the holding potential. In particular, the discrepancies between the magnitudes of the oxidation and reduction peaks of DA were investigated based on three factors: (1) the instant desorption of the DA oxidation product (dopamine-o-quinone: DOQ) after production, (2) the effect of the holding potential on the reduction process, and (3) the rate-limited reduction process of DA. For the first test, the triangle-waveform FSCV experiment was performed on DA with various scan rates (from 400 to 1000 V/s) and durations of the switching potential of the triangle waveform (from 0.0 to 6.0 ms), in order to vary the duration between the applied oxidation potential at +0.6 V and the reduction potential at −0.2 V. As a result, the ratio of the reduction over the oxidation peak current response decreased as the duration became longer. To evaluate the effect of holding potentials on the reduction process, FSCV experiments were conducted with holding potentials from 0.0 V to −0.8 V. We found that more negative holding potentials led to a greater amount of reduction. For evaluation of the rate-limited reduction process of DA, PPV with a 1 Hz repetition rate and various delays (2, 8, 20, 40 and 80 ms) between the paired scans was utilized to determine how much reduction occurred during the holding potential (−0.4 V). These tests showed that relatively large amounts of DOQ are reduced to DA during the holding potential. The rate-limited reduction process was also confirmed by the increase in reduction in a lower-pH environment. In addition to the mechanism of the reduction process of DA, we found that the differences between the responses of the primary and secondary pulses in PPV were mainly dependent on the rate-limited reduction process during the holding potential. In conclusion, the reduction process may be one of the important factors to be considered in the kinetic analysis of DA and other electroactive species in brain tissue and in the design of new types of waveforms in FSCV. PMID:24926227
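A sketch of the PPV excitation described above, with illustrative parameter values (the holding potential, peak potential, scan rate, and gap duration below are assumptions, not the paper's exact protocol):

```python
import numpy as np

def triangle(v_hold, v_peak, scan_rate, fs):
    """One FSCV triangle sweep: v_hold -> v_peak -> v_hold at scan_rate (V/s)."""
    n = int(abs(v_peak - v_hold) / scan_rate * fs)
    up = np.linspace(v_hold, v_peak, n, endpoint=False)
    down = np.linspace(v_peak, v_hold, n, endpoint=False)
    return np.concatenate([up, down])

def paired_pulse(v_hold=-0.4, v_peak=1.0, scan_rate=400.0, gap_s=0.008, fs=1e5):
    """Two identical sweeps separated by gap_s seconds at the holding potential."""
    sweep = triangle(v_hold, v_peak, scan_rate, fs)
    gap = np.full(int(gap_s * fs), v_hold)
    return np.concatenate([sweep, gap, sweep])

wave = paired_pulse()   # voltage samples at fs = 100 kHz
print(wave.size, wave.min(), wave.max())
```

Varying `gap_s` reproduces the delay sweep (2-80 ms) used to probe how much reduction occurs at the holding potential between the paired scans.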
A review of blood sample handling and pre-processing for metabolomics studies.
Hernandes, Vinicius Veri; Barbas, Coral; Dudzik, Danuta
2017-09-01
Metabolomics has been found to be applicable to a wide range of clinical studies, bringing a new era for improving clinical diagnostics, early disease detection, therapy prediction and treatment efficiency monitoring. A major challenge in metabolomics, particularly untargeted studies, is the extremely diverse and complex nature of biological specimens. Despite great advances in the field, there remains a fundamental need to consider pre-analytical variability, which can introduce bias into the subsequent analytical process, decrease the reliability of the results, and confound final research outcomes. Many researchers are mainly focused on the instrumental aspects of the biomarker discovery process, and sample-related variables sometimes seem to be overlooked. To bridge the gap, critical information and standardized protocols regarding experimental design and sample handling and pre-processing are highly desired. Characterization of the range of variation among sample collection methods is necessary to prevent misinterpretation of results and to ensure that observed differences are not due to an experimental bias caused by inconsistencies in sample processing. Herein, a systematic discussion of pre-analytical variables affecting metabolomics studies based on blood-derived samples is performed. Furthermore, we provide a set of recommendations concerning experimental design, collection, pre-processing procedures and storage conditions as a practical review that can guide and serve for the standardization of protocols and reduction of undesirable variation. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Kuan, Da-Han; Wang, I-Shun; Lin, Jiun-Rue; Yang, Chao-Han; Huang, Chi-Hsien; Lin, Yen-Hung; Lin, Chih-Ting; Huang, Nien-Tsu
2016-08-02
The hemoglobin-A1c test, measuring the ratio of glycated hemoglobin (HbA1c) to hemoglobin (Hb) levels, has been a standard assay in diabetes diagnosis that removes the day-to-day glucose level variation. Currently, the HbA1c test is restricted to hospitals and central laboratories due to the laborious, time-consuming whole blood processing and bulky instruments. In this paper, we have developed a microfluidic device integrating dual CMOS polysilicon nanowire sensors (MINS) for on-chip whole blood processing and simultaneous detection of multiple analytes. The micromachined polymethylmethacrylate (PMMA) microfluidic device consisted of a serpentine microchannel with multiple dam structures designed for non-lysed cells or debris trapping, uniform plasma/buffer mixing and dilution. The CMOS-fabricated polysilicon nanowire sensors integrated with the microfluidic device were designed for the simultaneous, label-free electrical detection of multiple analytes. Our study first measured the Hb and HbA1c levels in 11 clinical samples via these nanowire sensors. The results were compared with those of standard Hb and HbA1c measurement methods (Hb: the sodium lauryl sulfate hemoglobin detection method; HbA1c: cation-exchange high-performance liquid chromatography) and showed comparable outcomes. Finally, we successfully demonstrated the efficacy of the MINS device's on-chip whole blood processing followed by simultaneous Hb and HbA1c measurement in a clinical sample. Compared to current Hb and HbA1c sensing instruments, the MINS platform is compact and can simultaneously detect two analytes with only 5 μL of whole blood, which corresponds to a 300-fold blood volume reduction. The total assay time, including the in situ sample processing and analyte detection, was just 30 minutes. Based on its on-chip whole blood processing and simultaneous multiple analyte detection functionalities with a lower sample volume requirement and shorter process time, the MINS device can be effectively applied to real-time diabetes diagnostics and monitoring in point-of-care settings.
Gutknecht, Mandy; Danner, Marion; Schaarschmidt, Marthe-Lisa; Gross, Christian; Augustin, Matthias
2018-02-15
To define treatment benefit, the Patient Benefit Index contains a weighting of patient-relevant treatment goals using the Patient Needs Questionnaire, which includes a 5-point Likert scale ranging from 0 ("not important at all") to 4 ("very important"). These treatment goals have been assigned to five health dimensions. The importance of each dimension can be derived by averaging the importance ratings on the Likert scales of the associated treatment goals. As the use of a Likert scale does not allow for a relative assessment of importance, the objective of this study was to estimate relative importance weights for health dimensions and associated treatment goals in patients with psoriasis by using the analytic hierarchy process, and to compare these weights with the weights resulting from the Patient Needs Questionnaire. Furthermore, patients' judgments on the difficulty of the methods were investigated. Dimensions of the Patient Benefit Index and their treatment goals were mapped into a hierarchy of criteria and sub-criteria to develop the analytic hierarchy process questionnaire. Adult patients with psoriasis starting a new anti-psoriatic therapy in the outpatient clinic of the Institute for Health Services Research in Dermatology and Nursing at the University Medical Center Hamburg (Germany) were recruited and completed both methods (analytic hierarchy process, Patient Needs Questionnaire). Ratings of treatment goals on the Likert scales (Patient Needs Questionnaire) were summarized within each dimension to assess the importance of the respective health dimension/criterion. Following the analytic hierarchy process approach, consistency in judgments was assessed using a standardized measurement (consistency ratio). At the analytic hierarchy process level of criteria, 78 of 140 patients achieved the accepted consistency. Using the analytic hierarchy process, the dimension "improvement of physical functioning" was most important, followed by "improvement of social functioning". In the Patient Needs Questionnaire results, these dimensions were ranked in second and fifth position, whereas "strengthening of confidence in the therapy and in a possible healing" was ranked most important, which was least important in the analytic hierarchy process ranking. In both methods, "improvement of psychological well-being" and "reduction of impairments due to therapy" were equally ranked in positions three and four. At the level of sub-criteria, in contrast, largely similar rankings of treatment goals were observed between the analytic hierarchy process and the Patient Needs Questionnaire. From the patients' point of view, the Likert scales (Patient Needs Questionnaire) were easier to complete than the analytic hierarchy process pairwise comparisons. Patients with psoriasis assign different importance to health dimensions and associated treatment goals. In choosing a method to assess the importance of health dimensions and/or treatment goals, it must be considered that the resulting importance weights may differ depending on the method used. However, in this study, the observed discrepancies in importance weights of the health dimensions were most likely caused by the different methodological approaches: assessing the importance of health dimensions via treatment goals on the one hand (Patient Needs Questionnaire), or assessing health dimensions directly on the other (analytic hierarchy process).
Piva, Elisa; Tosato, Francesca; Plebani, Mario
2015-12-07
Most errors in laboratory medicine occur in the pre-analytical phase of the total testing process. Phlebotomy, a crucial step in the pre-analytical phase influencing laboratory results and patient outcome, calls for quality assurance procedures and automation in order to prevent errors and ensure patient safety. We compared the performance of a new, small automated device designed for use in phlebotomy with complete traceability of the process, the ProTube (Inpeco), with a centralized automated system, BC ROBO. ProTube was used for 15,010 patients undergoing phlebotomy, with 48,776 tubes being labeled. The mean time and standard deviation (SD) for blood sampling was 3:03 (min:sec; SD ± 1:24) when using ProTube, against 5:40 (min:sec; SD ± 1:57) when using BC ROBO. The mean number of patients per hour managed at each phlebotomy point was 16 ± 3 with ProTube, and 10 ± 2 with BC ROBO. No tubes were labeled erroneously or incorrectly, even though process failures occurred in 2.8% of cases when ProTube was used. Thanks to its cutting-edge technology, the ProTube has many advantages over BC ROBO, above all in verifying patient identity and in allowing a reduction in both identification errors and tube mislabeling.
Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.
Ly, Cheng
2015-12-01
Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely play significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are furthermore developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.
Earth Science Data Analytics: Preparing for Extracting Knowledge from Information
NASA Technical Reports Server (NTRS)
Kempler, Steven; Barbieri, Lindsay
2016-01-01
Data analytics is the process of examining large amounts of data of a variety of types to uncover hidden patterns, unknown correlations and other useful information. Data analytics is a broad term that includes data analysis, as well as an understanding of the cognitive processes an analyst uses to understand problems and explore data in meaningful ways. Analytics also includes data extraction, transformation, and reduction, utilizing specific tools, techniques, and methods. Turning to data science, definitions of data science sound very similar to those of data analytics (which leads to much of the confusion between the two). But the skills needed for both (co-analyzing large amounts of heterogeneous data, understanding and utilizing relevant tools and techniques, and subject matter expertise), although similar, serve different purposes. Data analytics takes a practitioner's approach, applying expertise and skills to solve issues and gain subject knowledge. Data science is more theoretical (research in itself) in nature, providing strategic actionable insights and new innovative methodologies. Earth science data analytics (ESDA) is the process of examining, preparing, reducing, and analyzing large amounts of spatial (multi-dimensional), temporal, or spectral data using a variety of data types to uncover patterns, correlations and other information, to better understand our Earth. The large variety of datasets (temporal and spatial differences, data types, formats, etc.) invites the need for data analytics skills that cover the science domain as well as data preparation, reduction, and analysis techniques, from a practitioner's point of view. The application of these skills to ESDA is the focus of this presentation. The Earth Science Information Partners (ESIP) Federation Earth Science Data Analytics (ESDA) Cluster was created in recognition of the practical need to facilitate the co-analysis of large amounts of data and information for Earth science. Thus, from the point of view of advancing science: on the continuum of ever-evolving data management systems, we need to understand and develop ways that allow the variety of data relationships to be examined, and information to be manipulated, such that knowledge can be enhanced, to facilitate science. Recognizing the importance and potential impacts of the unlimited ways to co-analyze heterogeneous datasets, now and especially in the future, one of the objectives of the ESDA cluster is to facilitate the preparation of individuals to understand and apply the needed skills to Earth science data analytics. Pinpointing and communicating the needed skills and expertise is new, and not easy. Information technology is just beginning to provide the tools for advancing the analysis of heterogeneous datasets in a big way, thus providing the opportunity to discover unobvious scientific relationships previously invisible to the scientific eye. And it is not easy: it takes individuals, or teams of individuals, with just the right combination of skills to understand the data and develop the methods to glean knowledge out of data and information. In addition, whereas definitions of data science and big data are (more or less) available (summarized in Reference 5), Earth science data analytics is virtually ignored in the literature, barring a few excellent sources.
Stochastic modelling of the hydrologic operation of rainwater harvesting systems
NASA Astrophysics Data System (ADS)
Guo, Rui; Guo, Yiping
2018-07-01
Rainwater harvesting (RWH) systems are an effective low impact development practice that provides both water supply and runoff reduction benefits. A stochastic modelling approach is proposed in this paper to quantify the water supply reliability and stormwater capture efficiency of RWH systems. The input rainfall series is represented as a marked Poisson process and two typical water use patterns are analytically described. The stochastic mass balance equation is solved analytically, and based on this, explicit expressions relating system performance to system characteristics are derived. The performances of a wide variety of RWH systems located in five representative climatic regions of the United States are examined using the newly derived analytical equations. Close agreements between analytical and continuous simulation results are shown for all the compared cases. In addition, an analytical equation is obtained expressing the required storage size as a function of the desired water supply reliability, average water use rate, as well as rainfall and catchment characteristics. The equations developed herein constitute a convenient and effective tool for sizing RWH systems and evaluating their performances.
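The paper solves the stochastic mass balance analytically; a Monte Carlo cross-check in the same spirit is sketched below, modeling rainfall as a marked Poisson process with exponential interarrival times and event depths (all parameter values are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_rwh(storage_m3, demand_m3_per_day, area_m2, runoff_coeff=0.9,
                 mean_depth_mm=8.0, mean_interarrival_d=3.0, n_events=20000):
    """Monte Carlo estimate of volumetric water-supply reliability for a
    rainwater tank driven by a marked Poisson rainfall process."""
    dt = rng.exponential(mean_interarrival_d, n_events)   # days between events
    depth = rng.exponential(mean_depth_mm, n_events)      # event depth (mm)
    s = storage_m3 / 2                                    # start half full
    supplied = demanded = 0.0
    for t, d in zip(dt, depth):
        use = min(s, demand_m3_per_day * t)   # constant demand between events
        supplied += use
        demanded += demand_m3_per_day * t
        s -= use
        s = min(storage_m3, s + runoff_coeff * area_m2 * d / 1000.0)  # cap at capacity
    return supplied / demanded

print(simulate_rwh(storage_m3=5.0, demand_m3_per_day=0.2, area_m2=100.0))
```

Sweeping `storage_m3` in such a simulation traces the reliability-versus-storage curve that the paper's analytical sizing equation expresses in closed form.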
Analytical and Experimental Investigation of Process Loads on Incremental Severe Plastic Deformation
NASA Astrophysics Data System (ADS)
Okan Görtan, Mehmet
2017-05-01
From the processing point of view, friction is a major problem in severe plastic deformation (SPD) using the equal channel angular pressing (ECAP) process. Incremental ECAP can be used in order to mitigate frictional effects during SPD. A new incremental ECAP process has been proposed recently. This new process, called equal channel angular swaging (ECAS), combines conventional ECAP with the incremental bulk metal forming method of rotary swaging. The ECAS tool system consists of two dies with an angled channel that contains two shear zones. During the ECAS process, two forming tool halves, concentrically arranged around the workpiece, perform high-frequency radial movements with short strokes while samples are pushed through them. The oscillation direction nearly coincides with the shearing direction in the workpiece. The most important advantages in comparison to conventional ECAP are a significant reduction of the forces in the material feeding direction and the potential for extension to continuous processing. In the current study, the mechanics of the ECAS process is investigated using a slip-line field approach. An analytical model is developed to predict process loads. The proposed model is validated using experiments and FE simulations.
Augmented reality enabling intelligence exploitation at the edge
NASA Astrophysics Data System (ADS)
Kase, Sue E.; Roy, Heather; Bowman, Elizabeth K.; Patton, Debra
2015-05-01
Today's Warfighters need to make quick decisions while interacting in densely populated environments comprised of friendly, hostile, and neutral host nation locals. However, there is a gap in the real-time processing of big data streams for edge intelligence. We introduce a big data processing pipeline called ARTEA that ingests, monitors, and performs a variety of analytics including noise reduction, pattern identification, and trend and event detection in the context of an area of operations (AOR). Results of the analytics are presented to the Soldier via an augmented reality (AR) device Google Glass (Glass). Non-intrusive AR devices such as Glass can visually communicate contextually relevant alerts to the Soldier based on the current mission objectives, time, location, and observed or sensed activities. This real-time processing and AR presentation approach to knowledge discovery flattens the intelligence hierarchy enabling the edge Soldier to act as a vital and active participant in the analysis process. We report preliminary observations testing ARTEA and Glass in a document exploitation and person of interest scenario simulating edge Soldier participation in the intelligence process in disconnected deployment conditions.
Das, Saptarshi; Pan, Indranil; Das, Shantanu; Gupta, Amitava
2012-03-01
A genetic algorithm (GA) has been used in this study for a new approach to suboptimal model reduction in the Nyquist plane and optimal time-domain tuning of proportional-integral-derivative (PID) and fractional-order (FO) PI(λ)D(μ) controllers. Simulation studies show that the new Nyquist-based model reduction technique outperforms the conventional H(2)-norm-based reduced parameter modeling technique. With the tuned controller parameters and the reduced-order model parameter dataset, optimum tuning rules have been developed on a test-bench of higher-order processes via genetic programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time-domain integral performance index described by a weighted sum of error index and controller effort. From the reported Pareto optimal front of the GP-based optimal rule extraction technique, a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP-based tuning rules has been compared with the original GA-based control performance for the PID and PI(λ)D(μ) controllers, handling four different classes of representative higher-order processes. These rules are very useful for process control engineers, as they inherit the power of the GA-based tuning methodology but can be easily calculated without the need to run the computationally intensive GA every time. Three-dimensional plots of the required variation in PID/fractional-order PID (FOPID) controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP-based tuning rules has also been shown with credible simulation examples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
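As a hedged illustration of evolutionary time-domain tuning (using SciPy's differential evolution as a stand-in for the paper's GA, and a simple second-order process rather than the paper's test-bench of higher-order processes):

```python
import numpy as np
from scipy.optimize import differential_evolution

def cost(params, dt=0.01, T=20.0):
    # Weighted sum of squared error and control effort for a unit step
    # response of a simple second-order process y'' + 2y' + y = u,
    # integrated with explicit Euler.
    kp, ki, kd = params
    y = yd = integ = 0.0
    e_prev = 1.0
    J = 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        ydd = u - 2.0 * yd - y   # process dynamics
        yd += ydd * dt
        y += yd * dt
        J += (e * e + 1e-4 * u * u) * dt   # weighted ISE plus control effort
    return J

res = differential_evolution(cost, bounds=[(0, 20), (0, 10), (0, 5)],
                             seed=1, maxiter=40)
print(res.x, res.fun)   # tuned (kp, ki, kd) and achieved cost
```

Repeating such an optimization over a family of reduced process models, then regressing the resulting gains against the model parameters, is the essence of the GP-based rule extraction the abstract describes.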
Evaluation of space shuttle main engine fluid dynamic frequency response characteristics
NASA Technical Reports Server (NTRS)
Gardner, T. G.
1980-01-01
In order to determine the POGO stability characteristics of the space shuttle main engine (SSME) liquid oxygen (LOX) system, the fluid dynamic frequency response functions between elements in the SSME LOX system were evaluated, both analytically and experimentally. For the experimental data evaluation, a software package was written for the Hewlett-Packard 5451C Fourier analyzer. The POGO analysis software is documented and consists of five separate segments. Each segment is stored on the 5451C disc as an individual program and performs its own unique function. The package includes two separate data reduction methods, signal calibration, coherence- or pulser-signal-based frequency response function blanking, and automatic plotting features. The 5451C allows variable parameter transfer from program to program. This feature is used to advantage and requires only minimal user interaction during the data reduction process. Experimental results are included and compared with the analytical predictions in order to adjust the general model and arrive at a realistic simulation of the POGO characteristics.
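A minimal sketch of experimental frequency response function estimation with coherence-based blanking, in the spirit of the software described (synthetic signals; the H1 estimator and the 0.9 coherence threshold are standard choices, not necessarily the program's exact method):

```python
import numpy as np
from scipy.signal import csd, welch, coherence, lfilter

fs = 2048.0
rng = np.random.default_rng(0)
x = rng.normal(size=60 * int(fs))              # excitation (e.g. pulser signal)
y = lfilter([0.05], [1.0, -1.8, 0.81], x) \
    + 0.01 * rng.normal(size=x.size)           # response of a stand-in structure

f, Pxy = csd(x, y, fs=fs, nperseg=4096)        # cross-spectral density
_, Pxx = welch(x, fs=fs, nperseg=4096)         # input auto-spectral density
H1 = Pxy / Pxx                                 # H1 frequency response estimate
_, gamma2 = coherence(x, y, fs=fs, nperseg=4096)
H1[gamma2 < 0.9] = np.nan                      # coherence-based blanking
```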
Brienza, Monica; Chiron, Serge
2017-06-01
An efficient chiral analytical method based on liquid chromatography-high resolution mass spectrometry has been validated for the determination of climbazole (CBZ) enantiomers in wastewater and sludge, with quantification limits below 1 ng/L and 2 ng/g, respectively. On the basis of this newly developed analytical method, the stereochemistry of CBZ was investigated over time in biotic and sterile sludge batch experiments under anoxic dark and light conditions, and during biological wastewater treatment by subsurface flow constructed wetlands. Stereoselective degradation of CBZ was observed exclusively under biotic conditions, confirming that enantiomeric fraction variations are specific to biodegradation processes. Abiotic CBZ enantiomerization was insignificant at circumneutral pH, and CBZ was always biotransformed into CBZ-alcohol owing to the specific and enantioselective reduction of the ketone function of CBZ to a secondary alcohol function. This transformation was almost quantitative, and biodegradation gave a good first-order kinetic fit for both enantiomers. The possibility of applying the Rayleigh equation to enantioselective CBZ biodegradation processes was investigated. The results of enantiomeric enrichment allowed a quantitative assessment of in situ biodegradation processes, owing to a good fit (R2 > 0.96) of the anoxic/anaerobic CBZ biodegradation to the Rayleigh dependency in all the biotic microcosms; the approach was also applied in subsurface flow constructed wetlands. This work extended the concept of applying the Rayleigh equation for quantitative biodegradation assessment of organic contaminants to enantioselective processes operating under anoxic/anaerobic conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
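For orientation, one commonly used linearized form of the Rayleigh dependency applied to enantiomers (a general textbook form, not necessarily the exact formulation used in the paper) is:

```latex
\ln\!\left(\frac{ER_t}{ER_0}\right) = (\alpha - 1)\,\ln\!\left(\frac{C_t}{C_0}\right)
```

where ER_t and ER_0 are the current and initial concentration ratios of the two enantiomers, C_t/C_0 is the residual fraction of total compound, and alpha is the ratio of the first-order rate constants of the two enantiomers; a good linear fit of ln(ER_t/ER_0) against ln(C_t/C_0), here R2 > 0.96, is what supports the quantitative biodegradation assessment.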
A FMEA clinical laboratory case study: how to make problems and improvements measurable.
Capunzo, Mario; Cavallo, Pierpaolo; Boccia, Giovanni; Brunetti, Luigi; Pizzuti, Sante
2004-01-01
The authors have experimented with the application of the Failure Mode and Effect Analysis (FMEA) technique in a clinical laboratory. The FMEA technique allows one to: a) evaluate and measure the hazards of a process malfunction, b) decide where to execute improvement actions, and c) measure the outcome of those actions. A small sample of analytes was studied: the causes of possible malfunctions of the analytical process were determined, and the risk probability index (RPI) was calculated, with a value between 1 and 1,000. Improvement actions were implemented only for cases with RPI > 400, yielding reductions in RPI values of 25% to 70% with a cost increase of < 1%. The FMEA technique can be applied to the processes of a clinical laboratory, even one of small dimensions, and offers a high potential for improvement. Nevertheless, such activity needs thorough planning because it is complex, even if the laboratory already operates an ISO 9000 Quality Management System.
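The RPI arithmetic above is the standard FMEA product of three ratings; a minimal sketch, assuming the conventional 1-10 scale per factor (the abstract only states the 1-1,000 overall range):

```python
def risk_probability_index(severity, occurrence, detectability):
    """RPI as the product of three ratings; with each factor rated 1-10,
    the index spans the 1-1,000 range quoted in the abstract."""
    for rating in (severity, occurrence, detectability):
        if not 1 <= rating <= 10:
            raise ValueError("each rating must lie in 1..10")
    return severity * occurrence * detectability

before = risk_probability_index(8, 7, 9)   # 504 > 400 -> triggers an improvement action
after = risk_probability_index(8, 4, 6)    # 192 after the action
print(before, after, f"{(before - after) / before:.0%} reduction")   # ~62%, inside 25-70%
```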
High Pressure Low NOx Emissions Research: Recent Progress at NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Chi-Ming, Lee; Tacina, Kathleen M.; Wey, Changlie
2007-01-01
In collaboration with U.S. aircraft engine companies, NASA Glenn Research Center has contributed to the advancement of low emissions combustion systems. For the High Speed Research Program (HSR), a 90% reduction in nitrogen oxides (NOx) emissions (relative to the then-current state of the art) has been demonstrated in sector rig testing at General Electric Aircraft Engines (GEAE). For the Advanced Subsonic Technology Program (AST), a 50% reduction in NOx emissions relative to the 1996 International Civil Aviation Organization (ICAO) standards has been demonstrated in sector rigs at both GEAE and Pratt & Whitney (P&W). During the Ultra Efficient Engine Technology Program (UEET), a 70% reduction in NOx emissions, relative to the 1996 ICAO standards, was achieved in sector rig testing at Glenn in the world-class Advanced Subsonic Combustion Rig (ASCR) and at contractor facilities. Low NOx combustor development continues under the Fundamental Aeronautics Program. To achieve these reductions, experimental and analytical research has been conducted to advance the understanding of emissions formation in combustion processes. Lean direct injection (LDI) concept development uses advanced laser-based non-intrusive diagnostics and analytical work to complement the emissions measurements and to provide guidance for concept improvement. This paper describes emissions results from flametube tests of a 9-injection-point LDI fuel/air mixer tested at inlet pressures up to 5500 kPa. Sample results from CFD and laser diagnostics are also discussed.
NASA Glenn High Pressure Low NOx Emissions Research
NASA Technical Reports Server (NTRS)
Tacina, Kathleen M.; Wey, Changlie
2008-01-01
In collaboration with U.S. aircraft engine companies, NASA Glenn Research Center has contributed to the advancement of low emissions combustion systems. For the High Speed Research Program (HSR), a 90% reduction in nitrogen oxides (NOx) emissions (relative to the then-current state of the art) has been demonstrated in sector rig testing at General Electric Aircraft Engines (GEAE). For the Advanced Subsonic Technology Program (AST), a 50% reduction in NOx emissions relative to the 1996 International Civil Aviation Organization (ICAO) standards has been demonstrated in sector rigs at both GEAE and Pratt & Whitney (P&W). During the Ultra Efficient Engine Technology Program (UEET), a 70% reduction in NOx emissions, relative to the 1996 ICAO standards, was achieved in sector rig testing at Glenn in the world-class Advanced Subsonic Combustion Rig (ASCR) and at contractor facilities. Low NOx combustor development continues under the Fundamental Aeronautics Program. To achieve these reductions, experimental and analytical research has been conducted to advance the understanding of emissions formation in combustion processes. Lean direct injection (LDI) concept development uses advanced laser-based non-intrusive diagnostics and analytical work to complement the emissions measurements and to provide guidance for concept improvement. This paper describes emissions results from flametube tests of a 9-injection-point LDI fuel/air mixer tested at inlet pressures up to 5500 kPa. Sample results from CFD and laser diagnostics are also discussed.
NASA Technical Reports Server (NTRS)
Christoffersen, R.; Dukes, C.; Keller, L. P.; Baragiola, R.
2012-01-01
Solar wind ions are capable of altering the surface chemistry of the lunar regolith by a number of mechanisms including preferential sputtering, radiation-enhanced diffusion and sputter erosion of space weathered surfaces containing pre-existing compositional profiles. We have previously reported in-situ ion irradiation experiments supported by X-ray photoelectron spectroscopy (XPS) and analytical TEM that show how solar ions potentially drive Fe and Ti reduction at the monolayer scale as well as the 10-100 nm depth scale in lunar soils [1]. Here we report experimental data on the effect of ion irradiation on the major element surface composition in a mature mare soil.
Shifting from Stewardship to Analytics of Massive Science Data
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Doyle, R.; Law, E.; Hughes, S.; Huang, T.; Mahabal, A.
2015-12-01
Currently, the analysis of large data collections is executed through traditional computational and data analysis approaches, which require users to bring data to their desktops and perform local data analysis. Data collection, archiving and analysis from future remote sensing missions, be it from earth science satellites, planetary robotic missions, or massive radio observatories, may not scale as more capable instruments stress existing architectural approaches and systems due to more continuous data streams, data from multiple observational platforms, and measurements and models from different agencies. A new paradigm is needed in order to increase the productivity and effectiveness of scientific data analysis. This paradigm must recognize that architectural choices, data processing, management, analysis, etc. are interrelated, and must be carefully coordinated in any system that aims to allow efficient, interactive scientific exploration and discovery to exploit massive data collections. Future observational systems, including satellite and airborne experiments, and research in climate modeling will significantly increase the size of the data, requiring new methodological approaches towards data analytics where users can more effectively interact with the data and apply automated mechanisms for data reduction and fusion across these massive data repositories. This presentation will discuss architecture, use cases, and approaches for developing a big data analytics strategy across multiple science disciplines.
Rabiei, Arash; Sharifinik, Milad; Niazi, Ali; Hashemi, Abdolnabi; Ayatollahi, Shahab
2013-07-01
Microbial enhanced oil recovery (MEOR) refers to the process of using bacterial activities to recover more oil from oil reservoirs, mainly through interfacial tension reduction and wettability alteration mechanisms. Investigating the impact of these two mechanisms on enhanced oil recovery during the MEOR process is the main objective of this work. Different analytical methods such as oil spreading and surface activity measurements were utilized to screen the biosurfactant-producing bacteria isolated from the brine of a specific oil reservoir located in the southwest of Iran. The isolates, identified by 16S rDNA and biochemical analysis as Enterobacter cloacae (Persian Type Culture Collection (PTCC) 1798) and Enterobacter hormaechei (PTCC 1799), produce 1.53 g/l of biosurfactant. The produced biosurfactant substantially reduced the surface tension of the growth medium from 72 to 31 mN/m and the interfacial tension between oil and brine from 29 to 3.2 mN/m. A novel set of core flooding tests, including in situ and ex situ scenarios, was designed to explore the potential of the isolated consortium as an agent for the MEOR process. In addition, the individual effects of wettability alteration and IFT reduction on oil recovery efficiency in this process were investigated. The results show that wettability alteration of the reservoir rock toward a neutrally wet condition, in the course of the adsorption of bacterial cells and biofilm formation, is the dominant mechanism improving oil recovery efficiency.
Modelling the aggregation process of cellular slime mold by the chemical attraction.
Atangana, Abdon; Vermeulen, P D
2014-01-01
We apply a relatively new analytical technique, the homotopy decomposition method (HDM), to solve a system of nonlinear partial differential equations arising in an attractor one-dimensional Keller-Segel dynamics system. Numerical solutions are given, and some properties show biologically realistic dependence on the parameter values. The reliability of HDM and the reduction in computations give HDM a wider applicability.
Cognitive strategies in the mental rotation task revealed by EEG spectral power.
Gardony, Aaron L; Eddy, Marianna D; Brunyé, Tad T; Taylor, Holly A
2017-11-01
The classic mental rotation task (MRT; Shepard & Metzler, 1971) is commonly thought to measure mental rotation, a cognitive process involving covert simulation of motor rotation. Yet much research suggests that the MRT recruits both motor simulation and other analytic cognitive strategies that depend on visuospatial representation and visual working memory (WM). In the present study, we investigated cognitive strategies in the MRT using time-frequency analysis of EEG and independent component analysis. We scrutinized sensorimotor mu (µ) power reduction, associated with motor simulation, parietal alpha (pα) power reduction, associated with visuospatial representation, and frontal midline theta (fmθ) power enhancement, associated with WM maintenance and manipulation. µ power increased concomitant with increasing task difficulty, suggesting reduced use of motor simulation, while pα decreased and fmθ power increased, suggesting heightened use of visuospatial representation processing and WM, respectively. These findings suggest that MRT performance involves flexibly trading off between cognitive strategies, namely a motor simulation-based mental rotation strategy and WM-intensive analytic strategies based on task difficulty. Flexible cognitive strategy use may be a domain-general cognitive principle that underlies aptitude and spatial intelligence in a variety of cognitive domains. We close with discussion of the present study's implications as well as future directions. Published by Elsevier Inc.
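A minimal sketch of how band-limited spectral power of the kind analyzed above (frontal-midline theta, parietal alpha, sensorimotor mu) can be estimated from a single EEG channel; the Welch estimator, sampling rate, and synthetic signal are illustrative stand-ins for the study's time-frequency/ICA pipeline.

```python
import numpy as np
from scipy import signal
from scipy.integrate import trapezoid

def band_power(eeg, fs, lo, hi):
    """Integrated Welch PSD over [lo, hi] Hz for one channel."""
    f, psd = signal.welch(eeg, fs=fs, nperseg=int(2 * fs))
    mask = (f >= lo) & (f <= hi)
    return trapezoid(psd[mask], f[mask])

fs = 256.0
rng = np.random.default_rng(2)
eeg = rng.standard_normal(int(10 * fs))     # 10 s of synthetic single-channel "EEG"
fm_theta = band_power(eeg, fs, 4, 7)        # frontal-midline theta band
p_alpha = band_power(eeg, fs, 8, 13)        # parietal alpha band
mu_power = band_power(eeg, fs, 8, 13)       # mu: same band, taken at sensorimotor sites
print(fm_theta, p_alpha, mu_power)
```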
Ito, Toshihiro; Kato, Tsuyoshi; Hasegawa, Makoto; Katayama, Hiroyuki; Ishii, Satoshi; Okabe, Satoshi; Sano, Daisuke
2016-12-01
The virus reduction efficiency of each unit process is commonly determined based on the ratio of virus concentration in influent to that in effluent of a unit, but the virus concentration in wastewater has often fallen below the analytical quantification limit, which does not allow us to calculate the concentration ratio at each sampling event. In this study, left-censored datasets of norovirus (genogroups I and II) and adenovirus were used to calculate the virus reduction efficiency in unit processes of secondary biological treatment and chlorine disinfection. Virus concentrations in influent, effluent from the secondary treatment, and chlorine-disinfected effluent of four municipal wastewater treatment plants were analyzed by a quantitative polymerase chain reaction (PCR) approach, and the probabilistic distributions of log reduction (LR) were estimated by a Bayesian estimation algorithm. The mean values of LR in the secondary treatment units ranged from 0.9 to 2.2, whereas those in the free chlorine disinfection units ranged from -0.1 to 0.5. The LR value in the secondary treatment was virus type and unit process dependent, which raises the importance of accumulating virus LR values applicable to the multiple-barrier system, a global concept of microbial risk management in wastewater reclamation and reuse.
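To illustrate how left-censored concentration data can still yield a log-reduction estimate, the sketch below fits a normal distribution to log10 concentrations by maximum likelihood, letting below-LOQ samples contribute through the CDF. This is a simple frequentist stand-in for the paper's Bayesian algorithm, and all concentrations are synthetic.

```python
import numpy as np
from scipy import optimize, stats

def censored_normal_mle(obs, loq, censored):
    """MLE of (mu, sigma) for log10 concentrations; censored observations
    contribute P(X < LOQ) rather than a density term."""
    def nll(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        ll = stats.norm.logpdf(obs[~censored], mu, sigma).sum()
        ll += stats.norm.logcdf(loq, mu, sigma) * censored.sum()
        return -ll
    res = optimize.minimize(nll, x0=[np.mean(obs[~censored]), 0.0])
    return res.x[0], np.exp(res.x[1])

rng = np.random.default_rng(3)
influent = rng.normal(5.0, 0.5, 24)          # log10 copies/L (synthetic)
effluent = rng.normal(3.5, 0.7, 24)
loq = 3.2
mu_in, _ = censored_normal_mle(influent, loq, np.zeros(24, dtype=bool))
mu_out, _ = censored_normal_mle(effluent, loq, effluent < loq)
print(f"mean log reduction ~ {mu_in - mu_out:.2f}")
```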
Sarkozi, Laszlo; Simson, Elkin; Ramanathan, Lakshmi
2003-03-01
Thirty-six years of data and history of laboratory practice at our institution have enabled us to follow the effects of analytical automation, and more recently pre-analytical and post-analytical automation, on productivity, cost reduction and enhanced quality of service. In 1998, we began the operation of a pre- and post-analytical automation system (robotics), together with an advanced laboratory information system, to process specimens prior to analysis and deliver them to various automated analytical instruments, specimen outlet racks and finally to refrigerated stockyards. After 3 years of continuous operation, we compared the chemistry part of the system with the prior 33 years and quantitated the financial impact of the various stages of automation. Between 1965 and 2000, the Consumer Price Index increased by a factor of 5.5 in the United States. During the same 36 years, at our institution's Chemistry Department the productivity (indicated as the number of reported test results/employee/year) increased from 10,600 to 104,558 (9.3-fold). When expressed in constant 1965 dollars, the total cost per test decreased from 0.79 dollars to 0.15 dollars. Turnaround time for availability of results on patient units decreased to the extent that Stat specimens requiring a turnaround time of <1 h do not need to be separately prepared or prioritized on the system. Our experience shows that the introduction of a robotics system for perianalytical automation has brought a large improvement in productivity together with decreased operational cost. It enabled us to significantly increase our workload while reducing personnel. In addition, stats are handled easily and there are benefits such as safer working conditions and improved sample identification, which are difficult to quantify at this stage.
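The constant-dollar comparison above works as follows (figures from the abstract; the nominal 2000 cost is back-calculated and therefore approximate):

```python
CPI_FACTOR = 5.5                 # 1965 -> 2000 price-level factor (from the abstract)
cost_1965 = 0.79                 # $ per test in 1965
cost_2000_const = 0.15           # $ per test in 2000, constant 1965 dollars

cost_2000_nominal = cost_2000_const * CPI_FACTOR     # ~0.83 nominal 2000 dollars
real_reduction = 1.0 - cost_2000_const / cost_1965   # ~81% real cost reduction
print(f"nominal 2000 cost ~ ${cost_2000_nominal:.2f}/test; "
      f"real cost down {real_reduction:.0%}")
```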
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.
Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.
Cryogenic homogenization and sampling of heterogeneous multi-phase feedstock
Doyle, Glenn Michael; Ideker, Virgene Linda; Siegwarth, James David
2002-01-01
An apparatus and process for producing a homogeneous analytical sample from a heterogeneous feedstock by: providing the mixed feedstock; reducing the temperature of the feedstock to a temperature below a critical temperature; reducing the size of the feedstock components; blending the reduced-size feedstock to form a homogeneous mixture; and obtaining a representative sample of the homogeneous mixture. The size reduction and blending steps are performed at temperatures below the critical temperature in order to retain organic compounds in the form of solvents, oils, or liquids that may be adsorbed onto or absorbed into the solid components of the mixture, while also improving the efficiency of the size reduction. Preferably, the critical temperature is less than 77 K (-196 °C). Further, with the process of this invention the representative sample may be maintained below the critical temperature until being analyzed.
Trojanowicz, Marek; Bobrowski, Krzysztof; Szostek, Bogdan; Bojanowska-Czajka, Anna; Szreder, Tomasz; Bartoszewicz, Iwona; Kulisa, Krzysztof
2018-01-15
The monitoring of perfluorinated compounds (PFCs) in Advanced Oxidation/Reduction Processes (AO/RPs), for the evaluation of the yield and mechanisms of their decomposition, is often a more difficult task than their determination in environmental, biological or food samples with complex matrices. This is mostly due to the formation of hundreds, or even thousands, of both intermediate and final products. The considered AO/RPs, involving free radical reactions, include photolytic and photocatalytic processes, Fenton reactions, sonolysis, ozonation, application of ionizing radiation and several wet oxidation processes. The main attention is paid to the PFCs most commonly occurring in the environment, namely PFOA and PFOS. The most powerful and widely exploited method for this purpose is without a doubt LC/MS/MS, which allows the identification and trace quantitation of all species, with detectability and resolution power depending on the particular instrumental configuration. GC/MS is often employed for the monitoring of volatile fluorocarbons, confirming the formation of radicals in the processes of C‒C and C‒S bond cleavage. For the direct monitoring of radicals participating in the reactions of PFC decomposition, molecular spectroscopy is employed, especially electron paramagnetic resonance (EPR). UV/Vis spectrophotometry as a detection method is of special importance in the evaluation of the kinetics of radical reactions with the use of pulse radiolysis methods. Ion chromatography is most commonly employed for the determination of the yield of PFC mineralization, along with potentiometry with ion-selective electrodes and the measurement of general parameters such as Total Organic Carbon and Total Organic Fluoride. The presented review is based on about 100 original papers published in both analytical and environmental journals. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Tarasov, V. N.; Boyarkina, I. V.
2017-06-01
Analytical methods for calculating the dynamic processes of the working equipment of self-propelled boom hydraulic machines are preferable to numerical methods. An analytical method is proposed for studying the dynamic processes of the boom machine working equipment by means of differential equations of acceleration and braking of the working equipment. The real control law of the hydraulic distributor electric spool is considered, comprising a linear law for spool activation and a stepped law for spool deactivation. Dependences of the dynamic processes of the working equipment on reduced mass, hydraulic power cylinder stiffness, viscous drag coefficient, piston acceleration, pressure in the hydraulic cylinders, and inertia force are obtained. As a result of the research, definite recommendations are given for reducing the dynamic loads that appear during control of the working equipment. The nature and rate of variation of the piston speed and acceleration depend on the law of port opening and closing by the hydraulic distributor electric spool. Dynamic loads in the working equipment are decreased by a smooth linear activation of the hydraulic distributor electric spool.
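A minimal sketch of the kind of reduced model described above: a single-degree-of-freedom mass-spring-damper driven by a cylinder force ramped according to a linear spool-activation law. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduced single-DOF model: m*x'' + c*x' + k*x = F(t), with the cylinder
# force ramped linearly during spool activation. Parameter values are
# illustrative only.
m, c, k = 500.0, 2.0e3, 4.0e5        # reduced mass, viscous drag, stiffness
F_max, t_ramp = 2.0e4, 0.3           # peak force, spool opening time (s)

def force(t):
    return F_max * min(t / t_ramp, 1.0)    # linear activation law

def rhs(t, y):
    x, v = y
    return [v, (force(t) - c * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0], max_step=1e-3)
accel = [(force(t) - c * v - k * x) / m for t, (x, v) in zip(sol.t, sol.y.T)]
print(f"peak piston acceleration: {max(np.abs(accel)):.1f} m/s^2")
```

A stepped deactivation law can be compared against this ramp by swapping `force` for a step function; the sharper the force transient, the larger the oscillatory peak in `accel`, which is the effect the paper attributes to the spool control law.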
Employment of adaptive learning techniques for the discrimination of acoustic emissions
NASA Astrophysics Data System (ADS)
Erkes, J. W.; McDonald, J. F.; Scarton, H. A.; Tam, K. C.; Kraft, R. P.
1983-11-01
The following aspects of this study on the discrimination of acoustic emissions (AE) were examined: (1) The analytical development and assessment of digital signal processing techniques for AE signal dereverberation, noise reduction, and source characterization; (2) The modeling and verification of some aspects of key selected techniques through a computer-based simulation; and (3) The study of signal propagation physics and their effect on received signal characteristics for relevant physical situations.
Brower, Kevin P; Ryakala, Venkat K; Bird, Ryan; Godawat, Rahul; Riske, Frank J; Konstantinov, Konstantin; Warikoo, Veena; Gamble, Jean
2014-01-01
Downstream sample purification for quality attribute analysis is a significant bottleneck in process development for non-antibody biologics. Multi-step chromatography process train purifications are typically required prior to many critical analytical tests. This prerequisite leads to limited throughput, long lead times to obtain purified product, and significant resource requirements. In this work, immunoaffinity purification technology has been leveraged to achieve single-step affinity purification of two different enzyme biotherapeutics (Fabrazyme® [agalsidase beta] and Enzyme 2) with polyclonal and monoclonal antibodies, respectively, as ligands. Target molecules were rapidly isolated from cell culture harvest in sufficient purity to enable analysis of critical quality attributes (CQAs). Most importantly, this is the first study that demonstrates the application of predictive analytics techniques to predict critical quality attributes of a commercial biologic. The data obtained using the affinity columns were used to generate appropriate models to predict quality attributes that would be obtained after traditional multi-step purification trains. These models empower process development decision-making with drug substance-equivalent product quality information without generation of actual drug substance. Optimization was performed to ensure maximum target recovery and minimal target protein degradation. The methodologies developed for Fabrazyme were successfully reapplied for Enzyme 2, indicating platform opportunities. The impact of the technology is significant, including reductions in time and personnel requirements, rapid product purification, and substantially increased throughput. Applications are discussed, including upstream and downstream process development support to achieve the principles of Quality by Design (QbD) as well as integration with bioprocesses as a process analytical technology (PAT). © 2014 American Institute of Chemical Engineers.
Concurrence of big data analytics and healthcare: A systematic review.
Mehta, Nishita; Pandit, Anil
2018-06-01
The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges in its adoption in healthcare. It also intends to identify strategies to overcome the challenges. A systematic search of the articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. The articles on Big Data analytics in healthcare published in English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique for healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds its application in clinical decision support, optimization of clinical operations and reduction of the cost of care; (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. This review study reveals that there is a paucity of information on evidence of real-world use of Big Data analytics in healthcare. This is because the usability studies have taken only a qualitative approach, which describes potential benefits but does not include quantitative study. Also, the majority of the studies were from developed countries, which brings out the need to promote research on Healthcare Big Data analytics in developing countries. Copyright © 2018 Elsevier B.V. All rights reserved.
Bashkireva, A S
2010-01-01
This article presents a comparative analysis of the aging of the population in the context of the demographic transition. The values of the basic medico-demographic indices of population aging for Russia and developed countries were identified. The results of United Nations forecasts and probabilistic prognoses of the size and age-gender structure of the Russian population were analyzed. The demographically troubled state of Russia was convincingly shown. Special attention was given to the demographic and occupational risks of a reduction in the working-age population and of an increase in the demographic load on the labor force. The need was demonstrated for further studies on the use of geroprotectors and contemporary gerontechnologies as means and methods for preventing premature reduction of work ability, retarding the aging processes of the worker's organism, decreasing mortality and increasing professional longevity.
NASA Astrophysics Data System (ADS)
Jivkov, Venelin S.; Zahariev, Evtim V.
2016-12-01
The paper presents a geometrical approach to the dynamics simulation of a rigid and flexible system comprising a high-speed rotating machine with eccentricity and considerable inertia and mass, mounted on a tall, flexible vertical pillar. The stiffness and damping of the column, as well as of the rotor bearings and the shaft, are taken into account. Non-stationary vibrations and transitional processes are analyzed. The principal frequency and mode shape of the flexible column are used for analytical reduction of its mass, stiffness and damping properties. The rotor and the foundation are modelled as rigid bodies, while the flexibility of the bearings is estimated from experiments and the requirements of the manufacturer. The transition effects resulting from limited power are analyzed by asymptotic methods of averaging. Analytical expressions for the amplitudes and unstable vibrations through resonance are derived by a quasi-static approach with increasing and decreasing exciting frequency. The analytical functions make it possible to analyze the influence of the design parameters in many structural applications such as wind power generators, gas turbines, turbo-generators, etc. A numerical procedure is applied to verify the effectiveness and precision of the simulation process. Nonlinear and transitional effects are analyzed and compared with the analytical results. External excitations, such as wave propagation and earthquakes, are discussed. Finite elements in relative and absolute coordinates are applied to model the flexible column and the high-speed rotating machine. Generalized Newton-Euler dynamics equations are used to derive the precise equations of motion. Examples of simulation of the system vibrations and non-stationary behaviour are presented.
A variational method for analyzing limit cycle oscillations in stochastic hybrid systems
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; MacLaurin, James
2018-06-01
Many systems in biology can be modeled through ordinary differential equations that are piecewise continuous and switch between different states according to a Markov jump process; such a model is known as a stochastic hybrid system or piecewise deterministic Markov process (PDMP). In the fast switching limit, the dynamics converges to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the inverse switching rate ɛ⁻¹. That is, we show that for a constant C, the probability that the expected time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp(-Ca/ɛ).
An analytical model to predict and minimize the residual stress of laser cladding process
NASA Astrophysics Data System (ADS)
Tamanna, N.; Crouch, R.; Kabir, I. R.; Naher, S.
2018-02-01
Laser cladding is one of the advanced thermal techniques used to repair or modify the surface properties of high-value components such as tools, military and aerospace parts. Unfortunately, tensile residual stresses are generated in the thermally treated area in this process. This work investigates the key factors behind the formation of tensile residual stress and how to minimize it in the clad when dissimilar substrate and clad materials are used. To predict the tensile residual stress, a one-dimensional analytical model has been adopted. Four cladding materials (Al2O3, TiC, TiO2, ZrO2) on an H13 tool steel substrate and a range of substrate preheating temperatures, from 300 to 1200 K, have been investigated. Thermal strain and Young's modulus are found to be the key factors in the formation of tensile residual stresses. Additionally, preheating the substrate immediately before laser cladding is found to reduce the residual stress.
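As a rough illustration of why thermal strain, stiffness, and preheating dominate, the sketch below evaluates the generic biaxial thermal-mismatch estimate σ ≈ E·Δα·ΔT/(1 − ν) for an alumina clad on H13; this textbook formula, the assumed solidification temperature, and the handbook property values are stand-ins, not the paper's one-dimensional model.

```python
def clad_stress(E_clad, nu_clad, alpha_clad, alpha_sub, dT):
    """Generic thermal-mismatch estimate sigma ~ E*(a_clad - a_sub)*dT/(1 - nu);
    a textbook stand-in for the paper's one-dimensional model."""
    return E_clad * (alpha_clad - alpha_sub) * dT / (1.0 - nu_clad)

# Illustrative handbook values for an Al2O3 clad on H13 steel, cooling from an
# assumed solidification temperature of ~1700 K down to the preheat temperature:
E, nu, a_clad, a_sub = 370e9, 0.22, 8.0e-6, 11.5e-6
for preheat in (300.0, 1200.0):
    sigma = clad_stress(E, nu, a_clad, a_sub, preheat - 1700.0)
    print(f"preheat {preheat:.0f} K -> residual stress ~ {sigma / 1e6:.0f} MPa (tensile)")
```

Raising the preheat from 300 K to 1200 K shrinks ΔT and, in this crude estimate, cuts the tensile stress by nearly a factor of three, which is the qualitative trend the paper reports.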
Innovating Big Data Computing Geoprocessing for Analysis of Engineered-Natural Systems
NASA Astrophysics Data System (ADS)
Rose, K.; Baker, V.; Bauer, J. R.; Vasylkivska, V.
2016-12-01
Big data computing and analytical techniques offer opportunities to improve predictions about subsurface systems while quantifying and characterizing associated uncertainties from these analyses. Spatial analysis, big data and otherwise, of subsurface natural and engineered systems is based on variable-resolution, discontinuous, and often point-driven data used to represent continuous phenomena. We will present examples from two spatio-temporal methods that have been adapted for use with big datasets and big data geo-processing capabilities. The first approach uses regional earthquake data to evaluate spatio-temporal trends associated with natural and induced seismicity. The second algorithm, the Variable Grid Method (VGM), is a flexible approach that presents spatial trends and patterns, such as those resulting from interpolation methods, while simultaneously visualizing and quantifying uncertainty in the underlying spatial datasets. In this presentation we will show how we are utilizing Hadoop to store and perform spatial analyses to efficiently consume and utilize large geospatial data in these custom analytical algorithms through the development of custom Spark and MapReduce applications that incorporate ESRI Hadoop libraries. The team will present custom 'Big Data' geospatial applications that run on the Hadoop cluster and integrate ESRI ArcMap with the team's probabilistic VGM approach. The VGM-Hadoop tool has been specially built as a multi-step MapReduce application running on the Hadoop cluster for the purpose of data reduction. This reduction is accomplished by generating multi-resolution, non-overlapping, attributed topology that is then further processed using ESRI's geostatistical analyst to convey a probabilistic model of a chosen study region. Finally, we will share our approach for the implementation of data reduction and topology generation via custom multi-step Hadoop applications, performance benchmarking comparisons, and Hadoop-centric opportunities for greater parallelization of geospatial operations.
NASA Astrophysics Data System (ADS)
Chebotarev, Victor; Koroleva, Alla; Pirozhnikova, Anastasia
2017-10-01
Using a recuperator in heat-producing plants to utilize the heat of natural gas combustion products saves gas fuel and also benefits the environment. Reducing the volume of natural gas combusted through heat utilization not only reduces the harmful agents discharged into the atmosphere with the combustion products, but also increases energy saving in the heating processes of heat-producing plants through air preheating in the recuperator. A grapho-analytical method for determining energy savings and the reduction of combustion product discharges into the atmosphere is presented in the article. A multifunctional diagram is developed that simultaneously determines the savings from the reduced volume of natural gas combusted and from the reduction of harmful agents discharged into the atmosphere. A calculation of natural gas savings is carried out for a heat-producing plant of a given capacity.
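The underlying energy balance can be sketched as follows: heat returned to the furnace with preheated combustion air directly offsets fuel input. The constants below (air mass per cubic metre of gas, air heat capacity, lower heating value) are typical round figures, not values from the article.

```python
def gas_saving_fraction(t_air_in, t_air_out, m_air_per_m3_gas=12.5,
                        cp_air=1.3e3, q_gas_lhv=35.8e6):
    """Fraction of natural gas saved by recuperative air preheating:
    heat recovered per m3 of gas divided by the gas lower heating value.
    Constants are typical round figures (assumptions)."""
    q_recovered = m_air_per_m3_gas * cp_air * (t_air_out - t_air_in)  # J per m3 of gas
    return q_recovered / q_gas_lhv

# Preheating combustion air from 20 to 250 degC recovers roughly a tenth of the fuel input:
print(f"gas saving ~ {gas_saving_fraction(20.0, 250.0):.1%}")
```

Because the avoided emissions scale with the avoided gas volume, the same fraction applies to the reduction of combustion products discharged to the atmosphere, which is what the multifunctional diagram reads off graphically.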
NASA Astrophysics Data System (ADS)
Shin, Kyung-Hun; Park, Hyung-Il; Kim, Kwan-Ho; Jang, Seok-Myeong; Choi, Jang-Young
2017-05-01
The shape of the magnet is essential to the performance of a slotless permanent magnet linear synchronous machine (PMLSM) because it directly determines the desirable machine characteristics. This paper presents a reduction in the thrust ripple of a PMLSM through the use of arc-shaped magnets, based on electromagnetic field theory. The magnetic field solutions were obtained by considering the end effect, using a magnetic vector potential in a two-dimensional Cartesian coordinate system. The analytical solution of each subdomain (PM, air-gap, coil, and end region) is derived, and the field solution is obtained by applying the boundary and interface conditions between the subdomains. In particular, an analytical method was derived for the instantaneous thrust and thrust ripple reduction of a PMLSM with arc-shaped magnets. In order to demonstrate the validity of the analytical results, the back electromotive force was compared with the results of a finite element analysis and experiments on the manufactured prototype model. The optimal point for thrust ripple minimization is suggested.
Automation, consolidation, and integration in autoimmune diagnostics.
Tozzoli, Renato; D'Aurizio, Federica; Villalta, Danilo; Bizzaro, Nicola
2015-08-01
Over the past two decades, we have witnessed an extraordinary change in autoimmune diagnostics, characterized by the progressive evolution of analytical technologies, the availability of new tests, and the explosive growth of molecular biology and proteomics. Aside from these huge improvements, organizational changes have also occurred which brought about a more modern vision of the autoimmune laboratory. The introduction of automation (for harmonization of testing, reduction of human error, reduction of handling steps, increase of productivity, decrease of turnaround time, improvement of safety), consolidation (combining different analytical technologies or strategies on one instrument or on one group of connected instruments) and integration (linking analytical instruments or group of instruments with pre- and post-analytical devices) opened a new era in immunodiagnostics. In this article, we review the most important changes that have occurred in autoimmune diagnostics and present some models related to the introduction of automation in the autoimmunology laboratory, such as automated indirect immunofluorescence and changes in the two-step strategy for detection of autoantibodies; automated monoplex immunoassays and reduction of turnaround time; and automated multiplex immunoassays for autoantibody profiling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barber, F.H.; Borek, T.T.; Christopher, J.Z.
1997-12-01
Analytical and Process Chemistry (A&PC) support is essential to the high-level waste vitrification campaign at the West Valley Demonstration Project (WVDP). A&PC characterizes the waste, providing information necessary to formulate the recipe for the target radioactive glass product. High-level waste (HLW) samples are prepared and analyzed in the analytical cells (ACs) and Sample Storage Cell (SSC) on the third floor of the main plant. The high levels of radioactivity in the samples require handling them in the shielded cells with remote manipulators. The analytical hot cells and third floor laboratories were refurbished to ensure optimal uninterrupted operation during the vitrification campaign. New and modified instrumentation, tools, sample preparation and analysis techniques, and equipment and training were required for A&PC to support vitrification. Analytical Cell Mockup Units (ACMUs) were designed to facilitate method development, scientist and technician training, and planning for analytical process flow. The ACMUs were fabricated and installed to simulate the analytical cell environment and dimensions. New techniques, equipment, and tools could be evaluated in the ACMUs without the consequences of generating or handling radioactive waste. Tools were fabricated, handling and disposal of wastes was addressed, and spatial arrangements for equipment were refined. As a result of the work at the ACMUs, the remote preparation and analysis methods and the equipment and tools were ready for installation into the ACs and SSC in July 1995. Before use in the hot cells, all remote methods had been validated and four to eight technicians were trained on each. Fine tuning of the procedures has been ongoing at the ACs based on input from A&PC technicians. Working in the ACs presents greater challenges than development at the ACMUs did. The ACMU work and further refinements in the ACs have resulted in a reduction in analysis turnaround time (TAT).
Silva, Raquel V S; Tessarolo, Nathalia S; Pereira, Vinícius B; Ximenes, Vitor L; Mendes, Fábio L; de Almeida, Marlon B B; Azevedo, Débora A
2017-03-01
The elucidation of bio-oil composition is important to evaluate the processes of biomass conversion and its upgrading, and to suggest the proper use for each sample. Comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry (GC×GC-TOFMS) is a widely applied analytical approach for bio-oil investigation due to the higher separation and resolution capacity of this technique. This work addresses analytical performance for the comprehensive characterization of real bio-oil samples via GC×GC-TOFMS. The approach was applied to the individual quantification of compounds in real thermal (PWT), catalytic process (CPO), and hydrodeoxygenation process (HDO) bio-oils. Quantification was performed reliably using the analytical curves of oxygenated and hydrocarbon standards as well as deuterated internal standards. The limit of quantification was set at 1 ng µL⁻¹ for major standards, except for hexanoic acid, which was set at 5 ng µL⁻¹. The GC×GC-TOFMS method provided good precision (<10%) and excellent accuracy (recovery range of 70-130%) for the quantification of individual hydrocarbons and oxygenated compounds in real bio-oil samples. Sugars, furans, and alcohols appear as the major constituents of the PWT, CPO, and HDO samples, respectively. In order to obtain bio-oils of better quality, the catalytic pyrolysis process may be a better option than hydrogenation due to the effective reduction of oxygenated compound concentrations and the lower cost of the process, since hydrogen is not required to promote deoxygenation in catalytic pyrolysis. Copyright © 2016 Elsevier B.V. All rights reserved.
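A minimal sketch of the internal-standard arithmetic that underlies quantification with deuterated standards of the kind described above; the relative response factor and all numbers are hypothetical, and the paper's actual calibration used analytical curves of authentic standards.

```python
def conc_from_istd(area_analyte, area_istd, conc_istd_ng_uL, rrf=1.0):
    """Internal-standard quantification: analyte concentration from the
    peak-area ratio to a deuterated standard of known concentration,
    corrected by the relative response factor (rrf) from calibration."""
    return (area_analyte / area_istd) * conc_istd_ng_uL / rrf

# e.g. a furan peak at half the area of a deuterated standard spiked at 10 ng/uL:
print(conc_from_istd(area_analyte=5.0e5, area_istd=1.0e6,
                     conc_istd_ng_uL=10.0, rrf=0.8))   # -> 6.25 ng/uL
```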
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
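A sketch of the propagation-of-errors idea: simulate panels of subject REAT attenuations, compute a simplified single-number rating of the form mean − 2·SD, and compare its Monte Carlo spread with the analytic estimate. The real NRR is computed per octave band against a pink-noise spectrum, so the rating formula here is a deliberate simplification, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
mu_att, sd_att, n_subjects, n_trials = 30.0, 5.0, 10, 20_000

ratings = []
for _ in range(n_trials):
    att = rng.normal(mu_att, sd_att, n_subjects)   # one panel of REAT attenuations, dB
    ratings.append(att.mean() - 2.0 * att.std(ddof=1))

# Propagation of errors: Var(mean - 2s) ~ sigma^2/n + 4 * sigma^2 / (2(n-1))
analytic = sd_att * np.sqrt(1.0 / n_subjects + 2.0 / (n_subjects - 1))
print(f"Monte Carlo rating SD: {np.std(ratings):.2f} dB; analytic: {analytic:.2f} dB")
```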
Reduction of Free Edge Peeling Stress of Laminated Composites Using Active Piezoelectric Layers
Huang, Bin; Kim, Heung Soo
2014-01-01
An analytical approach is proposed for the reduction of free edge peeling stresses of laminated composites using active piezoelectric layers. The approach is the extended Kantorovich method, which is iterative. Multiple terms of the trial function are employed, and the governing equations are derived from the principle of complementary virtual work. The solutions are obtained by solving a generalized eigenvalue problem. By this approach, the stresses automatically satisfy not only the traction-free boundary conditions, but also the free edge boundary conditions. Through the iteration process, the free edge stresses converge very quickly. It is found that the peeling stresses generated by mechanical loadings are significantly reduced by applying a proper electric field to the piezoelectric actuators. PMID:25025088
Designing Flavoprotein-GFP Fusion Probes for Analyte-Specific Ratiometric Fluorescence Imaging.
Hudson, Devin A; Caplan, Jeffrey L; Thorpe, Colin
2018-02-20
The development of genetically encoded fluorescent probes for analyte-specific imaging has revolutionized our understanding of intracellular processes. Current classes of intracellular probes depend on the selection of binding domains that either undergo conformational changes on analyte binding or can be linked to thiol redox chemistry. Here we have designed novel probes by fusing a flavoenzyme, whose fluorescence is quenched on reduction by the analyte of interest, with a GFP domain to allow for rapid and specific ratiometric sensing. Two flavoproteins, Escherichia coli thioredoxin reductase and Saccharomyces cerevisiae lipoamide dehydrogenase, were successfully developed into thioredoxin and NAD+/NADH specific probes, respectively, and their performance was evaluated in vitro and in vivo. A flow cell format, which allowed dynamic measurements, was utilized in both bacterial and mammalian systems. In E. coli the first reported intracellular steady-state of the cytoplasmic thioredoxin pool was measured. In HEK293T mammalian cells, the steady-state cytosolic ratio of NAD+/NADH induced by glucose was determined. These genetically encoded fluorescent constructs represent a modular approach to intracellular probe design that should extend the range of metabolites that can be quantitated in live cells.
Mass spectrometric methods for monitoring redox processes in electrochemical cells.
Oberacher, Herbert; Pitterl, Florian; Erb, Robert; Plattner, Sabine
2015-01-01
Electrochemistry (EC) is a mature scientific discipline aimed at studying the movement of electrons in an oxidation-reduction reaction. EC covers techniques that use a measurement of potential, charge, or current to determine the concentration or the chemical reactivity of analytes. The electrical signal is directly converted into chemical information. For in-depth characterization of complex electrochemical reactions involving the formation of diverse intermediates, products and byproducts, EC is usually combined with other analytical techniques, and particularly the hyphenation of EC with mass spectrometry (MS) has found broad applicability. The analysis of gases and volatile intermediates and products formed at electrode surfaces is enabled by differential electrochemical mass spectrometry (DEMS). In DEMS an electrochemical cell is sampled with a membrane interface for electron ionization (EI)-MS. The chemical space amenable to EC/MS (i.e., bioorganic molecules including proteins, peptides, nucleic acids, and drugs) was significantly increased by employing electrospray ionization (ESI)-MS. In the simplest setup, the EC of the ESI process is used to analytical advantage. A limitation of this approach is, however, its inability to precisely control the electrochemical potential at the emitter electrode. Thus, particularly for studying mechanistic aspects of electrochemical processes, the hyphenation of discrete electrochemical cells with ESI-MS was found to be more appropriate. The analytical power of EC/ESI-MS can further be increased by integrating liquid chromatography (LC) as an additional dimension of separation. Chromatographic separation was found to be particularly useful to reduce the complexity of the sample submitted either to the EC cell or to ESI-MS. Thus, both EC/LC/ESI-MS and LC/EC/ESI-MS are common. © 2013 The Authors. Mass Spectrometry Reviews published by Wiley Periodicals, Inc.
Single and Multiple Microphone Noise Reduction Strategies in Cochlear Implants
Azimi, Behnam; Hu, Yi; Friedland, David R.
2012-01-01
To restore hearing sensation, cochlear implants deliver electrical pulses to the auditory nerve by relying on sophisticated signal processing algorithms that convert acoustic inputs to electrical stimuli. Although individuals fitted with cochlear implants perform well in quiet, their speech intelligibility in the presence of background noise is more susceptible to degradation than that of normal-hearing listeners. Traditionally, to increase performance in noise, single-microphone noise reduction strategies have been used. More recently, a number of approaches have shown that speech intelligibility in noise can be improved further by making use of two or more microphones instead. Processing strategies based on multiple microphones can better exploit the spatial diversity of speech and noise because such strategies rely mostly on spatial information about the relative position of competing sound sources. In this article, we identify and elucidate the most significant theoretical aspects that underpin single- and multi-microphone noise reduction strategies for cochlear implants. More analytically, we focus on strategies of both types that have been shown to be promising for use in current-generation implant devices. We present data from past and more recent studies, and furthermore we outline the direction that future research in the area of noise reduction for cochlear implants could follow. PMID:22923425
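A minimal sketch of the simplest multi-microphone strategy of the kind discussed above, a time-domain delay-and-sum beamformer; the two-microphone geometry, integer-sample delays, and synthetic signals are illustrative simplifications (practical systems use fractional-delay filters and adaptive weights).

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions, look_dir, fs, c=343.0):
    """Delay-and-sum beamformer: delay each microphone so that a source in
    look_dir adds coherently, then average (integer-sample delays for brevity)."""
    delays = mic_positions @ look_dir / c          # relative arrival times, s
    delays -= delays.min()
    out = np.zeros(mic_signals.shape[1])
    for sig, d in zip(mic_signals, delays):
        shift = int(round(d * fs))
        out[shift:] += sig[:out.size - shift]      # align, then sum
    return out / len(mic_signals)

fs = 16_000
rng = np.random.default_rng(6)
s = rng.standard_normal(1_600)
mics = np.array([[0.0, 0.0], [0.05, 0.0]])         # two mics 5 cm apart
sigs = np.vstack([np.roll(s, 2), s])               # mic nearer the source hears s first
y = delay_and_sum(sigs, mics, np.array([1.0, 0.0]), fs)
```

Uncorrelated noise at the two microphones is attenuated by the averaging while the steered source adds coherently, which is the spatial-diversity gain the article describes.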
Chein, Jason M; Schneider, Walter
2005-12-01
Functional magnetic resonance imaging and a meta-analysis of prior neuroimaging studies were used to characterize cortical changes resulting from extensive practice and to evaluate a dual-processing account of the neural mechanisms underlying human learning. Three core predictions of the dual processing theory are evaluated: 1) that practice elicits generalized reductions in regional activity by reducing the load on the cognitive control mechanisms that scaffold early learning; 2) that these control mechanisms are domain-general; and 3) that no separate processing pathway emerges as skill develops. To evaluate these predictions, a meta-analysis of prior neuroimaging studies and a within-subjects fMRI experiment contrasting unpracticed to practiced performance in a paired-associate task were conducted. The principal effect of practice was found to be a reduction in the extent and magnitude of activity in a cortical network spanning bilateral dorsal prefrontal, left ventral prefrontal, medial frontal (anterior cingulate), left insular, bilateral parietal, and occipito-temporal (fusiform) areas. These activity reductions are shown to occur in common regions across prior neuroimaging studies and for both verbal and nonverbal paired-associate learning in the present fMRI experiment. The implicated network of brain regions is interpreted as a domain-general system engaged specifically to support novice, but not practiced, performance.
Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris
2017-12-15
Storage is important for flood mitigation and non-point source pollution control. However, seeking a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the Storm Water Management Model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization is fast when based on the preliminary scheme. The optimized scheme is better than the preliminary scheme at reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
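The AHP ranking step above reduces to an eigenvector computation; a minimal sketch for the paper's two indicators, with a hypothetical pairwise judgment:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights as the normalized principal eigenvector of an AHP
    pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Hypothetical judgment: flood depth three times as important as flood duration.
print(ahp_weights([[1.0, 3.0], [1.0 / 3.0, 1.0]]))   # -> [0.75, 0.25]
```

Each flooding node is then scored as the weighted sum of its normalized depth and duration, and the top-ranked nodes receive tanks in the preliminary scheme that the pattern-search stage refines.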
A pipette-based calibration system for fast-scan cyclic voltammetry with fast response times.
Ramsson, Eric S
2016-01-01
Fast-scan cyclic voltammetry (FSCV) is an electrochemical technique that utilizes the oxidation and/or reduction of an analyte of interest to infer rapid changes in concentrations. In order to calibrate the resulting oxidative or reductive current, known concentrations of an analyte must be introduced under controlled settings. Here, I describe a simple and cost-effective method, using a Petri dish and pipettes, for the calibration of carbon fiber microelectrodes (CFMs) using FSCV.
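In practice, such a calibration reduces to regressing the peak oxidative current on the known standard concentrations and inverting the fit for unknowns. A minimal sketch with entirely hypothetical numbers (not data from this work):

```python
import numpy as np

# Hypothetical calibration data: dopamine standards (uM) vs. peak
# oxidative current (nA) at a carbon fiber microelectrode.
conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
current = np.array([2.1, 4.3, 8.8, 17.5, 35.2])

slope, intercept = np.polyfit(conc, current, 1)   # linear calibration
print(f"sensitivity: {slope:.2f} nA/uM, offset: {intercept:.2f} nA")

# Convert an unknown peak current into an estimated concentration:
unknown_current = 12.0
print(f"estimated: {(unknown_current - intercept) / slope:.2f} uM")
```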
Using PAT to accelerate the transition to continuous API manufacturing.
Gouveia, Francisca F; Rahbek, Jesper P; Mortensen, Asmus R; Pedersen, Mette T; Felizardo, Pedro M; Bro, Rasmus; Mealy, Michael J
2017-01-01
Significant improvements can be realized by converting conventional batch processes into continuous ones. The main drivers include reduction of cost and waste, increased safety, and simpler scale-up and tech transfer activities. Re-designing the process layout offers the opportunity to incorporate a set of process analytical technologies (PAT) embraced in the Quality-by-Design (QbD) framework. These tools are used for process state estimation, providing enhanced understanding of the underlying variability in the process impacting quality and yield. This work describes a road map for identifying the best technology to speed up the development of continuous processes while providing the basis for developing analytical methods for monitoring and controlling the continuous full-scale reaction. The suitability of in-line Raman, FT-infrared (FT-IR), and near-infrared (NIR) spectroscopy for real-time process monitoring was investigated in the production of 1-bromo-2-iodobenzene. The synthesis consists of three consecutive reaction steps, including the formation of an unstable diazonium salt intermediate, the control of which is critical to secure high yield and avoid formation of by-products. All spectroscopic methods were able to capture critical information related to the accumulation of the intermediate with very similar accuracy. NIR spectroscopy proved to be satisfactory in terms of performance, ease of installation, full-scale transferability, and stability under very adverse process conditions. As such, in-line NIR was selected to monitor the continuous full-scale production. The quantitative method was developed against theoretical concentration values of the intermediate, since representative sampling for off-line reference analysis cannot be achieved. The rapid and reliable analytical system sped up the design of the continuous process and provided a better understanding of the manufacturing requirements needed to ensure optimal yield and to avoid unreacted raw materials and by-products in the continuous reactor effluent.
Carbon Dioxide Reduction Systems
NASA Technical Reports Server (NTRS)
Burghardt, Stanley I.; Chandler, Horace W.; Taylor, T. I.; Walden, George
1961-01-01
The Methoxy system for regenerating oxygen from carbon dioxide was studied. Experiments indicate that the reaction between carbon dioxide and hydrogen can be carried out with ease, in an efficient manner and with excellent heat conservation. A small reactor capable of handling the CO2 expired by three men has been built and operated. The decomposition of methane by thermal, arc and catalytic processes was studied. Both the arc and catalytic processes gave encouraging results, with over 90 percent of the methane being decomposed to carbon and hydrogen in some of the catalytic processes. Control of the carbon deposition in both the catalytic and arc processes is of great importance to prevent catalyst deactivation and short-circuiting of electrical equipment. Sensitive analytical techniques have been developed for all of the components present in the reactor effluent streams.
Lai, Samuel Kin-Man; Cheng, Yu-Hong; Tang, Ho-Wai; Ng, Kwan-Ming
2017-08-09
Systematically controlling heat transfer in the surface-assisted laser desorption/ionization (SALDI) process, and thus enhancing the analytical performance of SALDI-MS, remains a challenging task. In the current study, by tuning the metal contents of Ag-Au alloy nanoparticle substrates (AgNPs, Ag55Au45NPs, Ag15Au85NPs and AuNPs, ∅: ∼2.0 nm), it was found that both the SALDI ion-desorption efficiency and the heat transfer can be controlled over a wide range of laser fluence (21.3 mJ cm-2 to 125.9 mJ cm-2). It was discovered that ion detection sensitivity can be enhanced at any laser fluence by tuning up the Ag content of the alloy nanoparticles, whereas the extent of ion fragmentation can be reduced by tuning up the Au content. The enhancement effect of the Ag content on ion desorption was found to be attributable to the increase in laser absorption efficiency (at 355 nm) with Ag content. Tuning the laser absorption efficiency by changing the metal composition was also effective in controlling the heat transfer from the NPs to the analytes. The laser-induced heating of Ag-rich alloy NPs could be balanced or even overridden by increasing the Au content of the NPs, resulting in reduced fragmentation of the analytes. By correlating experimental measurements with molecular dynamics simulations, the effect of metal composition on the dynamics of the ion desorption process was also elucidated. Upon increasing the Ag content, it was also found that the phase transition temperatures of the NPs, such as the melting, vaporization and phase explosion temperatures, could be reduced. This further enhanced the desorption of analyte ions via phase-transition-driven desorption processes. The significant cooling effect on the analyte ions observed at high laser fluence was also determined to originate from the phase explosion of the NPs. This study revealed that the development of alloy nanoparticles as SALDI substrates can constitute an effective means for the systematic control of ion-desorption efficiency and the extent of heat transfer, which could potentially enhance the analytical performance of SALDI-MS.
Kırış, Sevilay; Velioglu, Yakup Sedat
2016-01-01
The effects of different wash times (2 and 5 min) with tap and ozonated water on the removal of nine pesticides from olives, and the transfer ratios of these pesticides during olive oil production, were determined. The reliability of the analytical methods was also tested. The applied methods of analysis were found to be suitable based on linearity, trueness, repeatability, selectivity and limit of quantification for all the pesticides tested. All tap and ozonated water wash cycles removed a significant quantity of the pesticides from the olives, with a few exceptions. Generally, extending the wash time increased the pesticide reduction with ozonated water, but did not make a significant difference with tap water. During olive oil processing, depending on the processing technique and the physicochemical properties of the pesticides, eight of the nine pesticides were concentrated into the olive oil (processing factor > 1), with almost no significant difference between treatments. Imidacloprid did not pass into olive oil. An ozonated water wash for 5 min reduced chlorpyrifos, β-cyfluthrin, α-cypermethrin and imidacloprid contents in olives by 38%, 50%, 55% and 61%, respectively.
Maximum drag reduction asymptotes and the cross-over to the Newtonian plug
NASA Astrophysics Data System (ADS)
Benzi, R.; de Angelis, E.; L'Vov, V. S.; Procaccia, I.; Tiberkevich, V.
2006-03-01
We employ the full FENE-P model of the hydrodynamics of a dilute polymer solution to derive a theoretical approach to drag reduction in wall-bounded turbulence. We recapture the results of a recent simplified theory which derived the universal maximum drag reduction (MDR) asymptote, and complement that theory with a discussion of the cross-over from the MDR to the Newtonian plug when the drag reduction saturates. The FENE-P model gives rise to a rather complex theory due to the interaction of the velocity field with the polymeric conformation tensor, making analytic estimates quite taxing. To overcome this we develop the theory in a computer-assisted manner, checking at each point the analytic estimates by direct numerical simulations (DNS) of viscoelastic turbulence in a channel.
Anticipated and zero-lag synchronization in motifs of delay-coupled systems
NASA Astrophysics Data System (ADS)
Mirasso, Claudio R.; Carelli, Pedro V.; Pereira, Tiago; Matias, Fernanda S.; Copelli, Mauro
2017-11-01
Anticipated and zero-lag synchronization have been observed in different scientific fields. In the brain, they might play a fundamental role in information processing, temporal coding and spatial attention. Recent numerical work on anticipated and zero-lag synchronization studied the role of delays. However, an analytical understanding of the conditions for these phenomena remains elusive. In this paper, we study both phenomena in systems with small delays. By performing a phase reduction and studying phase locked solutions, we uncover the functional relation between the delay, excitation and inhibition for the onset of anticipated synchronization in a sender-receiver-interneuron motif. In the case of zero-lag synchronization in a chain motif, we determine the stability conditions. These analytical solutions provide an excellent prediction of the phase-locked regimes of Hodgkin-Huxley models and Roessler oscillators.
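As a numerical illustration of delayed coupling (a minimal sender-receiver pair, not the paper's sender-receiver-interneuron motif or its Hodgkin-Huxley models), a receiver phase oscillator driven by a delayed copy of the sender locks to the delayed signal, so the steady phase difference approaches omega * tau. All parameters here are illustrative.

```python
import numpy as np

omega, eps, tau = 2 * np.pi, 5.0, 0.05   # frequency, coupling, delay
dt, steps = 1e-3, 50_000
lag = int(tau / dt)

th_s = np.zeros(steps)                    # sender phase
th_r = np.zeros(steps)                    # receiver phase
for i in range(1, steps):
    th_s[i] = th_s[i-1] + dt * omega
    delayed = th_s[max(i - 1 - lag, 0)]   # sender phase at t - tau
    th_r[i] = th_r[i-1] + dt * (omega + eps * np.sin(delayed - th_r[i-1]))

# Steady-state phase difference; should approach omega * tau:
print((th_s[-1] - th_r[-1]) % (2 * np.pi), omega * tau)
```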
Mühlbacher, Axel C; Bethge, Susanne; Kaczynski, Anika
2016-01-01
Cardiovascular disease is one of the most common causes of death worldwide, with many individuals having experienced acute coronary syndrome (ACS). How patients with a history of ACS value aspects of their medical treatment has rarely been evaluated. The aim of this study was to determine patient priorities for long-term drug therapy after experiencing ACS. To identify patient-relevant treatment characteristics, a systematic literature review and qualitative patient interviews were conducted. A questionnaire was developed to elicit patients' priorities for different characteristics of ACS treatment using the Analytic Hierarchy Process (AHP). To evaluate the patient-relevant outcomes, the eigenvector method was applied. Six hundred twenty-three patients participated in the computer-assisted personal interviews and were included in the final analysis. Patients showed a clear priority for the attribute "reduction of mortality risk" (weight: 0.402). The second most preferred attribute was "prevention of a new myocardial infarction" (weight: 0.272), followed by "side effect: dyspnea" (weight: 0.165) and "side effect: bleeding" (weight: 0.117). The "frequency of intake" was the least important attribute (weight: 0.044). In conclusion, this study shows that patients strongly value a reduction of mortality risk in post-ACS treatment. Formal consideration of patient preferences and priorities can help to inform a patient-centered approach, clinical practice, the development of future effective therapies, and health policy decisions that best represent the needs and goals of the patient.
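The eigenvector method used here derives the attribute weights as the normalized principal eigenvector of the pairwise comparison matrix. A minimal sketch with a hypothetical 3x3 matrix (not the study's elicited judgments):

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons: a_ij is the judged
# importance of criterion i relative to criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                    # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # Saaty consistency index
print(weights, ci)
```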
Noise Reduction Techniques and Scaling Effects towards Photon Counting CMOS Image Sensors
Boukhayma, Assim; Peizerat, Arnaud; Enz, Christian
2016-01-01
This paper presents an overview of the read noise in CMOS image sensors (CISs) based on four-transistor (4T) pixels, column-level amplification and correlated multiple sampling. Starting from the input-referred noise analytical formula, process-level optimizations, device choices and circuit techniques at the pixel and column level of the readout chain are derived and discussed. The noise reduction techniques that can be implemented at the column and pixel level are verified by transient noise simulations, measurements and results from recently published low-noise CISs. We show how a recently reported process refinement, leading to the reduction of the sense node capacitance, can be combined with an optimal in-pixel source follower design to reach a sub-0.3 e- rms read noise at room temperature. This paper also discusses the impact of technology scaling on the CIS read noise. It shows how designers can take advantage of scaling and how the Metal-Oxide-Semiconductor (MOS) transistor gate leakage tunneling current appears as a challenging limitation. For this purpose, both simulation results of the gate leakage current and 1/f noise data reported from different foundries and technology nodes are used.
Role of epistasis on the fixation probability of a non-mutator in an adapted asexual population.
James, Ananthu
2016-10-21
The mutation rate of a well-adapted population is prone to reduction so as to carry a lower mutational load. We aim to understand the role of epistatic interactions between the fitness-affecting mutations in this process. Using a multitype branching process, the fixation probability of a single non-mutator emerging in a large asexual mutator population is analytically calculated here. The mutator population undergoes deleterious mutations at a constant rate, but one much higher than that of the non-mutator. We find that antagonistic epistasis lowers the chances of mutation rate reduction, while synergistic epistasis enhances it. Below a critical value of epistasis, the fixation probability behaves non-monotonically with variation in the mutation rate of the background population. Moreover, the variation of this critical value of the epistasis parameter with the strength of the mutator is discussed in the appendix. For synergistic epistasis, when selection is varied, the fixation probability decreases overall, with damped oscillations. Copyright © 2016 Elsevier Ltd. All rights reserved.
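For intuition, the fixation probability of a rare lineage is often approximated by the survival probability of a single-type branching process; with Poisson-distributed offspring of mean m = 1 + s, it solves 1 - pi = exp(-m * pi). The sketch below iterates this fixed point (the paper's multitype process with epistasis is considerably more involved):

```python
import math

def fixation_probability(m, tol=1e-12):
    """Survival probability of a branching process with Poisson(m)
    offspring: pi solves 1 - pi = exp(-m * pi); zero if m <= 1."""
    if m <= 1.0:
        return 0.0
    pi = 1.0 - 1.0 / m                   # starting guess in (0, 1)
    while True:
        pi_new = 1.0 - math.exp(-m * pi)
        if abs(pi_new - pi) < tol:
            return pi_new
        pi = pi_new

# Hypothetical selective advantages s of the non-mutator lineage:
for s in (0.01, 0.05, 0.1):
    print(s, fixation_probability(1.0 + s))   # ~2s for small s
```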
Strong diffusion formulation of Markov chain ensembles and its optimal weaker reductions
NASA Astrophysics Data System (ADS)
Güler, Marifi
2017-10-01
Two self-contained diffusion formulations, in the form of coupled stochastic differential equations, are developed for the temporal evolution of state densities over an ensemble of Markov chains evolving independently under a common transition rate matrix. Our first formulation derives from Kurtz's strong approximation theorem of density-dependent Markov jump processes [Stoch. Process. Their Appl. 6, 223 (1978), 10.1016/0304-4149(78)90020-0] and, therefore, strongly converges with an error bound of the order of ln N/N for ensemble size N. The second formulation eliminates some fluctuation variables, and correspondingly some noise terms, within the governing equations of the strong formulation, with the objective of achieving a simpler analytic formulation and a faster computation algorithm when the transition rates are constant or slowly varying. There, the reduction of the structural complexity is optimal in the sense that the elimination of any given set of variables takes place with the lowest attainable increase in the error bound. The resultant formulations are supported by numerical simulations.
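As a concrete illustration of such a diffusion formulation, consider an ensemble of N independent two-state chains with rates a (1 to 2) and b (2 to 1): the fraction p of chains in state 1 obeys an SDE with mean-field drift b(1-p) - a*p and a noise amplitude of order 1/sqrt(N). The Euler-Maruyama sketch below uses illustrative rates and is not the paper's general formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

a, b, N = 2.0, 1.0, 10_000      # transition rates and ensemble size
dt, steps = 1e-3, 5_000
p = 0.9                          # initial fraction of chains in state 1

for _ in range(steps):
    drift = b * (1.0 - p) - a * p                    # mean-field flow
    diff = np.sqrt((a * p + b * (1.0 - p)) / N)      # finite-N fluctuation
    p += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    p = min(max(p, 0.0), 1.0)                        # keep density in [0, 1]

print(p, b / (a + b))    # simulated density vs. exact equilibrium b/(a+b)
```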
J functions for the process ud→WA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardin, D. Yu., E-mail: bardin@nu.jinr.ru; Kalinovskaya, L. V., E-mail: kalinov@mail.cern.ch; Uglov, E. D., E-mail: e.uglov@gmail.com
In this paper we present a universal approach to the analytic calculation of a certain class of J functions for six box topologies in the process ud → WA. These J functions arise in the reduction of the infrared-divergent box diagrams. The standard Passarino–Veltman reduction of the four-point box diagram with an internal photon line connecting two external lines on the mass shell leads to infrared-divergent and mass-singular D0 functions. In the system SANC a systematic procedure is adopted to separate both types of singularities into the simplest objects, namely C0 functions. The J functions, in turn, are represented as certain linear combinations of the standard D0 and C0 functions. The subtracted J functions are free of both types of singularities and are expressed as explicit and compact linear combinations of dilogarithm functions. We present extensive comparisons of numerical results of SANC with those obtained with the aid of the LoopTools package.
FMEA: a model for reducing medical errors.
Chiozza, Maria Laura; Ponzetti, Clemente
2009-06-01
Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, first introduced in the aerospace industry in the 1960s. Early applications in the health care industry, dating back to the 1990s, included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO) licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review the data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).
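The RPN referred to here is the product of the severity, occurrence, and detectability scores assigned to each failure mode; re-scoring after a corrective action quantifies the achieved risk reduction. A minimal sketch with hypothetical failure modes and 1-10 ratings (not the studies' data):

```python
# Hypothetical failure modes for a blood cross-matching workflow.
failure_modes = [
    # (description, severity, occurrence, detectability)
    ("sample mislabeling",      9, 4, 3),
    ("wrong reagent lot",       7, 2, 2),
    ("result transcription",    6, 5, 4),
]

for name, s, o, d in failure_modes:
    rpn = s * o * d                      # risk priority number
    print(f"{name:25s} RPN = {rpn}")
```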
ERIC Educational Resources Information Center
Bascia, Nina; Faubert, Brenton
2012-01-01
This article reviews the literature base on class size reduction and proposes a new analytic framework that we believe provides practically useful explanations of how primary class size reduction works. It presents descriptions of classroom practice and grounded explanations for how class size reduction affects educational core activities by…
An Analysis of Earth Science Data Analytics Use Cases
NASA Technical Reports Server (NTRS)
Shie, Chung-Lin; Kempler, Steve
2014-01-01
The increase in the number, volume, and sources of globally available Earth science data measurements and datasets has afforded Earth scientists and applications researchers unprecedented opportunities to study our Earth in ever more sophisticated ways. In fact, the NASA Earth Observing System Data Information System (EOSDIS) archives have doubled from 2007 to 2014, to 9.1 PB (Ramapriyan, 2009; and https://earthdata.nasa.gov/about/system-performance). In addition, other US agency and international programs, field experiments, ground stations, and citizen scientists provide a plethora of additional sources for studying Earth. Co-analyzing huge amounts of heterogeneous data to glean unobvious information is a daunting task. Earth science data analytics (ESDA) is the process of examining large amounts of data of a variety of types to uncover hidden patterns, unknown correlations and other useful information. It can include Data Preparation, Data Reduction, and Data Analysis. Through work associated with the Earth Science Information Partners (ESIP) Federation, a collection of Earth science data analytics use cases has been collected and analyzed for the purpose of extracting the types of Earth science data analytics employed, and the requirements for data analytics tools and techniques yet to be implemented, based on use case needs. The ESIP-generated use case template, ESDA use cases, use case types, and a preliminary use case analysis (a work in progress) will be presented.
Laser modification of graphene oxide layers
NASA Astrophysics Data System (ADS)
Malinský, Petr; Macková, Anna; Cutroneo, Mariapompea; Siegel, Jakub; Bohačová, Marie; Klímova, Kateřina; Švorčík, Václav; Sofer, Zdenĕk
2018-01-01
Linearly polarized laser irradiation with various energy densities was successfully used for the reduction of graphene oxide (GO). Ion beam analytical methods (RBS, ERDA) were used to follow the changes in elemental composition expected as a consequence of GO reduction. The chemical composition analysis was accompanied by a structural study showing the changed functionalities in the irradiated GO foils, using spectroscopic techniques including XPS, FTIR and Raman spectroscopy. AFM was employed to characterize the surface morphology, and the evolution of electrical properties was subsequently studied using a standard two-point measurement method. These analytical methods show reduction of the irradiated graphene oxide at the surface and a decrease of surface resistivity that becomes stronger with increasing laser beam energy density.
ODF Maxima Extraction in Spherical Harmonic Representation via Analytical Search Space Reduction
Aganj, Iman; Lenglet, Christophe; Sapiro, Guillermo
2015-01-01
By revealing complex fiber structure through the orientation distribution function (ODF), q-ball imaging has recently become a popular reconstruction technique in diffusion-weighted MRI. In this paper, we propose an analytical dimension reduction approach to ODF maxima extraction. We show that by expressing the ODF, or any antipodally symmetric spherical function, in the common fourth order real and symmetric spherical harmonic basis, the maxima of the two-dimensional ODF lie on an analytically derived one-dimensional space, from which we can detect the ODF maxima. This method reduces the computational complexity of the maxima detection, without compromising the accuracy. We demonstrate the performance of our technique on both artificial and human brain data. PMID:20879302
Lei, Zirong; Chen, Luqiong; Hu, Kan; Yang, Shengchun; Wen, Xiaodong
2018-06-05
Cold vapor generation (CVG) of cadmium was accomplished for the first time in non-aqueous media by using the solid reductant potassium borohydride (KBH4) as a derivatization reagent. A mixture of the surfactant Triton X-114 micelle and octanol was innovatively used as the non-aqueous medium for the CVG, and atomic fluorescence spectrometry (AFS) was used for the elemental determination. The analyte ions were first extracted into the non-aqueous medium from the bulk aqueous phase of the analyte/sample solution via a newly established ultrasound-assisted rapidly synergistic cloud point extraction (UARS-CPE) process, and then directly mixed with the solid reductant KBH4 to generate volatile elemental cadmium in a specially designed reactor, from which it was rapidly transported to a commercial atomic fluorescence spectrometer for detection. Under the optimal conditions, the limit of detection (LOD) for cadmium was 0.004 μg L-1. Compared to conventional hydride generation (HG)-AFS, the efficiency of non-aqueous phase CVG and the analytical performance of the developed system were considerably improved. Copyright © 2018 Elsevier B.V. All rights reserved.
Data analytics using canonical correlation analysis and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles
2017-07-01
A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively small number of combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte Carlo-based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
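A minimal sketch of the idea, assuming scikit-learn is available: score candidate nonlinear transforms of the input block by the first canonical correlation they achieve and keep the best over a Monte Carlo search. The data and the signed power-transform family are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                           # "processing" block
Y = np.c_[X[:, 0] ** 2 + 0.1 * rng.normal(size=200),    # hidden nonlinear link
          rng.normal(size=200)]                         # "property" block

def first_canonical_corr(Xb, Yb):
    u, v = CCA(n_components=1).fit_transform(Xb, Yb)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

best_corr, best_powers = first_canonical_corr(X, Y), None
for _ in range(200):                                    # Monte Carlo search
    powers = rng.choice([0.5, 1.0, 2.0, 3.0], size=X.shape[1])
    Xt = np.sign(X) * np.abs(X) ** powers               # candidate transform
    corr = first_canonical_corr(Xt, Y)
    if corr > best_corr:
        best_corr, best_powers = corr, powers

print(best_corr, best_powers)   # squaring X[:, 0] should win
```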
Schaefer, Cédric; Clicq, David; Lecomte, Clémence; Merschaert, Alain; Norrant, Edith; Fotiadu, Frédéric
2014-03-01
Pharmaceutical companies are progressively adopting and introducing the Process Analytical Technology (PAT) and Quality-by-Design (QbD) concepts promoted by the regulatory agencies, aiming to build quality directly into the product by combining thorough scientific understanding and quality risk management. An analytical method based on near infrared (NIR) spectroscopy was developed as a PAT tool to control on-line an API (active pharmaceutical ingredient) manufacturing crystallization step during which the API and residual solvent contents need to be precisely determined to reach the predefined seeding point. An original methodology based on the QbD principles was designed to conduct the development and validation of the NIR method and to ensure that it is fit for its intended use. On this basis, partial least squares (PLS) models were developed and optimized using chemometric methods. The method was fully validated according to the ICH Q2(R1) guideline and using the accuracy profile approach. The dosing ranges were evaluated as 9.0-12.0% w/w for the API and 0.18-1.50% w/w for the residual methanol. Because the variability of the sampling method and of the reference method is by nature included in the variability obtained for the NIR method during the validation phase, a real-time process monitoring exercise was performed to prove its fitness for purpose. The implementation of this in-process control (IPC) method on the industrial plant from the launch of the new API synthesis process will enable automatic control of the final crystallization step in order to ensure a predefined quality level of the API. In addition, several valuable benefits are expected, including reduction of the process time and suppression of a rather difficult sampling step and of tedious off-line analyses. © 2013 Published by Elsevier B.V.
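A minimal sketch of a PLS calibration of the kind described, using synthetic stand-in spectra rather than the study's NIR data; the cross-validated RMSECV printed at the end is a common figure of merit for such models.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)

# Synthetic stand-in: 60 samples x 200 wavelengths, with API content
# (% w/w) driving two broad absorption bands plus measurement noise.
api = rng.uniform(9.0, 12.0, size=60)
wl = np.linspace(0, 1, 200)
band = np.exp(-((wl - 0.4) / 0.05) ** 2) + 0.5 * np.exp(-((wl - 0.7) / 0.08) ** 2)
spectra = np.outer(api, band) + 0.05 * rng.normal(size=(60, 200))

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, spectra, api, cv=10).ravel()
rmsecv = np.sqrt(np.mean((pred - api) ** 2))
print(f"RMSECV: {rmsecv:.3f} % w/w")
```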
Effects of Drought Manipulation on Soil Nitrogen Cycling: A Meta-Analysis
NASA Astrophysics Data System (ADS)
Homyak, Peter M.; Allison, Steven D.; Huxman, Travis E.; Goulden, Michael L.; Treseder, Kathleen K.
2017-12-01
Many regions on Earth are expected to become drier with climate change, which may impact nitrogen (N) cycling rates and availability. We applied a meta-analytical approach to the results of field experiments that reduced precipitation and measured N supply (i.e., indices of N mineralization), soil microbial biomass, inorganic N pools (ammonium (NH4+) and nitrate (NO3-)), and nitrous oxide (N2O) emissions. We hypothesized that N supply and N2O emissions would be relatively insensitive to precipitation reduction and that reducing precipitation would increase extractable NH4+ and NO3- concentrations, because microbial processes continue whereas plant N uptake diminishes with drought. In support of this hypothesis, extractable NH4+ increased by 25% overall with precipitation reduction; NH4+ also increased significantly with increasing magnitude of precipitation reduction. In contrast, N supply and extractable NO3- did not change, and N2O emissions decreased with reduced precipitation. Across studies microbial biomass appeared unchanged, yet from the diversity of studies it was clear that proportionally smaller precipitation reductions increased microbial biomass, whereas larger proportional reductions in rainfall reduced microbial biomass; there was a positive intercept (P = 0.005) and a significant negative slope (P = 0.0002) for the regression of microbial biomass versus % precipitation reduction (LnR = -0.009 × (% precipitation reduction) + 0.4021). Our analyses imply that relative to other N variables, N supply is less sensitive to reduced precipitation, whereas the processes producing N2O decline. Drought intensity and duration, through sustained N supply, may control how much N becomes vulnerable to loss via hydrologic and gaseous pathways upon rewetting of dry soils.
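The effect size behind statements such as "NH4+ increased by 25% overall" is typically the log response ratio, LnR = ln(treatment mean / control mean), pooled across studies with inverse-variance weights. A fixed-effect sketch with hypothetical per-study values (not the meta-analysis dataset):

```python
import numpy as np

# Hypothetical per-study means of extractable NH4+ under reduced
# precipitation (treatment) vs. ambient (control), and LnR variances.
treat = np.array([12.0, 8.5, 20.1, 5.2])
ctrl = np.array([9.0, 7.9, 15.0, 4.6])
var_lnr = np.array([0.02, 0.05, 0.01, 0.04])

lnr = np.log(treat / ctrl)                 # per-study effect sizes
w = 1.0 / var_lnr                          # inverse-variance weights
mean_effect = np.sum(w * lnr) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))

print(f"pooled LnR = {mean_effect:.3f} +/- {1.96 * se:.3f} (95% CI)")
print(f"i.e. a {100 * (np.exp(mean_effect) - 1):.1f}% overall change")
```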
Recovery of viruses from water by a modified flocculation procedure for second-step concentration.
Dahling, D R; Wright, B A
1986-01-01
A reduction in virus recovery efficiencies stemming from a change in the commercial processing of powdered beef extract was reversed by the addition of Celite analytical filter aid. Supplementing beef extract with this silicate is recommended as a modification to the organic flocculation procedure for second-step concentration in monitoring for waterborne viruses. Considerable differences in virus recovery were found among lots of beef extract and Celite preparations; this indicates that the performance of each lot of these substances should be checked before use. PMID:3015024
Method for reduction of selected ion intensities in confined ion beams
Eiden, Gregory C.; Barinaga, Charles J.; Koppenaal, David W.
1998-01-01
A method for producing an ion beam having an increased proportion of analyte ions compared to carrier gas ions is disclosed. Specifically, the method has the step of addition of a charge transfer gas to the carrier analyte combination that accepts charge from the carrier gas ions yet minimally accepts charge from the analyte ions thereby selectively neutralizing the carrier gas ions. Also disclosed is the method as employed in various analytical instruments including an inductively coupled plasma mass spectrometer.
Method for reduction of selected ion intensities in confined ion beams
Eiden, G.C.; Barinaga, C.J.; Koppenaal, D.W.
1998-06-16
A method for producing an ion beam having an increased proportion of analyte ions compared to carrier gas ions is disclosed. Specifically, the method has the step of addition of a charge transfer gas to the carrier analyte combination that accepts charge from the carrier gas ions yet minimally accepts charge from the analyte ions thereby selectively neutralizing the carrier gas ions. Also disclosed is the method as employed in various analytical instruments including an inductively coupled plasma mass spectrometer. 7 figs.
NASA Astrophysics Data System (ADS)
Olyazadeh, Roya; van Westen, Cees; Bakker, Wim H.; Aye, Zar Chi; Jaboyedoff, Michel; Derron, Marc-Henri
2014-05-01
Natural hazard risk management requires decision making in several stages. Decision making on alternatives for risk reduction planning starts with an intelligence phase for recognizing the decision problems and identifying the objectives. Developing the alternatives and assigning variables to each alternative by the decision makers constitute the design phase. The final phase evaluates the optimal choice by comparing the alternatives, defining indicators, assigning a weight to each and ranking them. This process is referred to as Multi-Criteria Decision Making analysis (MCDM), Multi-Criteria Evaluation (MCE) or Multi-Criteria Analysis (MCA). In the framework of the ongoing 7th Framework Program "CHANGES" (2011-2014, Grant Agreement No. 263953) of the European Commission, a Spatial Decision Support System is under development that aims to analyse changes in hydro-meteorological risk and to provide support in selecting the best risk reduction alternative. This paper describes the module for Multi-Criteria Decision Making analysis (MCDM) that incorporates monetary and non-monetary criteria in the analysis of the optimal alternative. The MCDM module consists of several components. The first step is to define criteria (or indicators), which are subdivided into disadvantages (criteria that indicate the difficulty of implementing the risk reduction strategy, also referred to as costs) and advantages (criteria that indicate the favorability, also referred to as benefits). In the next step, the stakeholders can use the developed web-based tool for prioritizing criteria and building the decision matrix. Public participation plays a role in decision making, and this is also planned through the use of a mobile web version through which the general local public can indicate their agreement with the proposed alternatives. The application is being tested through a case study related to risk reduction in a mountainous valley in the Alps affected by flooding. Four alternatives are evaluated in this case study, namely: construction of defense structures, relocation, implementation of an early warning system, and spatial planning regulations. Some of the criteria are determined partly in other modules of the CHANGES SDSS, such as the costs of implementation, the risk reduction in monetary values, and societal risk. Other criteria, which could be environmental, economic, cultural, or perception-related in nature, are defined by different stakeholders such as local authorities, expert organizations, the private sector, and the local public. In the next step, the stakeholders weight the importance of the criteria by pairwise comparison and visualize the decision matrix, which tabulates the criteria against the alternatives' values. Finally, alternatives are ranked by the Analytic Hierarchy Process (AHP) method. We expect that this approach will ease the decision makers' work and reduce costs, because the process is more transparent, more accurate and involves a group decision; in that way there will be more confidence in the overall decision-making process. Keywords: MCDM, Analytic Hierarchy Process (AHP), SDSS, Natural Hazard Risk Management
Dual processing and diagnostic errors.
Norman, Geoff
2009-09-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these examples remain available and retrievable individually. I then review studies of clinical reasoning based on these theories and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to a consistent reduction in error rates.
Sputtering of rough surfaces: a 3D simulation study
NASA Astrophysics Data System (ADS)
von Toussaint, U.; Mutzke, A.; Manhard, A.
2017-12-01
The lifetime of plasma-facing components is critical for future magnetic confinement fusion power plants. A key process limiting the lifetime of the first wall is sputtering by energetic ions. To provide consistent modeling of the sputtering process for realistic geometries, the SDTrimSP code has been extended to enable the processing of analytic as well as measured arbitrary 3D surface morphologies. The code has been applied to study the effect of the ion impact angle on the sputter yield of rough surfaces, as well as the influence of the aspect ratio of surface structures on the 2D distribution of the local sputtering yields. Depending on the surface morphology, reductions of the effective sputter yield to less than 25% have been observed in the simulation results.
Atlanta congestion reduction demonstration national evaluation plan.
DOT National Transportation Integrated Search
2011-04-01
This report provides an analytic framework for evaluating the Atlanta Congestion Reduction Demonstration (CRD) under the United States Department of Transportation (U.S. DOT) Urban Partnership Agreement (UPA) and CRD Programs. The Atlanta CRD project...
This is the first phase of a potentially multi-phase project aimed at identifying scientific methodologies that will lead to the development of innovative analytical tools supporting the analysis of control strategy effectiveness, namely accountability. Significant reductions i...
NASA Technical Reports Server (NTRS)
White, Allan L.; Palumbo, Daniel L.
1991-01-01
Semi-Markov processes have proved to be an effective and convenient tool for constructing models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a modeling and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. The trimming method is easy to implement, and its error bound is easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.
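For context, the failure probability of a small Markov reliability model can be computed directly from its generator matrix via the matrix exponential; trimming matters precisely when models grow too large for this to stay tractable. The three-state model and rates below are hypothetical, not from the cited work.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state model: 0 = operational, 1 = fault detected and
# reconfiguring, 2 = system failure (absorbing).
lam, mu, lam2 = 1e-4, 1.0, 1e-4      # fault, recovery, second-fault rates
Q = np.array([[-lam,          lam,  0.0],
              [  mu, -(mu + lam2), lam2],
              [ 0.0,          0.0,  0.0]])   # generator (rows sum to zero)

t = 10.0                             # mission time (hours)
P = expm(Q * t)                      # transition probability matrix
print("P(system failure by t) =", P[0, 2])
```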
Reduction of diffusional defocusing in hydrodynamically focused flows
Affleck, Rhett L.; Demas, James N.; Goodwin, Peter M.; Keller, Richard; Wu, Ming
1998-01-01
An analyte fluid stream with first molecules having relatively low molecular weight and a corresponding high coefficient of diffusion has reduced diffusional defocusing out of an analyte fluid stream. The analyte fluid stream of first molecules is associated with second molecules of relatively high molecular weight having a relatively low coefficient of diffusion and a binding constant effective to associate with the first molecules. A focused analyte fluid stream is maintained since the combined molecular weight of the associated first and second molecules is effective to minimize diffusion of the first molecules out of the analyte fluid stream.
Reduction of diffusional defocusing in hydrodynamically focused flows
Affleck, R.L.; Demas, J.N.; Goodwin, P.M.; Keller, R.; Wu, M.
1998-09-01
An analyte fluid stream with first molecules having relatively low molecular weight and a corresponding high coefficient of diffusion has reduced diffusional defocusing out of an analyte fluid stream. The analyte fluid stream of first molecules is associated with second molecules of relatively high molecular weight having a relatively low coefficient of diffusion and a binding constant effective to associate with the first molecules. A focused analyte fluid stream is maintained since the combined molecular weight of the associated first and second molecules is effective to minimize diffusion of the first molecules out of the analyte fluid stream. 6 figs.
Application of stiffened cylinder analysis to ATP interior noise studies
NASA Technical Reports Server (NTRS)
Wilby, E. G.; Wilby, J. F.
1983-01-01
An analytical model developed to predict the interior noise of propeller-driven aircraft was applied to experimental configurations of a Fairchild Swearingen Metro II fuselage exposed to simulated propeller excitation. The floor structure of the test fuselage was of unusual construction, mounted on air springs. As a consequence, the analytical model was extended to include a floor treatment transmission coefficient which could be used to describe vibration attenuation through the mounts. Good agreement was obtained between measured and predicted noise reductions when the floor treatment transmission loss was about 20 dB, a value which is consistent with the vibration attenuation provided by the mounts. The analytical model was also adapted to allow the prediction of noise reductions associated with boundary layer excitation as well as propeller and reverberant noise.
Urban Partnership Agreement and Congestion Reduction Demonstration : National Evaluation Framework
DOT National Transportation Integrated Search
2008-11-21
This report provides an analytical framework for evaluating six deployments under the United States Department of Transportation (U.S. DOT) Urban Partnership Agreement (UPA) and Congestion Reduction Demonstration (CRD) Programs. The six UPA/CRD sites...
Karatapanis, Andreas E; Petrakis, Dimitrios E; Stalikas, Constantine D
2012-05-13
Magnetically driven separation techniques have received considerable attention in recent decades because of their great application potential. In this study, we investigate the application of an unmodified layered magnetic Fe/Fe(2)O(3) nanoscavenger for the analytical enrichment and determination of sub-parts-per-billion concentrations of Cd(II), Pb(II), Ni(II), Cr(VI) and As(V) in water samples. The synthesized nanoscavenger was characterized by BET, TGA, XRD and IR, and the parameters influencing the extraction and recovery of the preconcentration process were assessed by atomic absorption spectrometry. A possible mechanism for the enrichment of heavy metals on Fe/Fe(2)O(3) is proposed, involving adsorption and reduction as the dominant processes. The nanoscale size offers a large surface area and high reactivity for sorption and reduction reactions. The limits of detection obtained for the metals studied were in the range of 20-125 ng L(-1), and the applicability of the nanomaterial was verified using a real sample matrix. The method is environmentally friendly, as only 15 mg of nanoscavenger are used, no organic solvent is required for the extraction, and the experiment is performed without the need for filtration or the preparation of packed preconcentration columns. Copyright © 2012 Elsevier B.V. All rights reserved.
The Impact of Big Data on Chronic Disease Management.
Bhardwaj, Niharika; Wodajo, Bezawit; Spano, Anthony; Neal, Symaron; Coustasse, Alberto
Population health management and specifically chronic disease management depend on the ability of providers to prevent development of high-cost and high-risk conditions such as diabetes, heart failure, and chronic respiratory diseases and to control them. The advent of big data analytics has potential to empower health care providers to make timely and truly evidence-based informed decisions to provide more effective and personalized treatment while reducing the costs of this care to patients. The goal of this study was to identify real-world health care applications of big data analytics to determine its effectiveness in both patient outcomes and the relief of financial burdens. The methodology for this study was a literature review utilizing 49 articles. Evidence of big data analytics being largely beneficial in the areas of risk prediction, diagnostic accuracy and patient outcome improvement, hospital readmission reduction, treatment guidance, and cost reduction was noted. Initial applications of big data analytics have proved useful in various phases of chronic disease management and could help reduce the chronic disease burden.
NASA Astrophysics Data System (ADS)
Forbes, Richard G.
2017-03-01
With a large-area field electron emitter, when an individual post-like emitter is sufficiently resistive, and current through it is sufficiently large, then voltage loss occurs along it. This letter provides a simple analytical and conceptual demonstration that this voltage loss is directly and inextricably linked to a reduction in the field enhancement factor (FEF) at the post apex. A formula relating apex-FEF reduction to this voltage loss was obtained in the paper by Minoux et al. [Nano Lett. 5, 2135 (2005)] by fitting to numerical results from a Laplace solver. This letter derives the same formula analytically, by using a "floating sphere" model. The analytical proof brings out the underlying physics more clearly and shows that the effect is a general phenomenon, related to reduction in the magnitude of the surface charge in the most protruding parts of an emitter. Voltage-dependent FEF-reduction is one cause of "saturation" in Fowler-Nordheim (FN) plots. Another is a voltage-divider effect, due to measurement-circuit resistance. An integrated theory of both effects is presented. Both together, or either by itself, can cause saturation. Experimentally, if saturation occurs but voltage loss is small (<20 V, say), then saturation is more probably due to FEF-reduction than voltage division. In this case, existing treatments of electrostatic interaction ("shielding") between closely spaced emitters may need modification. Other putative causes of saturation exist, so the present theory is a partial story. Its extension seems possible and could lead to a more general physical understanding of the causes of FN-plot saturation.
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
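A minimal sketch of the neuro-analytic idea on a toy process: a known analytic model captures the linear part, and a small neural network is trained on the residual to capture the unknown characteristics. Plain gradient descent on a mean squared error is used here; the patented scaled equation error minimization and the statistical qualification step are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy process: known linear part plus an unknown nonlinearity.
x = rng.uniform(-1, 1, size=(500, 1))
y = 2.0 * x + 0.3 * np.sin(4.0 * x) + 0.01 * rng.normal(size=(500, 1))

analytic = 2.0 * x                   # known process characteristics
residual = y - analytic              # what the network must capture

# One-hidden-layer network trained by plain gradient descent on MSE.
W1, b1 = rng.normal(scale=0.5, size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - residual            # gradient of 0.5 * MSE w.r.t. pred
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

hybrid = analytic + np.tanh(x @ W1 + b1) @ W2 + b2   # neuro-analytic model
print("hybrid model RMSE:", np.sqrt(np.mean((hybrid - y) ** 2)))
```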
da Fonseca, E.M.; Neto, J.A. Baptista; McAlister, J.J.; Smith, B.J.; Crapez, M.A.C.
2014-01-01
Processes involving heavy metals and other contaminants continue to present unsolved environmental questions. To advance the understanding of geochemical processes that involve the bioavailability of contaminants, cores were collected in the Rodrigo de Freitas lagoon and analyzed for bacterial activity and metal concentrations. The results suggest an extremely reducing environment in which organic substances seem to be the predominant agents responsible for this geochemical process. Analytical data showed sulphate reduction to be the main agent driving this process, since sulphate-reducing bacteria were found to be active in all of the samples analyzed. Esterase enzyme production did not signal an influence of heavy metal and hydrocarbon concentrations, and heavy metals were found to be unavailable to biota. However, the correlation between the results for bacterial biomass and the potentially mobile percentage of the total Ni concentrations suggests a negative impact. PMID:25477931
NASA Technical Reports Server (NTRS)
Brand, R. R.; Barker, J. L.
1983-01-01
A multistage sampling procedure using image processing, geographical information systems, and analytical photogrammetry is presented which can be used to guide the collection of representative, high-resolution spectra and discrete reflectance targets for future satellite sensors. The procedure is general and can be adapted to characterize areas as small as minor watersheds and as large as multistate regions. Beginning with a user-determined study area, successive reductions in size and spectral variation are performed using image analysis techniques on data from the Multispectral Scanner, orbital and simulated Thematic Mapper, low altitude photography synchronized with the simulator, and associated digital data. An integrated image-based geographical information system supports processing requirements.
Parker, Richard; Markov, Marko
2015-09-01
This article presents a novel modality for accelerating the repair of tendon and ligament lesions by means of a specifically designed electromagnetic field in an equine model. This novel therapeutic approach employs a delivery system that induces a specific electrical signal from an external magnetic field derived from Superconductive QUantum Interference Device (SQUID) measurements of injured vs. healthy tissue. Evaluation of this therapy technique is enabled by a proposed new technology described as Predictive Analytical Imagery (PAI™). This technique examines an ultrasound grayscale image and seeks to evaluate it by means of look-ahead predictive algorithms and digital signal processing. The net result is a significant reduction in background noise and the production of a high-resolution grayscale or digital image.
Informing Mexico's Distributed Generation Policy with System Advisor Model (SAM) Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aznar, Alexandra Y; Zinaman, Owen R; McCall, James D
The Government of Mexico recognizes the potential for clean distributed generation (DG) to meaningfully contribute to Mexico's clean energy and emissions reduction goals. However, important questions remain about how to fairly value DG and foster inclusive and equitable market growth that is beneficial to investors, electricity ratepayers, electricity distributors, and society. The U.S. National Renewable Energy Laboratory (NREL) has partnered with power sector institutions and stakeholders in Mexico to provide timely analytical support and expertise to help inform policymaking processes on clean DG. This document describes two technical assistance interventions that used the System Advisor Model (SAM) to inform Mexico's DG policymaking processes with a focus on rooftop solar regulation and policy.
Zhang, Lei; Yue, Hong-Shui; Ju, Ai-Chun; Ye, Zheng-Liang
2016-10-01
Currently, near infrared spectroscopy (NIRS) is considered an efficient tool for achieving process analytical technology (PAT) in the manufacture of traditional Chinese medicine (TCM) products. In this article, the NIRS-based process analytical system for the production of salvianolic acid for injection was introduced. The design of the process analytical system was described in detail, including the selection of monitored processes and testing modes, and the potential risks that should be avoided. Moreover, the development of the related technologies was also presented, including the establishment of monitoring methods for the elution steps of the polyamide resin and macroporous resin chromatography processes, as well as a rapid analysis method for finished products. Based on the authors' experience of research and work, several issues in the application of NIRS to process monitoring and control in TCM production were then raised, and some potential solutions were also discussed. These issues include building a technical team for the process analytical system, designing the process analytical system in the manufacture of TCM products, standardizing the NIRS-based analytical methods, and improving the management of the process analytical system. Finally, the prospects for the application of NIRS in the TCM industry were put forward. Copyright© by the Chinese Pharmaceutical Association.
Is Analytic Information Processing a Feature of Expertise in Medicine?
ERIC Educational Resources Information Center
McLaughlin, Kevin; Rikers, Remy M.; Schmidt, Henk G.
2008-01-01
Diagnosing begins by generating an initial diagnostic hypothesis by automatic information processing. Information processing may stop here if the hypothesis is accepted, or analytical processing may be used to refine the hypothesis. This description portrays analytic processing as an optional extra in information processing, leading us to…
NASA Technical Reports Server (NTRS)
Rana, D. S.
1980-01-01
The data reduction capabilities of the current data reduction programs were assessed and a search for a more comprehensive system with higher data analytic capabilities was made. Results of the investigation are presented.
System automatically supplies precise analytical samples of high-pressure gases
NASA Technical Reports Server (NTRS)
Langdon, W. M.
1967-01-01
High-pressure-reducing and flow-stabilization system delivers analytical gas samples from a gas supply. The system employs parallel capillary restrictors for pressure reduction and downstream throttling valves for flow control. It is used in conjunction with a sampling valve and minimizes alterations of the sampled gas.
Mechanistic investigation of Fe(III) oxide reduction by low molecular weight organic sulfur species
NASA Astrophysics Data System (ADS)
Eitel, Eryn M.; Taillefert, Martial
2017-10-01
Low molecular weight organic sulfur species, often referred to as thiols, are known to be ubiquitous in aquatic environments and represent important chemical reductants of Fe(III) oxides. Thiols are excellent electron shuttles used during dissimilatory iron reduction, and in this capacity they could indirectly affect the redox state of sediments, release adsorbed contaminants via reductive dissolution, and influence the carbon cycle through alteration of bacterial respiration processes. Interestingly, the reduction of Fe(III) oxides by thiols has not been previously investigated under environmentally relevant conditions, likely due to analytical limitations associated with the detection of thiols and their oxidized products. In this study, a novel electrochemical method was developed to simultaneously determine thiol/disulfide pair concentrations in situ during the reduction of ferrihydrite in batch reactors. First-order rate laws with respect to initial thiol concentration were confirmed for Fe(III) oxyhydroxide reduction by four common thiols: cysteine, homocysteine, cysteamine, and glutathione. Zero-order dependence was determined for both Fe(III) oxyhydroxide and proton concentrations at circumneutral pH. A kinetic model detailing the molecular mechanism of the reaction was optimized with proposed intermediate surface structures. Although overall metal oxide reduction rate constants were inversely proportional to the complexity of the thiol structure, the extent of metal reduction increased with structural complexity, indicating that surface complexes play a significant role in the ability of these thiols to reduce iron. Taken together, these results demonstrate the importance of considering the molecular reaction mechanism at the iron oxide surface when investigating the potential for thiols to act as electron shuttles during dissimilatory iron reduction in natural environments.
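A first-order rate law in thiol with zero-order dependence on oxide and protons implies exponential thiol decay, [thiol](t) = [thiol]0 * exp(-k*t), so k can be recovered from a semilog fit. A sketch with hypothetical concentrations (not measured data):

```python
import numpy as np

t = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0])        # minutes
thiol = np.array([100.0, 78.0, 61.0, 37.0, 14.0, 5.2])  # uM, illustrative

# ln[thiol] = ln[thiol]0 - k*t: the slope of a linear fit gives -k.
k = -np.polyfit(t, np.log(thiol), 1)[0]
print(f"pseudo-first-order k = {k:.3f} 1/min, "
      f"half-life = {np.log(2) / k:.1f} min")
```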
Spacelab Mission Implementation Cost Assessment (SMICA)
NASA Technical Reports Server (NTRS)
Guynes, B. V.
1984-01-01
A total savings of approximately 20 percent is attainable if: (1) mission management and ground processing schedules are compressed; (2) the equipping, staffing, and operating of the Payload Operations Control Center is revised, and (3) methods of working with experiment developers are changed. The development of a new mission implementation technique, which includes mission definition, experiment development, and mission integration/operations, is examined. The Payload Operations Control Center is to relocate and utilize new computer equipment to produce cost savings. Methods of reducing costs by minimizing the Spacelab and payload processing time during pre- and post-mission operation at KSC are analyzed. The changes required to reduce costs in the analytical integration process are studied. The influence of time, requirements accountability, and risk on costs is discussed. Recommendation for cost reductions developed by the Spacelab Mission Implementation Cost Assessment study are listed.
Implementation of the analytical hierarchy process with VBA in ArcGIS
NASA Astrophysics Data System (ADS)
Marinoni, Oswald
2004-07-01
Decisions on land use have become progressively more difficult in recent decades. The main reasons for this development lie in the increasing population, combined with an increasing demand for new land and resources, and in the growing consciousness of sustainable land and resource use. The steady reduction of valuable land leads to an increase in conflicts in land use decision-making processes, since more interests are affected and therefore more stakeholders with different land use interests and different valuation criteria are involved in the decision-making process. In the course of such a decision process, all identified criteria are weighted according to their relative importance. However, assigning weights to the relevant criteria quickly becomes a difficult task when a greater number of criteria are considered. Especially with regard to land use decisions, where decision makers expect some kind of mapped result, it is therefore useful to use procedures that not only help to derive criteria weights but also accelerate the visualisation and mapping of land use assessment results. Both aspects can easily be facilitated in a GIS. This paper focuses on the development of an ArcGIS VBA macro which enables the user to derive criteria weights with the analytical hierarchy process and which allows a mapping of the land use assessment results by a weighted summation of GIS raster data sets. A dynamic link library for the calculation of the eigenvalues and eigenvectors of a square matrix is provided.
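In Python terms, the macro's weighted summation of raster data sets is a weighted linear combination of standardized criterion layers using AHP-derived weights. A minimal sketch with random stand-in rasters and illustrative weights (not the VBA macro itself):

```python
import numpy as np

rng = np.random.default_rng(4)

# Three hypothetical criterion rasters (4x4 cells), each standardized
# to [0, 1] suitability scores.
criteria = rng.random((3, 4, 4))

# AHP-derived priority weights for the three criteria (illustrative).
weights = np.array([0.62, 0.24, 0.14])     # sums to 1

# Weighted linear combination: one composite value per raster cell.
composite = np.tensordot(weights, criteria, axes=1)
print(composite.round(2))
```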
Benzi, Roberto; Ching, Emily S C; Horesh, Nizan; Procaccia, Itamar
2004-02-20
A simple model of the effect of polymer concentration on the amount of drag reduction in turbulence is presented, simulated, and analyzed. The qualitative phase diagram of drag coefficient versus Reynolds number (Re) is recaptured in this model, including the theoretically elusive onset of drag reduction and the maximum drag reduction (MDR) asymptote. The Re-dependent drag and the MDR are analytically explained, and the dependence of the amount of drag on material parameters is rationalized.
Prado-Silva, Leonardo; Cadavez, Vasco; Gonzales-Barron, Ursula; Rezende, Ana Carolina B.
2015-01-01
The aim of this study was to perform a meta-analysis of the effects of sanitizing treatments of fresh produce on Salmonella spp., Escherichia coli O157:H7, and Listeria monocytogenes. From 55 primary studies found to report on such effects, 40 were selected based on specific criteria, yielding more than 1,000 data points on mean log reductions of these three bacterial pathogens that impair the safety of fresh produce. Data were partitioned to build three meta-analytical models that allow the assessment of differences in mean log reductions among pathogens, fresh produce, and sanitizers. Moderating variables assessed in the meta-analytical models included type of fresh produce, type of sanitizer, concentration, and treatment time and temperature. Further, a proposal was made to classify the sanitizers according to bactericidal efficacy by means of a meta-analytical dendrogram. The results indicated that both time and temperature significantly affected the mean log reductions of the sanitizing treatment (P < 0.0001). In general, sanitizer treatments led to lower mean log reductions when applied to leafy greens (for example, 0.68 log reductions [0.00 to 1.37] achieved in lettuce) compared to other, nonleafy vegetables (for example, 3.04 mean log reductions [2.32 to 3.76] obtained for carrots). Among the pathogens, E. coli O157:H7 was more resistant to ozone (1.6 mean log reductions), while L. monocytogenes and Salmonella presented high resistance to organic acids, such as citric acid, acetic acid, and lactic acid (∼3.0 mean log reductions). With regard to the sanitizers, slightly acidic electrolyzed water, acidified sodium chlorite, and gaseous chlorine dioxide clustered together, indicating that they possessed the strongest bactericidal effect. These results are an important step toward a global understanding of the effectiveness of sanitizers for the microbial safety of fresh produce.
Sarkar, Sahotra
2015-10-01
This paper attempts a critical reappraisal of Nagel's (1961, 1970) model of reduction taking into account both traditional criticisms and recent defenses. This model treats reduction as a type of explanation in which a reduced theory is explained by a reducing theory after their relevant representational items have been suitably connected. In accordance with the deductive-nomological model, the explanation is supposed to consist of a logical deduction. Nagel was a pluralist about both the logical form of the connections between the reduced and reducing theories (which could be conditionals or biconditionals) and their epistemological status (as analytic connections, conventions, or synthetic claims). This paper defends Nagel's pluralism on both counts and, in the process, argues that the multiple realizability objection to reductionism is misplaced. It also argues that the Nagel model correctly characterizes reduction as a type of explanation. However, it notes that logical deduction must be replaced by a broader class of inferential techniques that allow for different types of approximation. Whereas Nagel (1970), in contrast to his earlier position (1961), recognized the relevance of approximation, he did not realize its full import for the model. Throughout the paper two case studies are used to illustrate the arguments: the putative reduction of classical thermodynamics to the kinetic theory of matter and that of classical genetics to molecular biology. Copyright © 2015. Published by Elsevier Ltd.
Peng, Wei; Zhao, Liuwei; Liu, Fengmao; Xue, Jiaying; Li, Huichen; Shi, Kaiwei
2014-01-01
The changes in imidacloprid, pyraclostrobin, azoxystrobin and fipronil residues were studied to investigate the carryover of pesticide residues in winter jujube during paste processing. A multi-residue analytical method for winter jujube was developed based on the QuEChERS approach. The recoveries for the pesticides were between 87.5% and 116.2%, and LODs ranged from 0.002 to 0.1 mg kg⁻¹. The processing factor (Pf) is defined as the ratio of the pesticide residue concentration in the paste to that in winter jujube. Pf was higher than 1 for the removal of extra water, while for the other steps it was generally less than 1, indicating that the whole process resulted in lower pesticide residue levels in the paste. Peeling was the critical step for pesticide removal. Processing factors varied among the different pesticides studied. The results are useful for optimising the processing techniques in a manner that leads to considerable pesticide residue reduction.
NASA Astrophysics Data System (ADS)
Zahari, Zakirah Mohd; Zubaidah Adnan, Siti; Kanthasamy, Ramesh; Saleh, Suriyati; Samad, Noor Asma Fazli Abdul
2018-03-01
The specification of a crystal product is usually given in terms of the crystal size distribution (CSD). To this end, an optimal cooling strategy is necessary to achieve the target CSD. Direct design control involving an analytical CSD estimator is one of the approaches that can be used to generate the set-point. However, the effects of temperature on the crystal growth rate are neglected in that estimator, so the temperature dependence of the crystal growth rate needs to be considered in order to provide an accurate set-point. The objective of this work is to extend the analytical CSD estimator with an Arrhenius expression that covers the effect of temperature on the growth rate. The application of this work is demonstrated through a potassium sulphate crystallisation process. Based on the specified target CSD, the extended estimator is capable of generating the required set-point, and a proposed controller successfully maintained the operation at the set-point to achieve the target CSD. Comparison with other cooling strategies shows a reduction of up to 18.2% in the total number of undesirable crystals generated from secondary nucleation, relative to a linear cooling strategy.
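A hedged sketch of the temperature-dependent growth rate implied by the abstract; the power-law dependence on supersaturation is a common convention, and the symbols are generic rather than the authors' own:

```latex
G(T, \Delta c) \;=\; k_{g0}\,\exp\!\left(-\frac{E_a}{RT}\right)\Delta c^{\,g}
```

where $G$ is the crystal growth rate, $E_a$ an activation energy, $R$ the gas constant, $T$ the temperature, and $\Delta c$ the supersaturation.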
Bridging worlds: participatory thinking in Jungian context.
Brown, Robin S
2017-04-01
Introducing the 'participatory' paradigm associated with the work of transpersonalists Richard Tarnas and Jorge Ferrer, the author outlines an approach to Jung's archetypal thinking that might offer a more adequate basis in which to ground a non-reductive approach to practice. In order to demonstrate the relevance of this outlook at the present time, the author begins by examining recent debates concerning the nature of 'truth' in the clinical setting. Reflecting on the difficulties analysts face in attempting to maintain professional authority without falling into an implicit authoritarianism, it is argued that any approach to therapy seeking to orient itself towards 'the unconscious' must posit the challenges of pluralism as a central concern for practice. With reference to the relationship between analytical psychology and the psychoanalytic mainstream, attention is drawn to the theoretical problems raised by the relational commitment to constructivist epistemologies, and a consequent tendency towards biological reductionism. Turning to the Jungian literature, similar tensions are observed at play in the present state of analytical psychology. Drawing attention to the process-oriented qualities of Jung's work, it is suggested that the speculative nature of Jung's psychology offers a more adequate basis for contemporary practice than might be assumed. © 2017, The Society of Analytical Psychology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ElNaggar, Mariam S.; Van Berkel, Gary J.
2011-08-10
The recently discovered sample plug formation and injection operational mode of a continuous flow, coaxial tube geometry, liquid microjunction surface sampling probe (LMJ-SSP) (J. Am. Soc. Mass Spectrom, 2011) was further characterized and applied for concentration and mixing of analyte extracted from multiple areas on a surface and for nanoliter-scale chemical reactions of sampled material. A transparent LMJ-SSP was constructed and colored analytes were used so that the surface sampling process, plug formation, and the chemical reactions could be visually monitored at the sampling end of the probe before being analyzed by mass spectrometry of the injected sample plug. Injection plug peak widths were consistent for plug hold times as long as the 8 minute maximum attempted (RSD below 1.5%). Furthermore, integrated injection peak signals were not significantly different for the range of hold times investigated. The ability to extract and completely mix individual samples within a fixed volume at the sampling end of the probe was demonstrated and a linear mass spectral response to the number of equivalent analyte spots sampled was observed. Lastly, using the color and mass changing chemical reduction of the redox dye 2,6-dichlorophenol-indophenol with ascorbic acid, the ability to sample, concentrate, and efficiently run reactions within the same plug volume within the probe was demonstrated.
Catalytic mechanism in cyclic voltammetry at disc electrodes: an analytical solution.
Molina, Angela; González, Joaquín; Laborda, Eduardo; Wang, Yijun; Compton, Richard G
2011-08-28
The theory of cyclic voltammetry at disc electrodes and microelectrodes is developed for a system where the electroactive reactant is regenerated in solution using a catalyst. This catalytic process is of wide importance, not least in chemical sensing, and it can be characterized by the resulting peak current which is always larger than that of a simple electrochemical reaction; in contrast the reverse peak is always relatively diminished in size. From the theoretical point of view, the problem involves a complex physical situation with two-dimensional mass transport and non-uniform surface gradients. Because of this complexity, hitherto the treatment of this problem has been tackled mainly by means of numerical methods and so no analytical expression was available for the transient response of the catalytic mechanism in cyclic voltammetry when disc electrodes, the most popular practical geometry, are used. In this work, this gap is filled by presenting an analytical solution for the application of any sequence of potential pulses and, in particular, for cyclic voltammetry. The induction principle is applied to demonstrate mathematically that the superposition principle applies whatever the geometry of the electrode, which enabled us to obtain an analytical equation valid whatever the electrode size and the kinetics of the catalytic reaction. The theoretical results obtained are applied to the experimental study of the electrocatalytic Fenton reaction, determining the rate constant of the reduction of hydrogen peroxide by iron(II).
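The catalytic mechanism treated here is conventionally written as the EC′ scheme below, in which a homogeneous reaction with the catalyst Z regenerates the electroactive species (standard textbook notation, not necessarily the authors'):

```latex
\mathrm{A} + e^- \rightleftharpoons \mathrm{B} \quad \text{(at the electrode)}, \qquad
\mathrm{B} + \mathrm{Z} \xrightarrow{\;k\;} \mathrm{A} + \text{products} \quad \text{(in solution)}
```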
Optical bandgap of semiconductor nanostructures: Methods for experimental data analysis
NASA Astrophysics Data System (ADS)
Raciti, R.; Bahariqushchi, R.; Summonte, C.; Aydinli, A.; Terrasi, A.; Mirabella, S.
2017-06-01
Determination of the optical bandgap (Eg) in semiconductor nanostructures is a key issue in understanding the extent of quantum confinement effects (QCE) on electronic properties, and it usually involves some analytical approximation in experimental data reduction and in the modeling of the light absorption processes. Here, we compare some of the analytical procedures frequently used to evaluate the optical bandgap from reflectance (R) and transmittance (T) spectra. Ge quantum wells and quantum dots embedded in SiO2 were produced by plasma enhanced chemical vapor deposition, and light absorption was characterized by UV-Vis/NIR spectrophotometry. R&T elaboration to extract the absorption spectra was conducted by two approximated methods, the single pass analysis (SPA) and the double pass analysis (DPA), followed by Eg evaluation through a linear fit of Tauc or Cody plots. Direct fitting of R&T spectra through a Tauc-Lorentz oscillator model is used for comparison. The methods and data are also discussed in terms of the light absorption process in the presence of QCE. The reported data show that, despite the approximation, the DPA approach joined with the Tauc plot gives reliable results, with clear advantages in terms of computational effort and understanding of QCE.
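For readers unfamiliar with the Tauc construction mentioned above, the following Python sketch shows the linear-fit-and-extrapolate step for an indirect-gap Tauc plot, (αhν)^(1/2) versus hν; the absorption data are synthetic placeholders, not the paper's measurements:

```python
import numpy as np

# Synthetic placeholder data: photon energy (eV) and absorption coefficient (1/cm).
hv = np.linspace(1.0, 3.0, 200)
Eg_true = 1.6
alpha = np.where(hv > Eg_true, 5e4 * (hv - Eg_true) ** 2 / hv, 0.0)

# Indirect-gap Tauc plot: (alpha * hv)^(1/2) versus hv.
y = np.sqrt(alpha * hv)

# Fit a straight line to the rising edge and extrapolate to y = 0.
mask = (y > 0.2 * y.max()) & (y < 0.8 * y.max())
slope, intercept = np.polyfit(hv[mask], y[mask], 1)
Eg_fit = -intercept / slope  # the x-intercept gives the optical bandgap

print(f"Fitted Eg = {Eg_fit:.3f} eV")  # ~1.6 eV for this synthetic data
```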
Zhang, Ruibin; Qian, Xin; Zhu, Wenting; Gao, Hailong; Hu, Wei; Wang, Jinhua
2014-09-09
In the beginning of the 21st century, the deterioration of water quality in Taihu Lake, China, caused widespread concern. The primary source of pollution in Taihu Lake is river inflows, so effective pollution load reduction scenarios need to be implemented in these rivers in order to improve the water quality of the lake. It is important to select appropriate pollution load reduction scenarios for achieving particular goals, and the aim of this study was to facilitate that selection. The QUAL2K model for river water quality was used to simulate the effects of a range of pollution load reduction scenarios in the Wujin River, one of the major inflow rivers of Taihu Lake. The model was calibrated for the year 2010 and validated for the year 2011. The pollution load reduction scenarios were assessed using an analytic hierarchy process, and increasing rates of evaluation indicators were predicted using the Delphi method. The results showed that control of pollution at the source is the optimal method for pollution prevention and control, whereas the method of "Treatment after Pollution" has adverse environmental, social and ecological effects. The method applied in this study can assist environmental managers in selecting suitable pollution load reduction scenarios for achieving various objectives.
Orlowska, Ewelina; Roller, Alexander; Pignitter, Marc; Jirsa, Franz; Krachler, Regina; Kandioller, Wolfgang; Keppler, Bernhard K
2017-01-15
A series of monomeric and dimeric Fe(III) complexes with O,O-, O,N-, and O,S-coordination motifs has been prepared and characterized by standard analytical methods in order to elucidate their potential to act as model compounds for aquatic humic acids. Because of the postulated reduction of iron in humic acids and its subsequent uptake by microorganisms, the redox behavior of the models was investigated with cyclic voltammetry. Most of the investigated compounds showed iron reduction potentials accessible to biological reducing agents. Additionally, the observed reduction processes were predominantly irreversible, suggesting that subsequent reactions can take place after reduction of the iron center. The stability of the synthesized complexes in pure water and artificial seawater was also monitored from 24 h up to 21 days by means of UV-Vis spectrometry. Several complexes remained stable even after 21 days, showing only partial precipitation, but some of them showed changes in the UV-Vis spectra already after 24 h, which were connected to protonation/deprotonation processes as well as redox processes and degradation of the complexes. The ability to act as an iron source for primary producers was tested in algal growth experiments with two marine algae species, Chlorella salina and Prymnesium parvum. Some of the compounds showed effects on the algal cultures comparable to those of natural humic acids and better than those of the samples kept under ideal conditions. These findings help clarify which functional groups of humic acids could be responsible for the reversible iron binding and transport in aquatic humic substances. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Apparatus for reduction of selected ion intensities in confined ion beams
Eiden, Gregory C.; Barinaga, Charles J.; Koppenaal, David W.
2001-01-01
An apparatus for producing an ion beam having an increased proportion of analyte ions compared to carrier gas ions is disclosed. Specifically, the apparatus has an ion trap or a collision cell containing a reagent gas, wherein the reagent gas accepts charge from the carrier gas ions, thereby selectively neutralizing them. Also disclosed is the collision cell as employed in various locations within analytical instruments, including an inductively coupled plasma mass spectrometer.
NASA Astrophysics Data System (ADS)
Fisher, Mark E.; la Grone, Marcus; Sikes, John
2003-09-01
A sensor (known as Fido) that utilizes amplification of fluorescence quenching as the transduction mechanism for ultra-trace detection of nitroaromatic compounds associated with landmines has been described previously. Previous sensor prototypes utilized a single band of amplifying polymer deployed inside a capillary waveguide to form the sensing element of the detector. A new prototype has been developed that incorporates multiple, discrete bands of different amplifying polymers deployed in a linear array inside the capillary. Vapor-phase samples are introduced into the sensor as a sharp pulse via a gated inlet. As the vapor pulse is swept through the capillary by flow of a carrier gas, the pulse of analyte encounters the bands of polymer sequentially. If the sample contains nitroaromatic explosives, the bands of polymer will respond with a reduction in emission intensity proportional to the mass of analyte in the sample. Because the polymer bands are deployed serially, the analyte pulse does not reach the bands of polymer simultaneously. Hence, a temporal response pattern will be observed as the analyte pulse traverses the length of the capillary. In addition, the intensity of response for each band will vary, producing a ratiometric response. The temporal and ratiometric responses are characteristic of a given analyte, enhancing discrimination of target analytes from potential interferents. This should translate into a reduction in sensor false alarm rates.
Miller, Tyler M; Geraci, Lisa
2016-05-01
People may change their memory predictions after retrieval practice using naïve theories of memory and/or subjective experience (analytic and non-analytic processes, respectively). The current studies disentangled the contributions of each process. In one condition, learners studied paired associates, made a memory prediction, completed a short run of retrieval practice, and made a second prediction. In another condition, judges read about a yoked learner's retrieval practice performance but did not participate in retrieval practice and therefore could not use non-analytic processes for the second prediction. In Study 1, learners reduced their predictions following moderately difficult retrieval practice, whereas judges increased their predictions. In Study 2, learners made lower adjusted predictions than judges following both easy and difficult retrieval practice. In Study 3, judge-like participants used analytic processes to report adjusted predictions. Overall, the results suggest that non-analytic processes play a key role when participants reduce their predictions after retrieval practice. Copyright © 2016 Elsevier Inc. All rights reserved.
Size-tailored synthesis of silver quasi-nanospheres by kinetically controlled seeded growth.
Liu, Xiaxia; Yin, Yadong; Gao, Chuanbo
2013-08-20
This paper describes a simple and convenient procedure to synthesize monodisperse silver (Ag) quasi-nanospheres with size tunable in a range of 19-140 nm through a one-step seeded growth strategy. Acetonitrile was employed as a coordinating ligand of a Ag(I) salt in order to achieve a low concentration of elemental Ag after reduction and thus suppression of new nucleation events. Since the addition of the seeds significantly accelerates the reduction reaction of Ag(I) by ascorbic acid, the reaction kinetics was further delicately balanced by tuning the reaction temperature, which proved to be critical in producing Ag quasi-nanospheres with uniform size and shape. This synthesis is highly scalable, so that it provides a simple yet very robust process for producing Ag quasi-nanospheres for many biological, analytical, and catalytic applications which often demand samples in large quantity and widely tunable particle sizes.
NASA Astrophysics Data System (ADS)
Maghsoudi, Mohammad Javad; Mohamed, Z.; Sudin, S.; Buyamin, S.; Jaafar, H. I.; Ahmad, S. M.
2017-08-01
This paper proposes an improved input shaping scheme for efficient sway control of a nonlinear three dimensional (3D) overhead crane with friction, using the particle swarm optimization (PSO) algorithm. With this approach, a higher payload sway reduction is obtained because the input shaper is designed based on a complete nonlinear model, as compared to the analytical input shaping scheme derived from a linear second order model. Zero Vibration (ZV) and Distributed Zero Vibration (DZV) shapers are designed using both analytical and PSO approaches for sway control of rail and trolley movements. To test the effectiveness of the proposed approach, MATLAB simulations and experiments on a laboratory 3D overhead crane are performed under various conditions involving different cable lengths and sway frequencies. Their performances are assessed in terms of the maximum residual payload sway and the Integrated Absolute Error (IAE), which indicates the total payload sway of the crane. In experiments, the superiority of the proposed approach over the analytical design is shown by 30-50% reductions of the IAE values for rail and trolley movements, for both ZV and DZV shapers. In addition, simulation results show higher sway reductions with the proposed approach. It is revealed that the proposed PSO-based input shaping design provides higher payload sway reduction for a 3D overhead crane with friction as compared to commonly designed input shapers.
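For reference, the textbook two-impulse ZV shaper for a sway mode with natural frequency $\omega_n$ and damping ratio $\zeta$ is given below; the PSO-based design in the paper instead tunes the shaper against the full nonlinear crane model:

```latex
K = e^{-\zeta\pi/\sqrt{1-\zeta^{2}}}, \qquad
\begin{bmatrix} A_i \\ t_i \end{bmatrix}
= \begin{bmatrix} \dfrac{1}{1+K} & \dfrac{K}{1+K} \\[6pt] 0 & \dfrac{\pi}{\omega_n\sqrt{1-\zeta^{2}}} \end{bmatrix}
```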
Model-order reduction of lumped parameter systems via fractional calculus
NASA Astrophysics Data System (ADS)
Hollkamp, John P.; Sen, Mihir; Semperlotti, Fabio
2018-04-01
This study investigates the use of fractional order differential models to simulate the dynamic response of non-homogeneous discrete systems and to achieve efficient and accurate model order reduction. The traditional integer order approach to the simulation of non-homogeneous systems dictates the use of numerical solutions and often imposes stringent compromises between accuracy and computational performance. Fractional calculus provides an alternative approach where complex dynamical systems can be modeled with compact fractional equations that not only can still guarantee analytical solutions, but can also enable high levels of order reduction without compromising on accuracy. Different approaches are explored in order to transform the integer order model into a reduced order fractional model able to match the dynamic response of the initial system. Analytical and numerical results show that, under certain conditions, an exact match is possible and the resulting fractional differential models have both a complex and frequency-dependent order of the differential operator. The implications of this type of approach for both model order reduction and model synthesis are discussed.
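As background for the fractional operators discussed, one common definition (the Caputo derivative of order 0 < α < 1) is shown below; the complex, frequency-dependent orders found by the authors generalize this constant-order case:

```latex
{}^{C}\!D_{t}^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}}\,\mathrm{d}\tau, \qquad 0 < \alpha < 1
```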
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
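A minimal Python sketch of the first stage: an analytic model augmented with a neural network trained on what the analytic part misses. The toy process, the residual-training shortcut, and all names are illustrative assumptions; the patented system uses a specific scaled equation error minimization technique not reproduced here:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy process: a known analytic part plus an unknown nonlinearity plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, size=(500, 1))
y = 3.0 * x[:, 0] + 0.5 * np.sin(4.0 * x[:, 0]) + rng.normal(0, 0.02, 500)

def analytic_model(x):
    # Known first-principles part of the process (assumed here).
    return 3.0 * x[:, 0]

# The neural network captures what the analytic model misses (the residual).
residual = y - analytic_model(x)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(x, residual)

def neuro_analytic_model(x):
    return analytic_model(x) + net.predict(x)

# Residual error after augmentation should approach the noise floor.
print(np.std(y - neuro_analytic_model(x)))
```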
Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, T.E.
1996-01-01
The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate large-score sampling from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.
Corporate Delivery of a Global Smart Buildings Program
Fernandes, Samuel; Granderson, Jessica; Singla, Rupam; ...
2017-11-22
Buildings account for about 40 percent of the total energy consumption in the U.S. and emit approximately one third of greenhouse gas emissions. But they also offer tremendous potential for achieving significant greenhouse gas reductions with the right savings strategies. With an increasing amount of data from buildings and advanced computational and analytical abilities, buildings can be made “smart” to optimize energy consumption and occupant comfort. Smart buildings are often characterized as having a high degree of data and system integration, connectivity and control, as well as the advanced use of data analytics. These “smarts” can enable up to 10–20% savings in a building, and help ensure that they persist over time. In 2009, Microsoft Corporation launched the Energy-Smart Buildings (ESB) program with a vision to improve building operations services, security and accessibility in services, and new tenant applications and services that improve productivity and optimize energy use. The ESB program focused on fault diagnostics, advanced analytics and new organizational processes and practices to support their operational integration. In addition to the ESB program, Microsoft undertook capital improvement projects that made effective use of a utility incentive program and lab consolidations over the same duration. The ESB program began with a pilot at Microsoft's Puget Sound campus that identified significant savings of up to 6–10% in the 13 pilot buildings. The success of the pilot led to a global deployment of the program. Between 2009 and 2015, there was a 23.7% reduction in annual electricity consumption (kWh) at the Puget Sound campus with 18.5% of that resulting from the ESB and lab consolidations. This article provides the results of research conducted to assess the best-practice strategies that Microsoft implemented to achieve these savings, including the fault diagnostic routines that are the foundation of the ESB program and organizational change management practices. It also presents the process that was adopted to scale the ESB program globally. We conclude with recommendations for how these successes can be generalized and replicated by other corporate enterprises.
A Technical Survey on Optimization of Processing Geo Distributed Data
NASA Astrophysics Data System (ADS)
Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.
2018-04-01
With growing cloud services and technology, there is growth in the number of geographically distributed data centers that store large amounts of data. Analysis of geo-distributed data is required in various services for data processing, storage of essential information, etc. Processing this geo-distributed data and performing analytics on it is a challenging task, accompanied by issues in storage, computation and communication. The key issues to be dealt with are time efficiency, cost minimization, and utility maximization. This paper describes various optimization methods, like end-to-end multiphase, G-MR, etc., using techniques like Map-Reduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE and AMP (Ant Colony Optimization) to handle these issues. In this paper the various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving Quality of Service and on reducing computation and communication costs; and SAGE achieves performance improvement in processing geo-distributed data sets.
Biogeochemistry of heavy metals in contaminated excessively moistened soils (Analytical review)
NASA Astrophysics Data System (ADS)
Vodyanitskii, Yu. N.; Plekhanova, I. O.
2014-03-01
The biogeochemical behavior of heavy metals in contaminated, excessively moistened soils depends on the development of reducing conditions (either moderate or strong). Upon moderate biogenic reduction, Cr, a metal with variable valence, forms low-soluble compounds, which decreases its availability to plants and prevents its penetration into surface water and groundwater. The creation of artificial barriers for Cr fixation on contaminated sites is based on the stimulation of natural metal-reducing bacteria. Arsenic, being a metalloid with variable valence, is mobilized upon moderate biogenic reduction. The mobility of siderophilic heavy metals with constant valence grows under moderate reducing conditions owing to the dissolution of the iron (hydr)oxides that carry these metals. Zinc, which can enter the newly formed goethite lattice, is an exception. Strong reduction processes in organic, excessively moist and flooded soils (usually enriched in S) lead to the formation of low-soluble sulfides of heavy elements with both variable (As) and constant (Cu, Ni, Zn, and Pb) valence. When the water regime of overmoistened soils changes and they dry out, the sulfides of heavy metals are oxidized and the previously fixed metals are mobilized.
Titaley, Ivan A; Ogba, O Maduka; Chibwe, Leah; Hoh, Eunha; Cheong, Paul H-Y; Simonich, Staci L Massey
2018-03-16
Non-targeted analysis of environmental samples, using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC × GC/ToF-MS), poses significant data analysis challenges due to the large number of possible analytes. Non-targeted data analysis of complex mixtures is prone to human bias and is laborious, particularly for comparative environmental samples such as contaminated soil pre- and post-bioremediation. To address this research bottleneck, we developed OCTpy, a Python™ script that acts as a data reduction filter to automate GC × GC/ToF-MS data analysis from LECO® ChromaTOF® software and facilitates selection of analytes of interest based on peak area comparison between comparative samples. We used data from polycyclic aromatic hydrocarbon (PAH) contaminated soil, pre- and post-bioremediation, to assess the effectiveness of OCTpy in facilitating the selection of analytes that have formed or degraded following treatment. Using datasets from the soil extracts pre- and post-bioremediation, OCTpy selected, on average, 18% of the initial suggested analytes generated by the LECO® ChromaTOF® software Statistical Compare feature. Based on this list, 63-100% of the candidate analytes identified by a highly trained individual were also selected by OCTpy. This process was accomplished in several minutes per sample, whereas manual data analysis took several hours per sample. OCTpy automates the analysis of complex mixtures of comparative samples, reduces the potential for human error during heavy data handling and decreases data analysis time by at least tenfold. Copyright © 2018 Elsevier B.V. All rights reserved.
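OCTpy's internals are not detailed beyond the peak-area comparison, so the Python sketch below only illustrates the general idea of such a data reduction filter over comparative peak tables; the column names, table layout, and fold-change threshold are assumptions, not OCTpy's actual interface:

```python
import pandas as pd

def select_changed_analytes(pre: pd.DataFrame, post: pd.DataFrame,
                            min_fold_change: float = 2.0) -> pd.DataFrame:
    """Keep analytes whose peak area changed markedly between
    pre- and post-treatment samples (i.e., formed or degraded)."""
    merged = pre.merge(post, on="analyte")
    # +1 avoids division by zero for analytes absent in one sample.
    ratio = (merged["area_post"] + 1.0) / (merged["area_pre"] + 1.0)
    changed = merged[(ratio >= min_fold_change) | (ratio <= 1.0 / min_fold_change)]
    return changed.sort_values("analyte")

# Toy peak tables standing in for exported peak lists.
pre = pd.DataFrame({"analyte": ["pyrene", "anthracene"], "area_pre": [1e6, 2e5]})
post = pd.DataFrame({"analyte": ["pyrene", "anthracene"], "area_post": [1e5, 1.9e5]})
print(select_changed_analytes(pre, post))  # pyrene degraded ~10x, anthracene unchanged
```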
Wing download reduction using vortex trapping plates
NASA Technical Reports Server (NTRS)
Light, Jeffrey S.; Stremel, Paul M.; Bilanin, Alan J.
1994-01-01
A download reduction technique using spanwise plates on the upper and lower wing surfaces has been examined. Experimental and analytical techniques were used to determine the download reduction obtained using this technique. Simple two-dimensional wind tunnel testing confirmed the validity of the technique for reducing two-dimensional airfoil drag. Computations using a two-dimensional Navier-Stokes analysis provided insight into the mechanism causing the drag reduction. Finally, the download reduction technique was tested using a rotor and wing to determine the benefits for a semispan configuration representative of a tilt rotor aircraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naegle, John H.; Suppona, Roger A.; Aimone, James Bradley
In 2016, Lewis Rhodes Labs (LRL) shipped the first commercially viable Neuromorphic Processing Unit (NPU), branded as a Neuromorphic Data Microscope (NDM). This product leverages architectural mechanisms derived from the sensory cortex of the human brain to efficiently implement pattern matching. LRL and Sandia National Labs have optimized this product for streaming analytics, and demonstrated a 1,000x power per operation reduction in an FPGA format. When reduced to an ASIC, the efficiency will improve to 1,000,000x. Additionally, the neuromorphic nature of the device gives it powerful computational attributes that are counterintuitive to those schooled in traditional von Neumann architectures. The Neuromorphic Data Microscope is the first of a broad class of brain-inspired, time domain processors that will profoundly alter the functionality and economics of data processing.
RUPTURES IN THE ANALYTIC SETTING AND DISTURBANCES IN THE TRANSFORMATIONAL FIELD OF DREAMS.
Brown, Lawrence J
2015-10-01
This paper explores some implications of Bleger's (1967, 2013) concept of the analytic situation, which he views as comprising the analytic setting and the analytic process. The author discusses Bleger's idea of the analytic setting as the depositary for projected painful aspects in either the analyst or patient or both: affects that are then rendered as nonprocess. In contrast, the contents of the analytic process are subject to an incessant process of transformation (Green 2005). The author goes on to enumerate various components of the analytic setting: the nonhuman, the object relational, and the analyst's "person" (including mental functioning). An extended clinical vignette is offered as an illustration. © 2015 The Psychoanalytic Quarterly, Inc.
SERS-active silver nanoparticle aggregates produced in high-iron float glass by ion exchange process
NASA Astrophysics Data System (ADS)
Karvonen, L.; Chen, Y.; Säynätjoki, A.; Taiviola, K.; Tervonen, A.; Honkanen, S.
2011-11-01
Silver nanoparticles were produced in iron-containing float glasses by silver-sodium ion exchange and post-annealing. In particular, the effect of the concentration and the oxidation state of iron in the host glass on the nanoparticle formation was studied. After the nanoparticle fabrication process, the samples were characterized by optical absorption measurements. The samples were etched to expose nanoparticle aggregates on the surface, which were studied by optical microscopy and scanning electron microscopy. The SERS activity of these glass samples was demonstrated and compared using the dye molecule Rhodamine 6G (R6G) as an analyte. The importance of the iron oxidation level for the reduction process is discussed. The glass with a high concentration of Fe2+ ions was found to be superior for SERS applications of silver nanoparticles. The optimal surface features in terms of SERS enhancement are also discussed.
NASA Astrophysics Data System (ADS)
Walaszek, Damian; Senn, Marianne; Wichser, Adrian; Faller, Markus; Wagner, Barbara; Bulska, Ewa; Ulrich, Andrea
2014-09-01
This work describes an evaluation of a strategy for multi-elemental analysis of typical ancient bronzes (copper, lead bronze and tin bronze) by means of laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS). The samples, originating from archeological experiments on ancient metal smelting processes using direct reduction in a 'bloomery' furnace as well as historical casting techniques, were investigated with the previously proposed analytical procedure, including metallurgical observation and preliminary visual estimation of the homogeneity of the samples. The results of LA-ICPMS analysis were compared to the bulk composition obtained by X-ray fluorescence spectrometry (XRF) and by inductively coupled plasma mass spectrometry (ICPMS) after acid digestion. These results were coherent for most of the elements, confirming the usefulness of the proposed analytical procedure; however, the reliability of the quantitative information about the content of the most heterogeneously distributed elements is also discussed in more detail.
Application of wavelet packet transform to compressing Raman spectra data
NASA Astrophysics Data System (ADS)
Chen, Chen; Peng, Fei; Cheng, Qinghua; Xu, Dahai
2008-12-01
The wavelet transform has been established, alongside the Fourier transform, as a data-processing method in analytical fields. Its main applications are de-noising, compression, variable reduction, and signal suppression. Raman spectroscopy (RS) is characterized by frequency shifts that carry molecular information. Every substance has its own characteristic Raman spectrum, which can be used to analyze the structure, components, concentrations and other properties of samples. RS is thus a powerful analytical tool for detection and identification, and many RS databases exist. However, Raman spectral data require large storage space and long search times. In this paper, the wavelet packet transform is chosen to compress Raman spectra of some benzene-series compounds. The obtained results show that the energy retained is as high as 99.9% after compression, while the percentage of zero coefficients is 87.50%. It is concluded that the wavelet packet transform is effective for compressing RS data.
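A minimal sketch of wavelet packet compression with the PyWavelets package; the synthetic spectrum, wavelet choice, and threshold are illustrative assumptions rather than the paper's settings:

```python
import numpy as np
import pywt

# Synthetic stand-in for a Raman spectrum (two peaks on a flat background).
x = np.linspace(0, 1, 1024)
spectrum = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.02) ** 2)

# Decompose into a wavelet packet tree and hard-threshold small coefficients.
wp = pywt.WaveletPacket(data=spectrum, wavelet="db4", mode="symmetric", maxlevel=5)
coeffs = np.concatenate([node.data for node in wp.get_level(5, order="natural")])
threshold = 0.01 * np.abs(coeffs).max()
kept = np.abs(coeffs) > threshold

print("zero fraction:", 1 - kept.mean())
print("energy retained:", (coeffs[kept] ** 2).sum() / (coeffs ** 2).sum())
```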
Text Mining in Organizational Research.
Kobayashi, Vladimer B; Mol, Stefan T; Berkers, Hannah A; Kismihók, Gábor; Den Hartog, Deanne N
2018-07-01
Despite the ubiquity of textual data, so far few researchers have applied text mining to answer organizational research questions. Text mining, which essentially entails a quantitative approach to the analysis of (usually) voluminous textual data, helps accelerate knowledge discovery by radically increasing the amount of data that can be analyzed. This article aims to acquaint organizational researchers with the fundamental logic underpinning text mining, the analytical stages involved, and contemporary techniques that may be used to achieve different types of objectives. The specific analytical techniques reviewed are (a) dimensionality reduction, (b) distance and similarity computing, (c) clustering, (d) topic modeling, and (e) classification. We describe how text mining may extend contemporary organizational research by allowing the testing of existing or new research questions with data that are likely to be rich, contextualized, and ecologically valid. After an exploration of how evidence for the validity of text mining output may be generated, we conclude the article by illustrating the text mining process in a job analysis setting using a dataset composed of job vacancies.
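To make the reviewed stages concrete, here is a compact Python sketch chaining several of them (text vectorization, dimensionality reduction, similarity computing, and clustering) with scikit-learn; the tiny corpus of job-vacancy snippets is invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Tiny invented corpus of job-vacancy snippets.
docs = [
    "data analyst with SQL and reporting experience",
    "machine learning engineer, Python and statistics",
    "HR officer for recruitment and onboarding",
    "talent acquisition specialist, interviewing and hiring",
]

# Vectorize text, then (a) reduce dimensionality.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# (b) Similarity of the first document to all others.
print(cosine_similarity(X)[0].round(2))

# (c) Cluster documents into two hypothesized job families.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(labels)
```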
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohimer, J.P.
The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.
B-52 control configured vehicles: Flight test results
NASA Technical Reports Server (NTRS)
Arnold, J. I.; Murphy, F. B.
1976-01-01
Recently completed B-52 Control Configured Vehicles (CCV) flight testing is summarized, and results are compared to analytical predictions. Results are presented for five CCV system concepts: ride control, maneuver load control, flutter mode control, augmented stability, and fatigue reduction. Test results confirm analytical predictions and show that CCV system concepts achieve performance goals when operated individually or collectively.
Analytic thinking promotes religious disbelief.
Gervais, Will M; Norenzayan, Ara
2012-04-27
Scientific interest in the cognitive underpinnings of religious belief has grown in recent years. However, to date, little experimental research has focused on the cognitive processes that may promote religious disbelief. The present studies apply a dual-process model of cognitive processing to this problem, testing the hypothesis that analytic processing promotes religious disbelief. Individual differences in the tendency to analytically override initially flawed intuitions in reasoning were associated with increased religious disbelief. Four additional experiments provided evidence of causation, as subtle manipulations known to trigger analytic processing also encouraged religious disbelief. Combined, these studies indicate that analytic processing is one factor (presumably among several) that promotes religious disbelief. Although these findings do not speak directly to conversations about the inherent rationality, value, or truth of religious beliefs, they illuminate one cognitive factor that may influence such discussions.
NASA Astrophysics Data System (ADS)
Nenashev, A. V.; Koshkarev, A. A.; Dvurechenskii, A. V.
2018-03-01
We suggest an approach to the analytical calculation of the strain distribution due to an inclusion in elastically anisotropic media for the case of cubic anisotropy. The idea consists in the approximate reduction of the anisotropic problem to a (simpler) isotropic problem. This gives, for typical semiconductors, an improvement in accuracy by an order of magnitude, compared to the isotropic approximation. Our method allows using, in the case of elastically anisotropic media, analytical solutions obtained for isotropic media only, such as analytical formulas for the strain due to polyhedral inclusions. The present work substantially extends the applicability of analytical results, making them more suitable for describing real systems, such as epitaxial quantum dots.
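For context, cubic elastic anisotropy is commonly quantified by the Zener ratio of the elastic constants (a standard definition, not a formula quoted from the paper); the reduction to an isotropic problem is exact only when $A = 1$:

```latex
A \;=\; \frac{2C_{44}}{C_{11} - C_{12}}
```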
The analyst's participation in the analytic process.
Levine, H B
1994-08-01
The analyst's moment-to-moment participation in the analytic process is inevitably and simultaneously determined by at least three sets of considerations. These are: (1) the application of proper analytic technique; (2) the analyst's personally-motivated responses to the patient and/or the analysis; (3) the analyst's use of him or herself to actualise, via fantasy, feeling or action, some aspect of the patient's conflicts, fantasies or internal object relationships. This formulation has relevance to our view of actualisation and enactment in the analytic process and to our understanding of a series of related issues that are fundamental to our theory of technique. These include the dialectical relationships that exist between insight and action, interpretation and suggestion, empathy and countertransference, and abstinence and gratification. In raising these issues, I do not seek to encourage or endorse wild analysis, the attempt to supply patients with 'corrective emotional experiences' or a rationalisation for acting out one's countertransferences. Rather, it is my hope that if we can better appreciate and describe these important dimensions of the analytic encounter, we can be better prepared to recognise, understand and interpret the continual streams of actualisation and enactment that are embedded in the analytic process. A deeper appreciation of the nature of the analyst's participation in the analytic process and the dimensions of the analytic process to which that participation gives rise may offer us a limited, although important, safeguard against analytic impasse.
Application of Strategic Planning Process with Fleet Level Analysis Methods
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Pfaender, Holger; Jimenez, Hernando; Garcia, Elena; Feron, Eric; Bernardo, Jose
2016-01-01
The goal of this work is to quantify and characterize the potential system-wide reduction of fuel consumption and corresponding CO2 emissions resulting from the introduction of N+2 aircraft technologies and concepts into the fleet. Although NASA goals for this timeframe are referenced against a large twin aisle aircraft, we consider their application across all vehicle classes of the commercial aircraft fleet, from regional jets to very large aircraft. In this work the authors describe and discuss the formulation and implementation of the fleet assessment by addressing its main analytical components: forecasting, operations allocation, fleet retirement, fleet replacement, and environmental performance modeling.
Chemical fractionation-enhanced structural characterization of marine dissolved organic matter
NASA Astrophysics Data System (ADS)
Arakawa, N.; Aluwihare, L.
2016-02-01
Describing the molecular fingerprint of dissolved organic matter (DOM) requires sample processing methods and separation techniques that can adequately minimize its complexity. We have employed acid hydrolysis as a way to make the subcomponents of marine solid phase-extracted (PPL) DOM more accessible to analytical techniques. Using a combination of NMR and chemical derivatization or reduction analyzed by comprehensive two-dimensional gas chromatography (GCxGC), we observed chemical features strikingly similar to terrestrial DOM. In particular, we observed reduced alicyclic hydrocarbons believed to be the backbone of previously identified carboxylic-rich alicyclic material (CRAM). Additionally, we found carbohydrates, amino acids, and small lipids and acids.
Students' science process skill and analytical thinking ability in chemistry learning
NASA Astrophysics Data System (ADS)
Irwanto, Rohaeti, Eli; Widjajanti, Endang; Suyanta
2017-08-01
Science process skill and analytical thinking ability are needed in 21st century chemistry learning. Analytical thinking is related to science process skill, which is used by students to solve complex and unstructured problems. Thus, this research aims to determine the science process skill and analytical thinking ability of senior high school students in chemistry learning. The research was conducted at Tiga Maret Yogyakarta Senior High School, Indonesia, in the middle of the first semester of the 2015/2016 academic year, using the survey method. The survey involved 21 grade XI students as participants. Students were given a set of test questions consisting of 15 essay questions. The result indicated that the science process skill and analytical thinking ability were relatively low, i.e., 30.67%. Therefore, teachers need to improve students' cognitive and psychomotor domains effectively in the learning process.
Analytic integration of real-virtual counterterms in NNLO jet cross sections I
NASA Astrophysics Data System (ADS)
Aglietti, Ugo; Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Trócsányi, Zoltán
2008-09-01
We present analytic evaluations of some integrals needed to give explicitly the integrated real-virtual counterterms, based on a recently proposed subtraction scheme for next-to-next-to-leading order (NNLO) jet cross sections. After an algebraic reduction of the integrals, integration-by-parts identities are used for the reduction to master integrals and for the computation of the master integrals themselves by means of differential equations. The results are written in terms of one- and two-dimensional harmonic polylogarithms, once an extension of the standard basis is made. We expect that the techniques described here will be useful in computing other integrals emerging in calculations in perturbative quantum field theories.
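The integration-by-parts identities mentioned above follow from the vanishing of integrals of total derivatives in dimensional regularization; schematically (generic form, not the paper's specific integrals):

```latex
\int \mathrm{d}^{d}k \;\frac{\partial}{\partial k^{\mu}}\Big( v^{\mu}\, f(k, p_i) \Big) \;=\; 0,
```

where $v^{\mu}$ is any loop or external momentum. Expanding the derivative yields linear relations among the integrals, which reduce them to a small set of master integrals.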
Building America House Simulation Protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendron, Robert; Engebrecht, Cheryn
2010-09-01
The House Simulation Protocol document was developed to track and manage progress toward Building America's multi-year, average whole-building energy reduction research goals for new construction and existing homes, using a consistent analytical reference point. This report summarizes the guidelines for developing and reporting these analytical results in a consistent and meaningful manner for all home energy uses using standard operating conditions.
An Analytical Hierarchy Process Model for the Evaluation of College Experimental Teaching Quality
ERIC Educational Resources Information Center
Yin, Qingli
2013-01-01
Taking into account the characteristics of college experimental teaching, through investigaton and analysis, evaluation indices and an Analytical Hierarchy Process (AHP) model of experimental teaching quality have been established following the analytical hierarchy process method, and the evaluation indices have been given reasonable weights. An…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Scott J.; Edwards, Shatiel B.; Teper, Gerald E.
We report that recent budget reductions have posed tremendous challenges to the U.S. Army in managing its portfolio of ground combat systems (tanks and other fighting vehicles), thus placing many important programs at risk. To address these challenges, the Army and a supporting team developed and applied the Capability Portfolio Analysis Tool (CPAT) to optimally invest in ground combat modernization over the next 25–35 years. CPAT provides the Army with the analytical rigor needed to help senior Army decision makers allocate scarce modernization dollars to protect soldiers and maintain capability overmatch. CPAT delivers unparalleled insight into multiple-decade modernization planning using a novel multiphase mixed-integer linear programming technique and illustrates a cultural shift toward analytics in the Army's acquisition thinking and processes. CPAT analysis helped shape decisions to continue modernization of the $10 billion Stryker family of vehicles (originally slated for cancellation) and to strategically reallocate over $20 billion to existing modernization programs by not pursuing the Ground Combat Vehicle program as originally envisioned. Ultimately, more than 40 studies have been completed using CPAT, applying operations research methods to optimally prioritize billions of taxpayer dollars and allowing Army acquisition executives to base investment decisions on analytically rigorous evaluations of portfolio trade-offs.
Davis, Scott J.; Edwards, Shatiel B.; Teper, Gerald E.; ...
2016-02-01
We report that recent budget reductions have posed tremendous challenges to the U.S. Army in managing its portfolio of ground combat systems (tanks and other fighting vehicles), thus placing many important programs at risk. To address these challenges, the Army and a supporting team developed and applied the Capability Portfolio Analysis Tool (CPAT) to optimally invest in ground combat modernization over the next 25–35 years. CPAT provides the Army with the analytical rigor needed to help senior Army decision makers allocate scarce modernization dollars to protect soldiers and maintain capability overmatch. CPAT delivers unparalleled insight into multiple-decade modernization planning using a novel multiphase mixed-integer linear programming technique and illustrates a cultural shift toward analytics in the Army's acquisition thinking and processes. CPAT analysis helped shape decisions to continue modernization of the $10 billion Stryker family of vehicles (originally slated for cancellation) and to strategically reallocate over $20 billion to existing modernization programs by not pursuing the Ground Combat Vehicle program as originally envisioned. Ultimately, more than 40 studies have been completed using CPAT, applying operations research methods to optimally prioritize billions of taxpayer dollars and allowing Army acquisition executives to base investment decisions on analytically rigorous evaluations of portfolio trade-offs.
Parallelized modelling and solution scheme for hierarchically scaled simulations
NASA Technical Reports Server (NTRS)
Padovan, Joe
1995-01-01
This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications, and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV), or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to finding large reductions in memory, communications, and computational effort associated with a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
2017-08-01
…of metallic additive manufacturing processes and show that combining experimental data with modelling and advanced data processing and analytics methods will accelerate that… geometries, we develop a methodology that couples experimental data and modelling to convert the scan paths into spatially resolved local thermal histories…
Kokis, Judite V; Macpherson, Robyn; Toplak, Maggie E; West, Richard F; Stanovich, Keith E
2002-09-01
Developmental and individual differences in the tendency to favor analytic responses over heuristic responses were examined in children of two different ages (10- and 11-year-olds versus 13-year-olds), and of widely varying cognitive ability. Three tasks were examined that all required analytic processing to override heuristic processing: inductive reasoning, deductive reasoning under conditions of belief bias, and probabilistic reasoning. Significant increases in analytic responding with development were observed on the first two tasks. Cognitive ability was associated with analytic responding on all three tasks. Cognitive style measures such as actively open-minded thinking and need for cognition explained variance in analytic responding on the tasks after variance shared with cognitive ability had been controlled. The implications for dual-process theories of cognition and cognitive development are discussed.
De Neys, Wim
2006-06-01
Human reasoning has been shown to overly rely on intuitive, heuristic processing instead of a more demanding analytic inference process. Four experiments tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically. Participants solved conjunction fallacy problems and indicative and deontic selection tasks. Experiment 1 established that making correct analytic inferences demanded more processing time than did making heuristic inferences. Experiment 2 showed that burdening the executive resources with an attention-demanding secondary task decreased correct, analytic responding and boosted the rate of conjunction fallacies and indicative matching card selections. Results were replicated in Experiments 3 and 4 with a different secondary-task procedure. Involvement of executive resources for the deontic selection task was less clear. Findings validate basic processing assumptions of the dual-process framework and complete the correlational research programme of K. E. Stanovich and R. F. West (2000).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragan, Eric D; Goodall, John R
2014-01-01
Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective evaluation methods are needed to evaluate how well provenance tools support analysts' memory and communication of analytic processes. In this paper, we present several methods for the evaluation of process memory, and we discuss the advantages and limitations of each. We discuss methods for determining a baseline process for comparison, and we describe various methods that can be used to elicit process recall, step ordering, and time estimations. Additionally, we discuss methods for conducting quantitative and qualitative analyses of process memory. By organizing possible memory evaluation methods and providing a meta-analysis of the potential benefits and drawbacks of different approaches, this paper can inform study design and encourage objective evaluation of process memory and communication.
Reduction of interior sound fields in flexible cylinders by active vibration control
NASA Technical Reports Server (NTRS)
Jones, J. D.; Fuller, C. R.
1988-01-01
The mechanisms of interior sound reduction through active control of a thin flexible shell's vibrational response are evaluated using an analytical model. The noise source is a single exterior acoustic monopole. The active control model is evaluated for harmonic excitation; the results indicate spatially-averaged noise reductions in excess of 20 dB over the source plane for acoustic resonant conditions inside the cavity.
Technology-assisted psychoanalysis.
Scharff, Jill Savege
2013-06-01
Teleanalysis, that is, remote psychoanalysis by telephone, voice over internet protocol (VoIP), or videoteleconference (VTC), has been thought of as a distortion of the frame that cannot support authentic analytic process. Yet it can augment continuity, permit optimum frequency of analytic sessions for in-depth analytic work, and enable outreach to analysands in areas far from specialized psychoanalytic centers. Theoretical arguments against teleanalysis are presented and countered, and its advantages and disadvantages are discussed. Vignettes of analytic process from teleanalytic sessions are presented, and indications, contraindications, and ethical concerns are addressed. The aim is to provide material from which to judge the authenticity of analytic process supported by technology.
NASA Astrophysics Data System (ADS)
Samborski, Sylwester; Valvo, Paolo S.
2018-01-01
The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.
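For orientation, the mode-I strain energy release rate extracted by VCCT is commonly computed from the crack-tip nodal force and the opening displacement one element behind the front (a textbook relation, not quoted from the paper):

\[
G_{\mathrm{I}} \approx \frac{F_z \,\Delta w}{2\, b\, \Delta a},
\]

where \(F_z\) is the nodal force normal to the crack plane, \(\Delta w\) the relative opening displacement behind the front, \(\Delta a\) the element length along the crack advance, and \(b\) the width attributed to the node. Analogous expressions with the in-plane force components give \(G_{\mathrm{II}}\) and \(G_{\mathrm{III}}\), from which SERR distributions along a delamination front such as those reported here are assembled.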
McElearney, Kyle; Ali, Amr; Gilbert, Alan; Kshirsagar, Rashmi; Zang, Li
2016-01-01
Chemically defined media have been widely used in the biopharmaceutical industry to enhance cell culture productivities and ensure process robustness. These media, which are quite complex, often contain a mixture of many components such as vitamins, amino acids, metals and other chemicals. Some of these components are known to be sensitive to various stress factors including photodegradation. Previous work has shown that small changes in impurity concentrations induced by these potential stresses can have a large impact on the cell culture process, including growth and product quality attributes. Furthermore, it has been shown to be difficult to detect these modifications analytically due to the complexity of the cell culture media and the trace level of the degradant products. Here, we describe work performed to identify the specific chemical(s) in photodegraded medium that affect cell culture performance. First, we developed a model system capable of detecting changes in cell culture performance. Second, we used these data and applied an LC-MS analytical technique to characterize the cell culture media and identify degradant products which affect cell culture performance. Riboflavin limitation and N-formylkynurenine (NFK), a tryptophan oxidation catabolite, were identified as chemicals which result in a reduction in cell culture performance. © 2015 American Institute of Chemical Engineers.
Object-Oriented MDAO Tool with Aeroservoelastic Model Tuning Capability
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley; Lung, Shun-fat
2008-01-01
An object-oriented multi-disciplinary analysis and optimization (MDAO) tool has been developed at the NASA Dryden Flight Research Center to automate the design and analysis process and leverage existing commercial as well as in-house codes to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic and hypersonic aircraft. Once the structural analysis discipline is finalized and integrated completely into the MDAO process, other disciplines such as aerodynamics and flight controls will be integrated as well. Simple and efficient model tuning capabilities based on an optimization problem are successfully integrated with the MDAO tool. Better synchronization of all phases of experimental testing (ground and flight), analytical model updating, high-fidelity simulation for model validation, and integrated design may reduce uncertainties in the aeroservoelastic model and increase flight safety.
Thin Film Electrodes for Rare Event Detectors
NASA Astrophysics Data System (ADS)
Odgers, Kelly; Brown, Ethan; Lewis, Kim; Giordano, Mike; Freedberg, Jennifer
2017-01-01
In detectors for rare physics processes, such as neutrinoless double beta decay and dark matter, high sensitivity requires careful reduction of backgrounds due to radioimpurities in detector components. Ultra pure cylindrical resistors are being created through thin film depositions onto high purity substrates, such as quartz glass or sapphire. By using ultra clean materials and depositing very small quantities in the films, low radioactivity electrodes are produced. A new characterization process for cylindrical film resistors has been developed through analytic construction of an analogue to the Van Der Pauw technique commonly used for determining sheet resistance on a planar sample. This technique has been used to characterize high purity cylindrical resistors ranging from several ohms to several tera-ohms for applications in rare event detectors. The technique and results of cylindrical thin film resistor characterization will be presented.
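As background on the planar technique being generalized: the Van der Pauw sheet resistance R_s is obtained by solving exp(-πR_1/R_s) + exp(-πR_2/R_s) = 1 from two four-point resistance measurements. A minimal numerical sketch of that standard planar relation follows (hypothetical resistance values; the authors' cylindrical analogue is not reproduced here).

import math
from scipy.optimize import brentq

def sheet_resistance(r1, r2):
    # Solve exp(-pi*r1/Rs) + exp(-pi*r2/Rs) = 1 for Rs by bracketed root finding
    f = lambda rs: math.exp(-math.pi * r1 / rs) + math.exp(-math.pi * r2 / rs) - 1.0
    return brentq(f, 0.1 * min(r1, r2), 100.0 * max(r1, r2))

# Hypothetical four-point resistances (ohms) measured on a deposited film;
# for r1 = r2 = R the closed form is Rs = pi*R/ln(2) ~ 4.53*R
print(sheet_resistance(120.0, 135.0))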
Intersubjectivity and the creation of meaning in the analytic process.
Maier, Christian
2014-11-01
By means of a clinical illustration, the author describes how the intersubjective exchanges involved in an analytic process facilitate the representation of affects and memories which have been buried in the unconscious or indeed have never been available to consciousness. As a result of projective identificatory processes in the analytic relationship, in this example the analyst falls into a situation of helplessness which connects with his own traumatic experiences. He then enters a formal regression of the ego and responds with a so-to-speak hallucinatory reaction: an internal image which enables him to keep the analytic process on track and, later on, to construct an early traumatic experience of the analysand. © 2014, The Society of Analytical Psychology.
NASA Technical Reports Server (NTRS)
Stoner, Mary Cecilia; Hehir, Austin R.; Ivanco, Marie L.; Domack, Marcia S.
2016-01-01
This cost-benefit analysis assesses the benefits of the Advanced Near Net Shape Technology (ANNST) manufacturing process for fabricating integrally stiffened cylinders. These preliminary, rough order-of-magnitude results report a 46 to 58 percent reduction in production costs and a 7-percent reduction in weight over the conventional metallic manufacturing technique used in this study for comparison. Production cost savings of 35 to 58 percent were reported over the composite manufacturing technique used in this study for comparison; however, the ANNST concept was heavier. In this study, the predicted return on investment of equipment required for the ANNST method was ten cryogenic tank barrels when compared with conventional metallic manufacturing. The ANNST method was compared with the conventional multi-piece metallic construction and composite processes for fabricating integrally stiffened cylinders. A case study compared these three alternatives for manufacturing a cylinder of specified geometry, with particular focus placed on production costs and process complexity, with cost analyses performed by the analogy and parametric methods. Furthermore, a scalability study was conducted for three tank diameters to assess the highest potential payoff of the ANNST process for manufacture of large-diameter cryogenic tanks. The analytical hierarchy process (AHP) was subsequently used with a group of selected subject matter experts to assess the value of the various benefits achieved by the ANNST method for potential stakeholders. The AHP study results revealed that decreased final cylinder mass and quality assurance were the most valued benefits of cylinder manufacturing methods, therefore emphasizing the relevance of the benefits achieved with the ANNST process for future projects.
Lismont, Jasmien; Janssens, Anne-Sophie; Odnoletkova, Irina; Vanden Broucke, Seppe; Caron, Filip; Vanthienen, Jan
2016-10-01
The aim of this study is to guide healthcare instances in applying process analytics on healthcare processes. Process analytics techniques can offer new insights in patient pathways, workflow processes, adherence to medical guidelines and compliance with clinical pathways, but also bring along specific challenges which will be examined and addressed in this paper. The following methodology is proposed: log preparation, log inspection, abstraction and selection, clustering, process mining, and validation. It was applied on a case study in the type 2 diabetes mellitus domain. Several data pre-processing steps are applied and clarify the usefulness of process analytics in a healthcare setting. Healthcare utilization, such as diabetes education, is analyzed and compared with diabetes guidelines. Furthermore, we take a look at the organizational perspective and the central role of the GP. This research addresses four challenges: healthcare processes are often patient and hospital specific which leads to unique traces and unstructured processes; data is not recorded in the right format, with the right level of abstraction and time granularity; an overflow of medical activities may cloud the analysis; and analysts need to deal with data not recorded for this purpose. These challenges complicate the application of process analytics. It is explained how our methodology takes them into account. Process analytics offers new insights into the medical services patients follow, how medical resources relate to each other and whether patients and healthcare processes comply with guidelines and regulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Human is the Loop: New Directions for Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Hossain, Shahriar H.; Ramakrishnan, Naren
2014-01-28
Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a ‘human in the loop’ philosophy for visual analytics to a ‘human is the loop’ viewpoint, where the focus is on recognizing analysts’ work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges.
Analytic and heuristic processes in the detection and resolution of conflict.
Ferreira, Mário B; Mata, André; Donkin, Christopher; Sherman, Steven J; Ihmels, Max
2016-10-01
Previous research with the ratio-bias task found larger response latencies for conflict trials, where the heuristic- and analytic-based responses are assumed to be in opposition (e.g., choosing between 1/10 and 9/100 ratios of success), when compared to no-conflict trials, where both processes converge on the same response (e.g., choosing between 1/10 and 11/100). This pattern is consistent with parallel dual-process models, which assume that there is effective, rather than lax, monitoring of the output of heuristic processing. It is, however, unclear why conflict resolution sometimes fails. Ratio-biased choices may increase because of a decline in analytical reasoning (leaving heuristic-based responses unopposed) or a rise in heuristic processing (making it more difficult for analytic processes to override the heuristic preferences). Using the process-dissociation procedure, we found that instructions to respond logically and response speed affected analytic (controlled) processing (C), leaving heuristic processing (H) unchanged, whereas the intuitive preference for large numerators (as assessed by responses to equal-ratio trials) affected H but not C. These findings create new challenges for the debate between dual-process and single-process accounts, which are discussed.
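To make the process-dissociation step concrete: the standard equations assume that on conflict items only analytic control C yields the correct response, while analytic lapses are filled by the heuristic with probability H, and on no-conflict items both routes converge. A minimal sketch under those standard assumptions (hypothetical proportions, not the study's data):

def process_dissociation(p_correct_congruent, p_biased_conflict):
    # Standard equations:
    #   P(heuristic-consistent | congruent) = C + (1 - C) * H
    #   P(heuristic response   | conflict ) =     (1 - C) * H
    c = p_correct_congruent - p_biased_conflict
    h = p_biased_conflict / (1.0 - c)
    return c, h

# Hypothetical choice proportions from a ratio-bias experiment
c, h = process_dissociation(0.90, 0.30)
print(f"C = {c:.2f}, H = {h:.2f}")   # C = 0.60, H = 0.75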
Hyltoft Petersen, Per; Klee, George G
2014-03-20
Diagnostic decisions based on decision limits according to medical guidelines are different from the majority of clinical decisions due to the strict dichotomization of patients into diseased and non-diseased. Consequently, the influence of analytical performance is more critical than for other diagnostic decisions where much other information is included. The aim of this opinion paper is to investigate consequences of analytical quality and other circumstances for the outcome of "Guideline-Driven Medical Decision Limits". Effects of analytical bias and imprecision should be investigated separately, and analytical quality specifications should be estimated accordingly. Use of sharp decision limits does not consider biological variation, and the effects of this variation are closely connected with the effects of analytical performance. Such relationships are investigated for the guidelines for HbA1c in the diagnosis of diabetes and in risk of coronary heart disease based on serum cholesterol. A second sampling in diagnosis gives a dramatic reduction in the effects of analytical quality, showing minimal influence of imprecision up to 3 to 5% for two independent samplings, whereas the reduction in bias is more moderate, and a 2% increase in concentration doubles the percentage of false-positive diagnoses, both for HbA1c and cholesterol. An alternative approach comes from the current application of guidelines for follow-up laboratory tests according to clinical procedure orders, e.g. the frequency of parathyroid hormone requests as a function of serum calcium concentration. Here, the specifications for bias can be evaluated from the functional increase in requests with increasing serum calcium concentrations. In consequence of the difficulties with biological variation and the practical utilization of the concentration dependence of the frequency of follow-up laboratory tests already in use, a kind of probability function for diagnosis as a function of the key analyte is proposed. Copyright © 2013 Elsevier B.V. All rights reserved.
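The arithmetic behind these claims can be illustrated with a simple Gaussian error model (our sketch, with hypothetical numbers; not the authors' computation): the false-positive fraction at a sharp decision limit follows from the combined within-subject biological and analytical variation, bias shifts the whole distribution toward the limit, and averaging independent samplings divides the variance by the number of samplings.

from math import sqrt
from statistics import NormalDist

def false_positive_rate(true_value, limit, cv_biol, cv_anal, bias_pct=0.0, n_samplings=1):
    # P(measured >= limit) for a subject whose true value lies below the limit
    mean = true_value * (1.0 + bias_pct / 100.0)
    sd = true_value * sqrt((cv_biol ** 2 + cv_anal ** 2) / n_samplings) / 100.0
    return 1.0 - NormalDist(mean, sd).cdf(limit)

# Hypothetical HbA1c-like example: true value 6.2%, decision limit 6.5%
print(false_positive_rate(6.2, 6.5, cv_biol=1.9, cv_anal=3.0))                 # baseline
print(false_positive_rate(6.2, 6.5, cv_biol=1.9, cv_anal=3.0, bias_pct=2.0))   # 2% bias: FP rate more than doubles
print(false_positive_rate(6.2, 6.5, cv_biol=1.9, cv_anal=3.0, n_samplings=2))  # mean of two samplings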
An Analytical Assessment of NASA's N+1 Subsonic Fixed Wing Project Noise Goal
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.; Envia, Edmane; Burley, Casey L.
2009-01-01
The Subsonic Fixed Wing Project of NASA's Fundamental Aeronautics Program has adopted a noise reduction goal for new, subsonic, single-aisle, civil aircraft expected to replace current 737 and A320 airplanes. These so-called 'N+1' aircraft - designated in NASA vernacular as such since they will follow the current, in-service, 'N' airplanes - are hoped to achieve certification noise goal levels of 32 cumulative EPNdB under current Stage 4 noise regulations. A notional, N+1, single-aisle, twinjet transport with ultrahigh bypass ratio turbofan engines is analyzed in this study using NASA software and methods. Several advanced noise-reduction technologies are analytically applied to the propulsion system and airframe. Certification noise levels are predicted and compared with the NASA goal.
Focused analyte spray emission apparatus and process for mass spectrometric analysis
Roach, Patrick J [Kennewick, WA; Laskin, Julia [Richland, WA; Laskin, Alexander [Richland, WA
2012-01-17
An apparatus and process are disclosed that deliver an analyte deposited on a substrate to a mass spectrometer that provides for trace analysis of complex organic analytes. Analytes are probed using a small droplet of solvent that is formed at the junction between two capillaries. A supply capillary maintains the droplet of solvent on the substrate; a collection capillary collects analyte desorbed from the surface and emits analyte ions as a focused spray to the inlet of a mass spectrometer for analysis. The invention enables efficient separation of desorption and ionization events, providing enhanced control over transport and ionization of the analyte.
Awan, Muaaz Gul; Saeed, Fahad
2016-05-15
Modern proteomics studies utilize high-throughput mass spectrometers which can produce data at an astonishing rate. These big mass spectrometry (MS) datasets can easily reach peta-scale level, creating storage and analytic problems for large-scale systems biology studies. Each spectrum consists of thousands of peaks which have to be processed to deduce the peptide. However, only a small percentage of peaks in a spectrum are useful for peptide deduction, as most of the peaks are either noise or not useful for a given spectrum. This redundant processing of non-useful peaks is a bottleneck for streaming high-throughput processing of big MS data. One way to reduce the amount of computation required in a high-throughput environment is to eliminate non-useful peaks. Existing noise removing algorithms are limited in their data-reduction capability and are compute intensive, making them unsuitable for big data and high-throughput environments. In this paper we introduce a novel low-complexity technique based on classification, quantization and sampling of MS peaks, and present a novel data-reductive strategy for analysis of big MS data. Our algorithm, called MS-REDUCE, is capable of eliminating noisy peaks as well as peaks that do not contribute to peptide deduction before any peptide deduction is attempted. Our experiments have shown up to 100× speedup over existing state of the art noise elimination algorithms while maintaining comparably high quality matches. Using our approach we were able to process a million spectra in just under an hour on a moderate server. The developed tool and strategy have been made available to the wider proteomics and parallel computing community; the code can be found at https://github.com/pcdslab/MSREDUCE. Contact: fahad.saeed@wmich.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
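The classify-quantize-sample idea can be pictured with a toy sketch (our simplification; this is not the published MS-REDUCE implementation): peaks are binned into intensity classes and a fixed fraction is sampled from each class, so that most low-information peaks are dropped before peptide deduction is attempted.

import numpy as np

def reduce_spectrum(mz, intensity, n_classes=4, keep_frac=0.2, seed=0):
    # Quantize peaks into intensity-quantile classes, then sample within each class
    rng = np.random.default_rng(seed)
    edges = np.quantile(intensity, np.linspace(0.0, 1.0, n_classes + 1))
    labels = np.clip(np.searchsorted(edges, intensity, side="right") - 1, 0, n_classes - 1)
    keep = []
    for c in range(n_classes):
        idx = np.flatnonzero(labels == c)
        if idx.size:
            keep.extend(rng.choice(idx, size=max(1, int(keep_frac * idx.size)), replace=False))
    keep = np.sort(np.asarray(keep))
    return mz[keep], intensity[keep]

# Hypothetical spectrum with 10,000 peaks
rng = np.random.default_rng(1)
mz = np.sort(rng.uniform(100.0, 2000.0, 10000))
inten = rng.lognormal(2.0, 1.0, 10000)
print(reduce_spectrum(mz, inten)[0].size)   # roughly 20% of peaks retained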
Novel immunoassay formats for integrated microfluidic circuits: diffusion immunoassays (DIA)
NASA Astrophysics Data System (ADS)
Weigl, Bernhard H.; Hatch, Anson; Kamholz, Andrew E.; Yager, Paul
2000-03-01
Novel designs of integrated fluidic microchips allow separations, chemical reactions, and calibration-free analytical measurements to be performed directly in very small quantities of complex samples such as whole blood and contaminated environmental samples. This technology lends itself to applications such as clinical diagnostics, including tumor marker screening, and environmental sensing in remote locations. Lab-on-a-Chip based systems offer many advantages over traditional analytical devices: They consume extremely low volumes of both samples and reagents. Each chip is inexpensive and small. The sampling-to-result time is extremely short. They perform all analytical functions, including sampling, sample pretreatment, separation, dilution, and mixing steps, chemical reactions, and detection in an integrated microfluidic circuit. Lab-on-a-Chip systems enable the design of small, portable, rugged, low-cost, easy to use, yet extremely versatile and capable diagnostic instruments. In addition, fluids flowing in microchannels exhibit unique characteristics ('microfluidics'), which allow the design of analytical devices and assay formats that would not function on a macroscale. Existing Lab-on-a-chip technologies work very well for highly predictable and homogeneous samples common in genetic testing and drug discovery processes. One of the biggest challenges for current Labs-on-a-chip, however, is to perform analysis in the presence of the complexity and heterogeneity of actual samples such as whole blood or contaminated environmental samples. Micronics has developed a variety of Lab-on-a-Chip assays that can overcome those shortcomings. We will now present various types of novel Lab-on-a-Chip-based immunoassays, including the so-called Diffusion Immunoassays (DIA) that are based on the competitive laminar diffusion of analyte molecules and tracer molecules into a region of the chip containing antibodies that target the analyte molecules. Advantages of this technique are a reduction in reagents, higher sensitivity, minimal preparation of complex samples such as blood, real-time calibration, and extremely rapid analysis.
Human Papillomavirus (HPV) Genotyping: Automation and Application in Routine Laboratory Testing
Torres, M; Fraile, L; Echevarria, JM; Hernandez Novoa, B; Ortiz, M
2012-01-01
A large number of assays designed for genotyping human papillomaviruses (HPV) have been developed in recent years. They perform within a wide range of analytical sensitivity and specificity values for the different viral types, and are used for diagnosis, epidemiological studies, evaluation of vaccines, and implementation and monitoring of vaccination programs. Methods for specific genotyping of HPV-16 and HPV-18 are also useful for the prevention of cervical cancer in screening programs. Some commercial tests are, in addition, fully or partially automated. Automation of HPV genotyping presents advantages such as the simplicity of the testing procedure for the operator, the ability to process a large number of samples in a short time, and the reduction of human errors from manual operations, allowing better quality assurance and a reduction of cost. The present review collects information about the current HPV genotyping tests, with special attention to practical aspects influencing their use in clinical laboratories. PMID:23248734
Saratale, R G; Saratale, G D; Chang, J S; Govindwar, S P
2009-09-01
Micrococcus glutamicus NCIM-2168 exhibited complete decolorization and degradation of C.I. Reactive Green 19A (an initial concentration of 50 mg l(-1)) within 42 h at temperature 37 degrees C and pH 8, under static condition. Extent of mineralization was determined with total organic carbon (TOC) and chemical oxygen demand (COD) measurements, showing a satisfactory reduction of TOC (72%) and COD (66%) within 42 h. Enzyme studies showed the involvement of oxidoreductive enzymes in the decolorization/degradation process. Analytical studies of the extracted metabolites confirmed the significant degradation of Reactive Green 19A into various metabolites. The microbial toxicity and phytotoxicity assays revealed that the degradation of Reactive Green 19A produced nontoxic metabolites. In addition, the M. glutamicus strain was applied to decolorize a mixture of ten reactive dyes, showing 63% decolorization (in terms of decrease in ADMI value) within 72 h, along with 48% and 42% reductions in TOC and COD under static condition.
NASA Astrophysics Data System (ADS)
Ashrafizadeh, H.; McDonald, A.; Mertiny, P.
2016-02-01
Deposition of metallic coatings on elastomeric polymers is a challenging task due to the heat sensitivity and soft nature of these materials and the high temperatures in thermal spraying processes. In this study, a flame spraying process was employed to deposit conductive coatings of aluminum-12silicon on polyurethane elastomers. The effects of process parameters, i.e., stand-off distance and air added to the flame spray torch, on the temperature distribution, and the corresponding effects on coating characteristics, including electrical resistivity, were investigated. An analytical model based on a Green's function approach was employed to determine the temperature distribution within the substrate. It was found that the coating porosity and electrical resistance decreased with increasing pressure of the air injected into the flame spray torch during deposition. The latter also allowed for a reduction of the stand-off distance of the flame spray torch. Dynamic mechanical analysis was performed to investigate the effect of the increase in temperature within the substrate on its dynamic mechanical properties. It was found that the spraying process did not significantly change the storage modulus of the polyurethane substrate material.
Laboratory plant study on the melting process of asbestos waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakai, Shinichi; Terazono, Atsushi; Takatsuki, Hiroshi
The melting process was studied as a method of changing asbestos into non-hazardous waste and recovering it as a reusable resource. In an initial effort, the thermal behaviors of asbestos waste in terms of physical and chemical structure were studied. Then, 10 kg/h-scale laboratory plant experiments were carried out. X-ray diffraction analysis of the thermal behaviors of sprayed-on asbestos waste revealed that chrysotile asbestos waste changes in crystal structure at around 800 C and becomes melted slag, mainly composed of magnesium silicate, at around 1,500 C. Laboratory plant experiments on the melting process of sprayed-on asbestos have shown that melted slag can be obtained. X-ray diffraction analysis of the melted slag revealed the crystal structure change, and SEM analysis showed the slag to have a non-fibrous form. Furthermore, TEM analysis proved the very high treatment efficiency of the process, that is, reduction of the asbestos content to 1/10^6 on a weight basis. These analytical results indicate the effectiveness of the melting process for asbestos waste treatment.
A genetic algorithm-based job scheduling model for big data analytics.
Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei
Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes high energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
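As a generic illustration of the approach (a textbook permutation GA for spreading jobs over identical nodes, not the paper's Hadoop-specific model): chromosomes encode a job order, fitness is the makespan under greedy least-loaded assignment, and selection, crossover, and mutation evolve shorter schedules.

import random

def makespan(order, durations, n_nodes):
    # Greedy list scheduling: each job in 'order' goes to the least-loaded node
    loads = [0.0] * n_nodes
    for j in order:
        loads[loads.index(min(loads))] += durations[j]
    return max(loads)

def evolve(durations, n_nodes, pop_size=40, generations=200, seed=0):
    rnd = random.Random(seed)
    n = len(durations)
    population = [rnd.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda o: makespan(o, durations, n_nodes))
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rnd.sample(parents, 2)
            head = a[: rnd.randrange(1, n)]
            child = head + [j for j in b if j not in head]   # order crossover
            if rnd.random() < 0.2:                           # swap mutation
                i, k = rnd.sample(range(n), 2)
                child[i], child[k] = child[k], child[i]
            children.append(child)
        population = parents + children
    return min(population, key=lambda o: makespan(o, durations, n_nodes))

rnd = random.Random(1)
jobs = [rnd.uniform(1.0, 10.0) for _ in range(20)]   # hypothetical job durations
best = evolve(jobs, n_nodes=4)
print(makespan(best, jobs, 4))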
Du, Lihong; White, Robert L
2009-02-01
A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.
Analytic theory of alternate multilayer gratings operating in single-order regime.
Yang, Xiaowei; Kozhevnikov, Igor V; Huang, Qiushi; Wang, Hongchang; Hand, Matthew; Sawhney, Kawal; Wang, Zhanshan
2017-07-10
Using the coupled wave approach (CWA), we introduce an analytical theory for alternate multilayer gratings (AMGs) operating in the single-order regime, in which only one diffraction order is excited. Differing from previous studies analogizing AMGs to crystals, we conclude that a symmetrical structure, i.e., equal thickness of the two multilayer materials, is not the optimal design for an AMG and may result in a significant reduction in diffraction efficiency. The peculiarities of AMGs compared with other multilayer gratings are analyzed, and the influence of the multilayer structure materials on diffraction efficiency is considered. The validity conditions of the analytical theory are also discussed.
Midwives׳ clinical reasoning during second stage labour: Report on an interpretive study.
Jefford, Elaine; Fahy, Kathleen
2015-05-01
Clinical reasoning was once thought to be the exclusive domain of medicine, setting it apart from 'non-scientific' occupations like midwifery. Poor assessment, clinical reasoning and decision-making skills are well known contributors to adverse outcomes in maternity care. Midwifery decision-making models share a common deficit: they are insufficiently detailed to guide reasoning processes for midwives in practice. For these reasons we wanted to explore whether midwives actively engaged in clinical reasoning processes within their clinical practice and, if so, to what extent. The study was conducted using post-structural, feminist methodology to ask: to what extent do midwives engage in clinical reasoning processes when making decisions in second stage labour? Twenty-six practising midwives were interviewed. Feminist interpretive analysis was conducted by two researchers guided by the steps of a model of clinical reasoning process. Six narratives were excluded from analysis because they did not sufficiently address the research question. The midwives' narratives were prepared via data reduction, and a theoretically informed analysis and interpretation was conducted. Using a feminist, interpretive approach we created a model of midwifery clinical reasoning grounded in the literature and consistent with the data. Thirteen of the 20 participant narratives demonstrated analytical clinical reasoning abilities, but only nine completed the process and implemented the decision. Seven midwives used non-analytical decision-making without adequately checking against assessment data. Over half of the participants demonstrated the ability to use clinical reasoning skills; less than half demonstrated clinical reasoning as their way of making decisions. The new model of Midwifery Clinical Reasoning includes 'intuition' as a valued way of knowing. Using intuition, however, should not replace clinical reasoning, through which decision-making can be made transparent and consensually validated. Copyright © 2015 Elsevier Ltd. All rights reserved.
Raman Amplification and Tunable Pulse Delays in Silicon Waveguides
NASA Astrophysics Data System (ADS)
Rukhlenko, Ivan D.; Garanovich, Ivan L.; Premaratne, Malin; Sukhorukov, Andrey A.; Agrawal, Govind P.
2010-10-01
The nonlinear process of stimulated Raman scattering is important for silicon photonics as it enables optical amplification and lasing. However, generally employed numerical approaches provide very little insight into the contribution of different silicon Raman amplifier (SRA) parameters. In this paper, we solve the coupled pump-signal equations analytically and derive an exact formula for the envelope of a signal pulse when picosecond optical pulses are amplified inside a SRA pumped by a continuous-wave laser beam. Our solution is valid for an arbitrary pulse shape and fully accounts for the Raman gain-dispersion effects, including temporal broadening and group-velocity reduction. Our results are useful for optimizing the performance of SRAs and for engineering controllable signal delays.
Electron irradiation induced phase separation in a sodium borosilicate glass
NASA Astrophysics Data System (ADS)
Sun, K.; Wang, L. M.; Ewing, R. C.; Weber, W. J.
2004-06-01
Electron irradiation induced phase separation in a sodium borosilicate glass was studied in situ by analytical electron microscopy. Distinctly separate phases rich in boron and silicon formed at electron doses higher than 4.0 × 10^11 Gy during irradiation. The separated phases remain amorphous even at a much higher dose (2.1 × 10^12 Gy). This indicates that most silicon atoms remain tetrahedrally coordinated in the glass during the entire irradiation period, except for some possible reduction to amorphous silicon. The particulate B-rich phase that formed at high dose was identified as amorphous boron that may contain some oxygen. Both ballistic and ionization processes may contribute to the phase separation.
Puckett, L.J.; Cowdery, T.K.
2002-01-01
A combination of ground water modeling, chemical and dissolved gas analyses, and chlorofluorocarbon age dating of water was used to determine the relation between changes in agricultural practices and NO3- concentrations in ground water of a glacial outwash aquifer in west-central Minnesota. The results revealed a redox zonation throughout the saturated zone, with oxygen reduction occurring near the water table, NO3- reduction immediately below it, and then a large zone of ferric iron reduction, with a small area of sulfate (SO42-) reduction and methanogenesis (CH4) near the end of the transect. Analytical and NETPATH modeling results supported the hypothesis that organic carbon served as the electron donor for the redox reactions. Denitrification rates were quite small, 0.005 to 0.047 mmol NO3- yr-1, and were limited by the small amounts of organic carbon, 0.01 to 1.45%. In spite of the organic carbon limitation, denitrification was virtually complete because residence time is sufficient to allow even slow processes to reach completion. Ground water sample ages showed that maximum residence times were on the order of 50 to 70 yr. Reconstructed NO3- concentrations, estimated from measured NO3- and dissolved N gas, showed that NO3- concentrations have been increasing in the aquifer since the 1940s, and have been above the 714 µmol L-1 maximum contaminant level at most sites since the mid- to late-1960s. This increase in NO3- has been accompanied by a corresponding increase in agricultural use of fertilizer, identified as the major source of NO3- to the aquifer.
Smartphone-based low light detection for bioluminescence application.
Kim, Huisung; Jung, Youngkee; Doh, Iyll-Joon; Lozano-Mahecha, Roxana Andrea; Applegate, Bruce; Bae, Euiwon
2017-01-09
We report a smartphone-based device and associated imaging-processing algorithm to maximize the sensitivity of standard smartphone cameras, that can detect the presence of single-digit pW of radiant flux intensity. The proposed hardware and software, called bioluminescent-based analyte quantitation by smartphone (BAQS), provides an opportunity for onsite analysis and quantitation of luminescent signals from biological and non-biological sensing elements which emit photons in response to an analyte. A simple cradle that houses the smartphone, sample tube, and collection lens supports the measuring platform, while noise reduction by ensemble averaging simultaneously lowers the background and enhances the signal from emitted photons. Five different types of smartphones, both Android and iOS devices, were tested, and the top two candidates were used to evaluate luminescence from the bioluminescent reporter Pseudomonas fluorescens M3A. The best results were achieved by OnePlus One (android), which was able to detect luminescence from ~10^6 CFU/mL of the bio-reporter, which corresponds to ~10^7 photons/s with 180 seconds of integration time.
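The ensemble-averaging step is ordinary frame averaging: for shot-noise-limited frames the signal-to-noise ratio improves roughly as the square root of the number of frames. A minimal sketch with synthetic Poisson frames (our illustration, not the BAQS code):

import numpy as np

def ensemble_average(frames):
    # Mean over the frame axis; independent shot noise shrinks as 1/sqrt(N)
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)

rng = np.random.default_rng(0)
mean_counts = 2.0                                    # hypothetical dim-signal level
frames = rng.poisson(mean_counts, size=(60, 128, 128))
avg = ensemble_average(frames)
print(avg.std())                                     # ~ sqrt(mean_counts / 60)
print(np.sqrt(mean_counts / 60))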
NASA Astrophysics Data System (ADS)
Smargiasso, Nicolas; Quinton, Loic; de Pauw, Edwin
2012-03-01
One of the mechanisms leading to MALDI in-source decay (MALDI ISD) is the transfer of hydrogen radicals to analytes upon laser irradiation. Analytes such as peptides or proteins may undergo ISD and this method can therefore be exploited for top-down sequencing. When performed on peptides, radical-induced ISD results in production of c- and z-ions, as also found in ETD and ECD activation. Here, we describe two new compounds which, when used as MALDI matrices, are able to efficiently induce ISD of peptides and proteins: 2-aminobenzamide and 2-aminobenzoic acid. In-source reduction of the disulfide bridge containing peptide Calcitonin further confirmed the radicalar mechanism of the ISD process. ISD of peptides led, in addition to c- and z-ions, to the generation of a-, x-, and y-ions both in positive and in negative ion modes. Finally, good sequence coverage was obtained for the sequencing of myoglobin (17 kDa protein), confirming the effectiveness of both 2-aminobenzamide and 2-aminobenzoic acid as MALDI ISD matrices.
A multiscale filter for noise reduction of low-dose cone beam projections.
Yao, Weiguang; Farr, Jonathan B
2015-08-21
The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x^2/(2σ_f^2)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression of σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of the residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of the residual noise, the optimal σ_f^2 is proved to be proportional to the noiseless fluence and modulated by local structure strength, expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was higher than that scanned with 16 ms by about 64% on average. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
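The key idea, a Gaussian kernel whose variance grows with the local estimated noiseless fluence and shrinks near structure, can be sketched as follows (our simplified pixelwise implementation; the paper's σ_f is additionally modulated by the local linear-fitting error, which this sketch omits):

import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_denoise(proj, alpha=0.05, n_scales=8, sigma_max=3.0):
    # Pick a per-pixel Gaussian scale with sigma^2 proportional to an estimate
    # of the noiseless fluence (a coarse pre-smoothing stands in for that mean).
    fluence_est = gaussian_filter(proj, sigma=5.0)
    sigma_map = np.clip(np.sqrt(alpha * np.maximum(fluence_est, 0.0)), 0.0, sigma_max)
    scales = np.linspace(0.0, sigma_max, n_scales)
    stack = np.stack([proj] + [gaussian_filter(proj, s) for s in scales[1:]])
    idx = np.clip(np.searchsorted(scales, sigma_map), 0, n_scales - 1)
    return np.take_along_axis(stack, idx[None, ...], axis=0)[0]

# Hypothetical noisy 768 x 1024 projection (Poisson photon counts)
rng = np.random.default_rng(0)
proj = rng.poisson(200.0, size=(768, 1024)).astype(float)
print(proj.std(), multiscale_denoise(proj).std())   # noise is reduced after filtering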
NASA Astrophysics Data System (ADS)
García Plaza, E.; Núñez López, P. J.
2018-01-01
On-line monitoring of surface finish in machining processes has proven to be a substantial advancement over traditional post-process quality control techniques by reducing inspection times and costs and by avoiding the manufacture of defective products. This study applied techniques for processing cutting force signals based on the wavelet packet transform (WPT) method for the monitoring of surface finish in computer numerical control (CNC) turning operations. The behaviour of 40 mother wavelets was analysed using three techniques: global packet analysis (G-WPT), and the application of two packet reduction criteria: maximum energy (E-WPT) and maximum entropy (SE-WPT). The optimum signal decomposition level (Lj) was determined to eliminate noise and to obtain information correlated to surface finish. The results obtained with the G-WPT method provided an in-depth analysis of cutting force signals, and frequency ranges and signal characteristics were correlated to surface finish with excellent results in the accuracy and reliability of the predictive models. The radial and tangential cutting force components at low frequency provided most of the information for the monitoring of surface finish. The E-WPT and SE-WPT packet reduction criteria substantially reduced signal processing time, but at the expense of discarding packets with relevant information, which impoverished the results. The G-WPT method was observed to be an ideal procedure for processing cutting force signals applied to the real-time monitoring of surface finish, and was estimated to be highly accurate and reliable at a low analytical-computational cost.
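To illustrate the kind of packet features involved, here is a generic sketch using the PyWavelets package (the wavelet, level, and signal are placeholders, not the authors' choices): the relative energy of each terminal packet summarizes the force signal per frequency band, and such energies (or entropies) feed the surface finish models.

import numpy as np
import pywt

def wpt_energy_features(signal, wavelet="db4", level=3):
    # Relative energy of each terminal wavelet-packet node, ordered by frequency band
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.asarray(node.data) ** 2) for node in nodes])
    return energies / energies.sum()

# Hypothetical cutting-force signal: low-frequency component plus noise
t = np.linspace(0.0, 1.0, 4096)
force = 50.0 * np.sin(2 * np.pi * 12 * t) + np.random.default_rng(0).normal(0.0, 2.0, t.size)
print(wpt_energy_features(force))   # most energy lands in the lowest band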
Use of failure mode effect analysis (FMEA) to improve medication management process.
Jain, Khushboo
2017-03-13
Purpose Medication management is a complex process, at high risk of error with life threatening consequences. The focus should be on devising strategies to avoid errors and make the process self-reliable by ensuring prevention of errors and/or error detection at subsequent stages. The purpose of this paper is to use failure mode effect analysis (FMEA), a systematic proactive tool, to identify the likelihood of and the causes for the process to fail at various steps, and to prioritise them to devise risk reduction strategies to improve patient safety. Design/methodology/approach The study was designed as an observational analytical study of the medication management process in the inpatient area of a multi-speciality hospital in Gurgaon, Haryana, India. A team was formed to study the complex process of medication management in the hospital, and the FMEA tool was used. Corrective actions were developed based on the prioritised failure modes, which were implemented and monitored. Findings The percentage distribution of medication errors as per the observations made by the team was found to be a maximum of transcription errors (37 per cent) followed by administration errors (29 per cent), indicating the need to identify the causes and effects of their occurrence. In all, 11 failure modes were identified, of which the major five were prioritised based on the risk priority number (RPN). The process was repeated after corrective actions were taken, which resulted in about 40 per cent (average) and around 60 per cent reductions in the RPN of the prioritised failure modes. Research limitations/implications FMEA is a time consuming process and requires a multidisciplinary team which has a good understanding of the process being analysed. FMEA only helps in identifying the possibilities of a process to fail; it does not eliminate them, and additional efforts are required to develop action plans and implement them. Frank discussion and agreement among the team members are required not only for successfully conducting FMEA but also for implementing the corrective actions. Practical implications FMEA is an effective proactive risk-assessment tool and a continuous process which can be continued in phases. The corrective actions taken resulted in a reduction in RPN, subject to further evaluation and use by others depending on the facility type. Originality/value The application of the tool helped the hospital in identifying failures in the medication management process, thereby prioritising and correcting them, leading to improvement.
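The prioritisation arithmetic is simply RPN = severity × occurrence × detectability on 1-10 scales, ranked in descending order; a sketch with hypothetical scores (the paper's actual ratings are not reproduced):

failure_modes = [
    # (failure mode, severity, occurrence, detectability) -- hypothetical 1-10 scores
    ("transcription error",  8, 7, 6),
    ("administration error", 9, 6, 5),
    ("dispensing error",     7, 4, 4),
    ("prescription error",   8, 3, 5),
]

def rpn(mode):
    _, s, o, d = mode
    return s * o * d

for mode in sorted(failure_modes, key=rpn, reverse=True):
    print(f"{mode[0]:22s} RPN = {rpn(mode)}")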
NASA Astrophysics Data System (ADS)
Kröll, L.; de Haart, L. G. J.; Vinke, I.; Eichel, R.-A.
2017-04-01
The microstructural evolution of a porous electrode consisting of a metal-ceramic matrix of nickel and yttria-stabilized zirconia (YSZ) is one of the main degradation mechanisms in a solid-oxide cell (SOC), in either fuel cell or electrolyzer mode. In that respect, the agglomeration of nickel particles in a SOC electrode leads to a decrease in the electronic conductivity as well as in the active catalytic area for the oxidation-reduction reaction of the fuel and water steam. An analytical model of the agglomeration behavior of a Ni/YSZ electrode is proposed that allows for a quantitative description of the nickel agglomeration. The accuracy of the model is validated by comparison with experimental degradation measurements. The model is based on contact probabilities of nickel clusters in a porous network of nickel and YSZ, derived from an algorithm of the agglomeration process. The iterative algorithm is converted into an analytical function, which involves structural parameters of the electrode, such as the porosity and the nickel content. Furthermore, to describe the agglomeration mechanism, the influence of the steam content and the flux rate are taken into account via reactions on the nickel surface. In the next step, the developed agglomeration model is combined with the mechanism of Ostwald ripening. The calculated grain-size growth is compared to measurements at different temperatures and under low flux rates and low steam content, as well as under high flux rates and high steam content. The results confirm the necessity of connecting the two mechanisms and clarify the circumstances in which the single processes occur and how they contribute to the total agglomeration of the particles in the electrode.
ERIC Educational Resources Information Center
Temel, Senar
2016-01-01
This study aims to analyse prospective chemistry teachers' cognitive structures related to the subject of oxidation and reduction through a flow map method. Purposeful sampling method was employed in this study, and 8 prospective chemistry teachers from a group of students who had taken general chemistry and analytical chemistry courses were…
Peter J. Daugherty; Jeremy S. Fried
2007-01-01
Landscape-scale fuel treatments for forest fire hazard reduction potentially produce large quantities of material suitable for biomass energy production. The analytic framework FIA BioSum addresses this situation by developing detailed data on forest conditions and production under alternative fuel treatment prescriptions, and computes haul costs to alternative sites...
A quality by design study applied to an industrial pharmaceutical fluid bed granulation.
Lourenço, Vera; Lochmann, Dirk; Reich, Gabriele; Menezes, José C; Herdling, Thorsten; Schewitz, Jens
2012-06-01
The pharmaceutical industry is encouraged within Quality by Design (QbD) to apply science-based manufacturing principles to assure quality not only of new but also of existing processes. This paper presents how QbD principles can be applied to an existing industrial pharmaceutical fluid bed granulation (FBG) process. A three-step approach is presented as follows: (1) implementation of Process Analytical Technology (PAT) monitoring tools at the industrial scale process, combined with multivariate data analysis (MVDA) of process and PAT data to increase the process knowledge; (2) execution of scaled-down designed experiments at a pilot scale, with adequate PAT monitoring tools, to investigate the process response to intended changes in Critical Process Parameters (CPPs); and finally (3) the definition of a process Design Space (DS) linking CPPs to Critical to Quality Attributes (CQAs), within which product quality is ensured by design, and after scale-up enabling its use at the industrial process scale. The proposed approach was developed for an existing industrial process. Through enhanced process knowledge, a significant reduction in the variability of product CQAs, already within quality specification ranges, was achieved by a better choice of CPP values. The results of such step-wise development and implementation are described. Copyright © 2012 Elsevier B.V. All rights reserved.
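A minimal sketch of one common MVDA step, principal component analysis on a batch-process data matrix; the matrix shape and the use of PCA here are assumptions for illustration, not the paper's actual dataset or model.

```python
import numpy as np

def pca(X, n_components=2):
    """Return scores, loadings, and explained variance of mean-centred data."""
    Xc = X - X.mean(axis=0)                      # centre each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    loadings = Vt[:n_components].T
    explained = (s ** 2) / np.sum(s ** 2)
    return scores, loadings, explained[:n_components]

X = np.random.rand(30, 8)   # 30 batches x 8 process/PAT variables (placeholder)
scores, loadings, evr = pca(X)
print("variance explained by PC1, PC2:", np.round(evr, 3))
```

Batches whose scores fall far from the main cluster flag abnormal process behaviour, which is the usual role of MVDA in PAT monitoring.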
Distributed Aerodynamic Sensing and Processing Toolbox
NASA Technical Reports Server (NTRS)
Brenner, Martin; Jutte, Christine; Mangalam, Arun
2011-01-01
A Distributed Aerodynamic Sensing and Processing (DASP) toolbox was designed and fabricated for flight test applications with an Aerostructures Test Wing (ATW) mounted under the fuselage of an F-15B on the Flight Test Fixture (FTF). DASP monitors and processes the aerodynamics with the structural dynamics using nonintrusive, surface-mounted, hot-film sensing. This aerodynamic measurement tool benefits programs devoted to static/dynamic load alleviation, body freedom flutter suppression, buffet control, improvement of aerodynamic efficiency through cruise control, supersonic wave drag reduction through shock control, etc. This DASP toolbox measures local and global unsteady aerodynamic load distribution with distributed sensing. It determines correlation between aerodynamic observables (aero forces) and structural dynamics, and allows control authority increase through aeroelastic shaping and active flow control. It offers improvements in flutter suppression and, in particular, body freedom flutter suppression, as well as aerodynamic performance of wings for increased range/endurance of manned/unmanned flight vehicles. Other improvements include inlet performance with closed-loop active flow control, and development and validation of advanced analytical and computational tools for unsteady aerodynamics.
NASA Astrophysics Data System (ADS)
Kowalik, Marek; Trzepiecinski, Tomasz
2018-05-01
This paper presents the characteristics of the process of longitudinal rolling of shafts and the geometry of the working section of forming rollers with a secant profile. In addition, the analytical formulae defining the geometry of a roller profile were determined. The experiments were carried out on shafts made of S235JR and C45 structural steels and the MSC.Marc + Mentat program was used for the numerical analysis of the rolling process based on the finite element method. The paper analyses the effect of roller geometry on the changes in value of the widening coefficient and the diameter reduction coefficient for the first forming passage. It was found that the mechanical properties of the shaft material have a slight influence on the widening coefficient. The value of the widening coefficient of the shaft increases with increase in the initial diameter of the shaft. Increasing shaft diameter causes an increase of strain gradient on the cross-section of the shaft.
Permissibility of Multifetal Pregnancy Reduction from The Shiite Point of View
Zabihi Bidgoli, Atefeh; Ardbili, Faezeh Azimzadeh
2017-01-01
Background Advancements in medical technology have significantly increased the possibility of successful infertility treatment. Medical interventions in the initial process of pregnancy that intend to increase the chances of pregnancy create the risk of multifetal pregnancies for both mothers and fetuses. Physicians attempt to reduce the number of fetuses in order to decrease this risk and guarantee the continuation of pregnancy. The aim of this paper is to understand the Shiite instruction regarding the risks multifetal pregnancies pose for fetuses and whether it is permissible to reduce the number of fetuses. An affirmative answer will lead to the development of Islamic criteria for reduction of the number of embryos. Materials and Methods This analytical-descriptive research gathered relevant data through a literature search. We reviewed a number of Islamic resources that pertained to the fetus; after a description of the fundamentals and definitions, we subsequently analyzed juridical texts. The order of reduction was inevitably determined by taking into consideration the rules that govern the abortion provisions or general juridical rules. We also investigated UK law as a comparison to the Shiite perspective. Results The primary ordinance states that termination of an embryo is not permitted and is considered taboo. However, fetal reductions that occur in emergency situations where there is no option or ordinary indication are permitted before the time of ensoulment. The fetus targeted for reduction can be chosen in different ways. Conclusion According to Shiite sources, fetal reduction is permitted. Defective fetuses are the criteria for selective reduction. If none are defective, the criteria are possibility and facility. But if the possibility of selection is equal for more than one fetus, the criterion is importance (for example, one fetus is healthier). PMID:28042419
DNA biosensing with 3D printing technology.
Loo, Adeline Huiling; Chua, Chun Kiang; Pumera, Martin
2017-01-16
3D printing, an upcoming technology, has vast potential to transform conventional fabrication processes due to the numerous improvements it can offer to the current methods. To date, the employment of 3D printing technology has been examined for applications in the fields of engineering, manufacturing and biological sciences. In this study, we examined the potential of adopting 3D printing technology for a novel application, electrochemical DNA biosensing. Metal 3D printing was utilized to construct helical-shaped stainless steel electrodes which functioned as a transducing platform for the detection of DNA hybridization. The ability of electroactive methylene blue to intercalate into the double helix structure of double-stranded DNA was then exploited to monitor the DNA hybridization process, with its inherent reduction peak serving as an analytical signal. The designed biosensing approach was found to demonstrate superior selectivity against a non-complementary DNA target, with a detection range of 1-1000 nM.
NASA Technical Reports Server (NTRS)
Johnson, W. S.; Bigelow, C. A.; Bahei-El-din, Y. A.
1983-01-01
Experimental results for five laminate orientations of boron/aluminum composites containing either circular holes or crack-like slits are presented. Specimen stress-strain behavior, stress at first fiber failure, and ultimate strength were determined. Radiographs were used to monitor the fracture process. The specimens were analyzed with a three-dimensional elastic-plastic finite-element model. The first fiber failures in notched multidirectional laminate specimens occurred at or very near the specimen ultimate strength. For notched unidirectional specimens, the first fiber failure occurred at approximately one-half of the specimen ultimate strength. Acoustic emission events correlated with fiber breaks in unidirectional composites, but did not for other laminates. Circular holes and crack-like slits of the same characteristic length were found to produce approximately the same strength reduction. The predicted stress-strain responses and stress at first fiber failure compared very well with test data for laminates containing 0 deg fibers.
International law, public health, and the meanings of pharmaceuticalization
Cloatre, Emilie; Pickersgill, Martyn
2014-01-01
Recent social science scholarship has employed the term “pharmaceuticalization” in analyses of the production, circulation and use of drugs. In this paper, we seek to open up further discussion of the scope, limits and potential of this as an analytical device through consideration of the role of law and legal processes in directing pharmaceutical flows. To do so, we synthesize a range of empirical and conceptual work concerned with the relationships between access to medicines and intellectual property law. This paper suggests that alongside documenting the expansion or reduction in demand for particular drugs, analysts of pharmaceuticalization attend to the ways in which socio-legal developments change (or not) the identities of drugs, and the means through which they circulate and come to be used by states and citizens. Such scholarship has the potential to more precisely locate the biopolitical processes that shape international agendas and targets, form markets, and produce health. PMID:25431535
NASA Technical Reports Server (NTRS)
Lin, Yi; Bunker, Christopher E.; Fernandos, K. A. Shiral; Connell, John W.
2012-01-01
The impurity-free aqueous dispersions of boron nitride nanosheets (BNNS) allowed the facile preparation of silver (Ag) nanoparticle-decorated BNNS by chemical reduction of an Ag salt with hydrazine in the presence of BNNS. The resultant Ag-BNNS nanohybrids remained dispersed in water, allowing convenient subsequent solution processing. By using substrate transfer techniques, Ag-BNNS nanohybrid thin film coatings on quartz substrates were prepared and evaluated as reusable surface enhanced Raman spectroscopy (SERS) sensors that were robust against repeated solvent washing. In addition, because of the unique thermal oxidation-resistant properties of the BNNS, the sensor devices may be readily recycled by short-duration high temperature air oxidation to remove residual analyte molecules in repeated runs. The limiting factor associated with the thermal oxidation recycling process was the Ostwald ripening effect of Ag nanostructures.
Evaluation of two methods to determine glyphosate and AMPA in soils of Argentina
NASA Astrophysics Data System (ADS)
De Geronimo, Eduardo; Lorenzon, Claudio; Iwasita, Barbara; Faggioli, Valeria; Aparicio, Virginia; Costa, Jose Luis
2017-04-01
Argentine agricultural production is fundamentally based on a technological package combining no-tillage and the dependence on glyphosate applications to control weeds in transgenic crops (soybean, maize and cotton). Glyphosate is therefore the most widely used herbicide in the country, where 180 to 200 million liters are applied every year. Due to its widespread use, it is important to assess its impact on the environment and, therefore, reliable analytical methods are mandatory. The glyphosate molecule exhibits unique physical and chemical characteristics which make its quantification difficult, especially in soils with high organic matter content, such as the central eastern Argentine soils, where strong interferences are normally observed. The objective of this work was to compare two methods for extraction and quantification of glyphosate and AMPA in samples of 8 representative soils of Argentina. The first analytical method (method 1) was based on the use of phosphate buffer as extracting solution and dichloromethane to minimize matrix organic content. In the second method (method 2), potassium hydroxide was used to extract the analytes, followed by a clean-up step using solid phase extraction (SPE) to minimize strong interferences. Sensitivity, recoveries, matrix effects and robustness were evaluated. Both methodologies involved derivatization with 9-fluorenyl-methyl-chloroformate (FMOC) in borate buffer and detection based on ultra-high-pressure liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS). Recoveries from soil samples spiked at 0.1 and 1 mg kg⁻¹ were satisfactory for both methods (70%-120%). However, there was a remarkable difference regarding the matrix effect: the SPE clean-up step (method 2) was insufficient to remove the interferences, whereas the dilution and the clean-up with dichloromethane (method 1) were more effective in minimizing ionic suppression. Moreover, method 1 had fewer steps in the sample-processing protocol than method 2. This can be highly valuable in routine lab work due to the reduction of potential undesired errors such as loss of analyte or sample contamination. In addition, the substitution of SPE by another alternative involved a considerable reduction of analytical costs in method 1. We conclude that method 1 is simpler and cheaper than method 2, as well as reliable for quantifying glyphosate in Argentinean soils. We hope that this experience can be useful to simplify the protocols of glyphosate quantification and contribute to the understanding of the fate of this herbicide in the environment.
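A minimal sketch of the spike-recovery and matrix-effect checks used to compare extraction methods like these; all numbers are illustrative placeholders, not the study's data.

```python
def recovery_percent(measured, native, spiked):
    """Recovery (%) of a spiked analyte, correcting for native content."""
    return 100.0 * (measured - native) / spiked

def matrix_effect_percent(slope_matrix, slope_solvent):
    """Signal suppression (<0) or enhancement (>0) relative to pure solvent."""
    return 100.0 * (slope_matrix / slope_solvent - 1.0)

print(recovery_percent(measured=1.05, native=0.02, spiked=1.0))    # ~103%
print(matrix_effect_percent(slope_matrix=0.62, slope_solvent=1.0)) # ~-38% (suppression)
```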
Kora, Aruna Jyothi; Rastogi, Lori
2016-10-01
A facile and green method for the reduction of selenite was developed using a Gram-negative bacterial strain, Pseudomonas aeruginosa, under aerobic conditions. During the process of bacterial conversion, elemental selenium nanoparticles were produced. These nanoparticles were systematically characterized using various analytical techniques, including UV-visible spectroscopy, XRD, Raman spectroscopy, SEM, DLS, TEM and FTIR spectroscopy. The generation of selenium nanoparticles was confirmed by the appearance of red colour in the culture broth and broad absorption peaks in the UV-vis spectra. The synthesized nanoparticles were spherical and polydisperse, ranged from 47 to 165 nm, and the average particle size was about 95.9 nm. The selected-area electron diffraction and XRD patterns, together with Raman spectroscopy, established the amorphous nature of the fabricated nanoparticles. The IR data demonstrated the bacterial protein mediated selenite reduction and capping of the produced nanoparticles. The selenium removal was assessed at different selenite concentrations using ICP-OES and the results showed that the tested bacterial strain exhibited significant selenite reduction activity. The results demonstrate the possible application of P. aeruginosa for bioremediation of waters polluted with toxic and soluble selenite. Moreover, the potential metal reduction capability of the bacterial strain can serve as a green method for aerobic generation of selenium nanospheres. Copyright © 2016 Elsevier Ltd. All rights reserved.
Xie, Huamu; Ben-Zvi, Ilan; Rao, Triveni; ...
2016-10-19
High-average-current, high-brightness electron sources have important applications, such as in high-repetition-rate free-electron lasers, or in the electron cooling of hadrons. Bialkali photocathodes are promising high-quantum-efficiency (QE) cathode materials, while superconducting rf (SRF) electron guns offer continuous-mode operation at high acceleration, as is needed for high-brightness electron sources. Thus, we must have a comprehensive understanding of the performance of bialkali photocathodes at cryogenic temperatures when they are to be used in SRF guns. To remove the heat produced by the radio-frequency field in these guns, the cathode should be cooled to cryogenic temperatures. We recorded an 80% reduction of the QE upon cooling the K2CsSb cathode from room temperature down to the temperature of liquid nitrogen in Brookhaven National Laboratory (BNL)'s 704 MHz SRF gun. We conducted several experiments to identify the underlying mechanism in this reduction. The change in the spectral response of the bialkali photocathode, when cooled from room temperature (300 K) to 166 K, suggests that a change in the ionization energy (defined as the energy gap from the top of the valence band to the vacuum level) is the main reason for this reduction. We developed an analytical model of the process, based on Spicer's three-step model. The change in ionization energy, with falling temperature, gives a simplified description of the QE's temperature dependence. We also developed a 2D Monte Carlo code to simulate photoemission that accounts for the wavelength-dependent photon absorption in the first step, the scattering and diffusion in the second step, and the momentum conservation in the emission step. From this simulation, we established a correlation between ionization energy and reduction in the QE. The simulation yielded results comparable to those from the analytical model. The simulation offers us additional capabilities such as calculation of the intrinsic emittance, the temporal response, and the thickness dependence of the QE for the K2CsSb photocathode.
Rocchitta, Gaia; Spanu, Angela; Babudieri, Sergio; Latte, Gavinella; Madeddu, Giordano; Galleri, Grazia; Nuvoli, Susanna; Bagella, Paola; Demartis, Maria Ilaria; Fiore, Vito; Manetti, Roberto; Serra, Pier Andrea
2016-01-01
Enzyme-based chemical biosensors are based on biological recognition. In order to operate, the enzymes must be available to catalyze a specific biochemical reaction and be stable under the normal operating conditions of the biosensor. Design of biosensors is based on knowledge about the target analyte, as well as the complexity of the matrix in which the analyte has to be quantified. This article reviews the problems resulting from the interaction of enzyme-based amperometric biosensors with complex biological matrices containing the target analyte(s). One of the most challenging disadvantages of amperometric enzyme-based biosensor detection is signal reduction from fouling agents and interference from chemicals present in the sample matrix. This article, therefore, investigates the principles of functioning of enzymatic biosensors, their analytical performance over time and the strategies used to optimize their performance. Moreover, the composition of biological fluids as a function of their interaction with biosensing will be presented. PMID:27249001
The areal reduction factor: A new analytical expression for the Lazio Region in central Italy
NASA Astrophysics Data System (ADS)
Mineo, C.; Ridolfi, E.; Napolitano, F.; Russo, F.
2018-05-01
For the study and modeling of hydrological phenomena, both in urban and rural areas, a proper estimation of the areal reduction factor (ARF) is crucial. In this paper, we estimated the ARF from observed rainfall data as the ratio between the average rainfall occurring in a specific area and the point rainfall. Then, we compared the obtained ARF values with some of the most widespread empirical approaches in the literature, which are used when rainfall observations are not available. Results highlight that the literature formulations can lead to a substantial over- or underestimation of the ARF estimated from observed data. These findings can have severe consequences, especially in the design of hydraulic structures, where empirical formulations are extensively applied. The aim of this paper is to present a new analytical relationship, with an explicit dependence on rainfall duration and area, that can better represent the ARF-area trend over the case-study area. The analytical curve presented here can find important application in estimating ARF values for design purposes. The test study area is the Lazio Region (central Italy).
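A minimal sketch of the ratio the paper estimates, under one simple fixed-area convention (areal-average versus controlling point depth per year); real ARF studies use more careful depth-duration statistics, and the gauge data below are hypothetical.

```python
import numpy as np

def areal_reduction_factor(gauge_maxima):
    """ARF from annual-maximum depths at several gauges (rows=years, cols=gauges)."""
    areal = gauge_maxima.mean(axis=1)   # areal-average depth per year
    point = gauge_maxima.max(axis=1)    # controlling point depth per year
    return areal.mean() / point.mean()

rain = np.array([[42.0, 37.5, 40.1],
                 [55.2, 49.8, 60.3],
                 [33.4, 30.9, 35.6]])   # 3 years x 3 gauges (mm), placeholder
print(round(areal_reduction_factor(rain), 3))  # ARF < 1 by construction
```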
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y. B.; Zhu, X. W., E-mail: xiaowuzhu1026@znufe.edu.cn; Dai, H. H.
Though widely used in modelling nano- and micro-structures, Eringen's differential model shows some inconsistencies, and recent studies have demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further show the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
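For reference, a hedged sketch of the two-phase local/nonlocal constitutive relation this line of work builds on, in the form commonly written in the nonlocal-elasticity literature; the symbols are assumptions here, not taken verbatim from the paper.

```latex
% Two-phase local/nonlocal stress-strain relation (general literature form):
% \xi_1 + \xi_2 = 1 are the phase fractions, \kappa the nonlocal length scale,
% E Young's modulus, and the exponential kernel the usual 1D Eringen kernel.
\begin{equation}
  \sigma(x) \;=\; \xi_1\, E\, \varepsilon(x)
  \;+\; \xi_2\, E \int_{0}^{L} \frac{1}{2\kappa}\,
        e^{-|x-x'|/\kappa}\, \varepsilon(x')\, \mathrm{d}x' .
\end{equation}
```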
Engine isolation for structural-borne interior noise reduction in a general aviation aircraft
NASA Technical Reports Server (NTRS)
Unruh, J. F.; Scheidt, D. C.
1981-01-01
Engine vibration isolation for structural-borne interior noise reduction is investigated. A laboratory based test procedure to simulate engine induced structure-borne noise transmission, the testing of a range of candidate isolators for relative performance data, and the development of an analytical model of the transmission phenomena for isolator design evaluation are addressed. The isolator relative performance test data show that the elastomeric isolators do not appear to operate as single degree of freedom systems with respect to noise isolation. Noise isolation beyond 150 Hz levels off and begins to decrease somewhat above 600 Hz. Coupled analytical and empirical models were used to study the structure-borne noise transmission phenomena. Correlation of predicted results with measured data show that (1) the modeling procedures are reasonably accurate for isolator design evaluation, (2) the frequency dependent properties of the isolators must be included in the model if reasonably accurate noise prediction beyond 150 Hz is desired. The experimental and analytical studies were carried out in the frequency range from 10 Hz to 1000 Hz.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Jordana R.; Gill, Gary A.; Kuo, Li-Jung
2016-04-20
Trace element determinations in seawater by inductively coupled plasma mass spectrometry are analytically challenging due to the typically very low concentrations of the trace elements and the potential interference of the salt matrix. In this study, we did a comparison for uranium analysis using inductively coupled plasma mass spectrometry (ICP-MS) of Sequim Bay seawater samples and three seawater certified reference materials (SLEW-3, CASS-5 and NASS-6) using seven different analytical approaches. The methods evaluated include: direct analysis, Fe/Pd reductive precipitation, standard addition calibration, online automated dilution using an external calibration with and without matrix matching, and online automated pre-concentration. The method which produced the most accurate results was the method of standard addition calibration, recovering uranium from a Sequim Bay seawater sample at 101 ± 1.2%. The on-line preconcentration method and the automated dilution with matrix-matched calibration method also performed well. The two least effective methods were the direct analysis and the Fe/Pd reductive precipitation using sodium borohydride.
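A minimal sketch of standard-addition calibration, the best-performing approach above: fit signal versus added concentration and take the magnitude of the x-intercept as the sample concentration; the numbers are illustrative, not the study's data.

```python
import numpy as np

added = np.array([0.0, 0.5, 1.0, 2.0])        # spike concentrations (ug/L)
signal = np.array([1.20, 1.85, 2.49, 3.80])   # ICP-MS response (a.u.)

slope, intercept = np.polyfit(added, signal, 1)
c_unknown = intercept / slope                 # sample concentration (ug/L)
print(f"U in sample ~ {c_unknown:.2f} ug/L")
```

Because the calibration is built in the sample itself, matrix suppression affects slope and intercept equally and cancels out, which is why this method handles the seawater salt matrix well.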
NASA Technical Reports Server (NTRS)
Hollis, Brian R.
1995-01-01
A FORTRAN computer code for the reduction and analysis of experimental heat transfer data has been developed. This code can be utilized to determine heat transfer rates from surface temperature measurements made using either thin-film resistance gages or coaxial surface thermocouples. Both an analytical and a numerical finite-volume heat transfer model are implemented in this code. The analytical solution is based on a one-dimensional, semi-infinite wall thickness model with the approximation of constant substrate thermal properties, which is empirically corrected for the effects of variable thermal properties. The finite-volume solution is based on a one-dimensional, implicit discretization. The finite-volume model directly incorporates the effects of variable substrate thermal properties and does not require the semi-infinite wall thickness approximation used in the analytical model. This model also includes the option of a multiple-layer substrate. Fast, accurate results can be obtained using either method. This code has been used to reduce several sets of aerodynamic heating data, of which samples are included in this report.
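A minimal sketch of the classical semi-infinite, constant-property data reduction that analytical options like this are typically based on, here using the well-known Cook-Felderman discretization; the temperature trace and substrate property value are placeholders, and this is not a transcription of the report's FORTRAN code.

```python
import numpy as np

def heat_flux(t, T, rho_c_k):
    """Surface heat flux q(t) from a surface-temperature history T(t).

    rho_c_k is the substrate property product rho*c*k (SI units).
    """
    q = np.zeros_like(T)
    for n in range(1, len(t)):
        s = 0.0
        for i in range(1, n + 1):   # sum over all earlier temperature steps
            s += (T[i] - T[i - 1]) / (
                np.sqrt(t[n] - t[i]) + np.sqrt(t[n] - t[i - 1]))
        q[n] = 2.0 * np.sqrt(rho_c_k / np.pi) * s
    return q

t = np.linspace(0.0, 1.0, 101)
T = 300.0 + 20.0 * np.sqrt(t)               # placeholder thin-film trace (K)
print(heat_flux(t, T, rho_c_k=2.2e6)[-1])   # W/m^2; quartz-like rho*c*k assumed
```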
Wong, Quincy J J; Moulds, Michelle L
2012-12-01
Evidence from the depression literature suggests that an analytical processing mode adopted during repetitive thinking leads to maladaptive outcomes relative to an experiential processing mode. To date, in socially anxious individuals, the impact of processing mode during repetitive thinking related to an actual social-evaluative situation has not been investigated. We thus tested whether an analytical processing mode would be maladaptive relative to an experiential processing mode during anticipatory processing and post-event rumination. High and low socially anxious participants were induced to engage in either an analytical or experiential processing mode during: (a) anticipatory processing before performing a speech (Experiment 1; N = 94), or (b) post-event rumination after performing a speech (Experiment 2; N = 74). Mood, cognition, and behavioural measures were employed to examine the effects of processing mode. For high socially anxious participants, the modes had a similar effect on self-reported anxiety during both anticipatory processing and post-event rumination. Unexpectedly, relative to the analytical mode, the experiential mode led to stronger high standard and conditional beliefs during anticipatory processing, and stronger unconditional beliefs during post-event rumination. These experiments are the first to investigate processing mode during anticipatory processing and post-event rumination. Hence, these results are novel and will need to be replicated. These findings suggest that an experiential processing mode is maladaptive relative to an analytical processing mode during repetitive thinking characteristic of socially anxious individuals. Copyright © 2012 Elsevier Ltd. All rights reserved.
An analytical study of thermal barrier coated first stage blades in a JT9D engine
NASA Technical Reports Server (NTRS)
Sevcik, W. R.; Stoner, B. L.
1978-01-01
Steady state and transient heat transfer and structural calculations were completed to determine the coating and base alloy temperatures and strains. Results indicate potential for increased turbine life using thin durable thermal barrier coatings on turbine airfoils due to a significant reduction in blade average and maximum temperatures, and alloy strain range. An interpretation of the analytical results is compared to the experimental engine test data.
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as Pubmed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
Jaskolla, Thorsten W; Karas, Michael
2011-06-01
This work experimentally verifies and proves the two long since postulated matrix-assisted laser desorption/ionization (MALDI) analyte protonation pathways known as the Lucky Survivor and the gas phase protonation model. Experimental differentiation between the predicted mechanisms becomes possible by the use of deuterated matrix esters as MALDI matrices, which are stable under typical sample preparation conditions and generate deuteronated reagent ions, including the deuterated and deuteronated free matrix acid, only upon laser irradiation in the MALDI process. While the generation of deuteronated analyte ions proves the gas phase protonation model, the detection of protonated analytes by application of deuterated matrix compounds without acidic hydrogens proves the survival of analytes precharged from solution in accordance with the predictions from the Lucky Survivor model. The observed ratio of the two analyte ionization processes depends on the applied experimental parameters as well as the nature of analyte and matrix. Increasing laser fluences and lower matrix proton affinities favor gas phase protonation, whereas more quantitative analyte protonation in solution and intramolecular ion stabilization leads to more Lucky Survivors. The presented results allow for a deeper understanding of the fundamental processes causing analyte ionization in MALDI and may alleviate future efforts for increasing the analyte ion yield.
NASA Astrophysics Data System (ADS)
Nelson, Sheldon
2013-04-01
Nitrate Remediation of Soil and Groundwater Using Phytoremediation: Transfer of Nitrogen Containing Compounds from the Subsurface to Surface Vegetation. The basic concept of using a plant-based remedial approach (phytoremediation) for nitrogen containing compounds is the incorporation and transformation of the inorganic nitrogen from the soil and/or groundwater (nitrate, ammonium) into plant biomass, thereby removing the constituent from the subsurface. There is a general preference in many plants for the ammonium nitrogen form during the early growth stage, with the uptake and accumulation of nitrate often increasing as the plant matures. The synthesis process refers to the variety of biochemical mechanisms that use ammonium or nitrate compounds to primarily form plant proteins, and to a lesser extent other nitrogen containing organic compounds. The shallow soil at the former warehouse facility test site is impacted primarily by elevated concentrations of nitrate, with a minimal presence of ammonium. Dissolved nitrate (NO3-) is the primary dissolved nitrogen compound in on-site groundwater, historically reaching concentrations of 1000 mg/L. The initial phases of the project consisted of the installation of approximately 1750 trees, planted on 10-foot centers in the areas impacted by nitrate and ammonia in the shallow soil and groundwater. As of the most recent groundwater analytical data, dissolved nitrate reductions of 40% to 96% have been observed in monitor wells located both within, and immediately downgradient of, the planted area. In summary, an evaluation of time series groundwater analytical data from the initial planted groves suggests that the trees are an effective means of transferring nitrogen compounds from the subsurface to overlying vegetation. The mechanism of concentration reduction may be the uptake of residual nitrate from the vadose zone, the direct uptake of dissolved constituent from the upper portion of the saturated zone/capillary fringe, or a combination of these two processes.
Communication — Modeling polymer-electrolyte fuel-cell agglomerates with double-trap kinetics
Pant, Lalit M.; Weber, Adam Z.
2017-04-14
A new semi-analytical agglomerate model is presented for polymer-electrolyte fuel-cell cathodes. The model uses double-trap kinetics for the oxygen-reduction reaction, which can capture the observed potential-dependent coverage and Tafel-slope changes. An iterative semi-analytical approach is used to obtain reaction rate constants from the double-trap kinetics, oxygen concentration at the agglomerate surface, and overall agglomerate reaction rate. The analytical method can predict reaction rates within 2% of the numerically simulated values for a wide range of oxygen concentrations, overpotentials, and agglomerate sizes, while saving simulation time compared to a fully numerical approach.
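A minimal sketch of the iterative semi-analytical idea described above, under stated assumptions: a generic first-order rate law stands in for the double-trap kinetics, and a film mass-transfer balance closes the loop on the agglomerate-surface oxygen concentration. Coefficients and the rate law are illustrative placeholders, not the paper's expressions.

```python
def solve_agglomerate(c_bulk, k_rxn, k_film, tol=1e-10, max_iter=200):
    """Fixed-point iteration: surface concentration vs. reaction rate."""
    c_s = c_bulk                        # initial guess: no external film loss
    for _ in range(max_iter):
        rate = k_rxn * c_s              # placeholder first-order kinetics
        c_new = c_bulk - rate / k_film  # film balance: flux in = reaction rate
        if abs(c_new - c_s) < tol:
            break
        c_s = 0.5 * (c_s + c_new)       # damped update for robustness
    return c_s, k_rxn * c_s

c_s, rate = solve_agglomerate(c_bulk=1.0, k_rxn=5.0, k_film=20.0)
print(c_s, rate)  # analytic check: c_s = c_bulk / (1 + k_rxn/k_film) = 0.8
```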
Dissipative nonlinear waves in a gravitating quantum fluid
NASA Astrophysics Data System (ADS)
Sahu, Biswajit; Sinha, Anjana; Roychoudhury, Rajkumar
2018-02-01
Nonlinear wave propagation is studied in a dissipative, self-gravitating Bose-Einstein condensate, starting from the Gross-Pitaevskii equation. In the absence of an exact analytical result, approximate methods like the linear analysis and perturbative approach are applied. The linear dispersion relation puts a restriction on the permissible range of the dissipation parameter. The waves get damped due to dissipation. The small amplitude analysis using reductive perturbation technique is found to yield a modified form of KdV equation, which is solved both analytically as well as numerically. Interestingly, the analytical and numerical plots match excellently with each other, in the realm of weak dissipation.
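As a worked sketch of what a "modified form of KdV equation" with dissipation typically looks like in such reductive-perturbation analyses; the coefficients A, B, C depend on the system parameters and are assumed here, not taken from the paper itself.

```latex
% Damped KdV equation in stretched coordinates (\xi, \tau): A sets the
% nonlinearity, B the dispersion, and the C u term the linear damping that
% causes the wave amplitude to decay.
\begin{equation}
  \frac{\partial u}{\partial \tau}
  + A\, u\, \frac{\partial u}{\partial \xi}
  + B\, \frac{\partial^{3} u}{\partial \xi^{3}}
  + C\, u = 0 .
\end{equation}
```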
NASA Astrophysics Data System (ADS)
Su, Yung-Chao; Wu, Shin-Tza
2017-09-01
We study theoretically the teleportation of a controlled-phase (cz) gate through measurement-based quantum-information processing for continuous-variable systems. We examine the degree of entanglement in the output modes of the teleported cz-gate for two classes of resource states: the canonical cluster states that are constructed via direct implementations of two-mode squeezing operations and the linear-optical version of cluster states which are built from linear-optical networks of beam splitters and phase shifters. In order to reduce the excess noise arising from finite-squeezed resource states, teleportation through resource states with different multirail designs will be considered and the enhancement of entanglement in the teleported cz gates will be analyzed. For multirail cluster with an arbitrary number of rails, we obtain analytical expressions for the entanglement in the output modes and analyze in detail the results for both classes of resource states. At the same time, we also show that for uniformly squeezed clusters the multirail noise reduction can be optimized when the excess noise is allocated uniformly to the rails. To facilitate the analysis, we develop a trick with manipulations of quadrature operators that can reveal rather efficiently the measurement sequence and corrective operations needed for the measurement-based gate teleportation, which will also be explained in detail.
Dvorski, Sabine E-M; Gonsior, Michael; Hertkorn, Norbert; Uhl, Jenny; Müller, Hubert; Griebler, Christian; Schmitt-Kopplin, Philippe
2016-06-07
At numerous groundwater sites worldwide, natural dissolved organic matter (DOM) is quantitatively complemented with petroleum hydrocarbons. To date, research has been focused almost exclusively on the contaminants, but detailed insights of the interaction of contaminant biodegradation, dominant redox processes, and interactions with natural DOM are missing. This study linked on-site high resolution spatial sampling of groundwater with high resolution molecular characterization of DOM and its relation to groundwater geochemistry across a petroleum hydrocarbon plume cross-section. Electrospray- and atmospheric pressure photoionization (ESI, APPI) ultrahigh resolution mass spectrometry (FT-ICR-MS) revealed a strong interaction between DOM and reactive sulfur species linked to microbial sulfate reduction, i.e., the key redox process involved in contaminant biodegradation. Excitation emission matrix (EEM) fluorescence spectroscopy in combination with Parallel Factor Analysis (PARAFAC) modeling attributed DOM samples to specific contamination traits. Nuclear magnetic resonance (NMR) spectroscopy evaluated the aromatic compounds and their degradation products in samples influenced by the petroleum contamination and its biodegradation. Our orthogonal high resolution analytical approach enabled a comprehensive molecular level understanding of the DOM with respect to in situ petroleum hydrocarbon biodegradation and microbial sulfate reduction. The role of natural DOM as potential cosubstrate and detoxification reactant may improve future bioremediation strategies.
Pre-concentration technique for reduction in "Analytical instrument requirement and analysis"
NASA Astrophysics Data System (ADS)
Pal, Sangita; Singha, Mousumi; Meena, Sher Singh
2018-04-01
Availability of analytical instruments for the methodical detection of known and unknown effluents imposes a serious hindrance on qualification and quantification. Several analytical instruments, such as elemental analyzers, ICP-MS, ICP-AES, EDXRF, ion chromatography and electro-analytical instruments, are not only expensive and time consuming but also require maintenance and the replacement of damaged essential parts, which is a serious concern. Moreover, for field studies and instant detection, installation of these instruments is not convenient at every location. Therefore, a technique based on the pre-concentration of metal ions, especially for lean streams, is elaborated and justified. Chelation/sequestration is the key to the immobilization technique, which is simple, user friendly, highly effective, inexpensive and time efficient; its ease of transport (a 10 g - 20 g vial) to the experimental field/site has been demonstrated.
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications gained a wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
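A minimal sketch of the "standard techniques" for the underlying DTMC mentioned above: the transient distribution by powering the transition matrix and the stationary distribution as the left eigenvector for eigenvalue 1. The 3-state chain is a toy example, not one generated from a DDSPN.

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.2, 0.8],
              [0.6, 0.0, 0.4]])   # row-stochastic transition matrix

pi0 = np.array([1.0, 0.0, 0.0])                 # initial distribution
pi_n = pi0 @ np.linalg.matrix_power(P, 20)      # transient solution at step 20

w, v = np.linalg.eig(P.T)                       # stationary: pi = pi P
pi_stat = np.real(v[:, np.argmax(np.real(w))])  # eigenvector for eigenvalue 1
pi_stat /= pi_stat.sum()                        # normalize to a distribution
print(pi_n, pi_stat)
```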
Buckley, Thomas N; Adams, Mark A
2011-01-01
Leaf respiration continues in the light but at a reduced rate. This inhibition is highly variable, and the mechanisms are poorly known, partly due to the lack of a formal model that can generate testable hypotheses. We derived an analytical model for non-photorespiratory CO₂ release by solving steady-state supply/demand equations for ATP, NADH and NADPH, coupled to a widely used photosynthesis model. We used this model to evaluate causes for suppression of respiration by light. The model agrees with many observations, including highly variable suppression at saturating light, greater suppression in mature leaves, reduced assimilatory quotient (ratio of net CO₂ and O₂ exchange) concurrent with nitrate reduction and a Kok effect (discrete change in quantum yield at low light). The model predicts engagement of non-phosphorylating pathways at moderate to high light, or concurrent with processes that yield ATP and NADH, such as fatty acid or terpenoid synthesis. Suppression of respiration is governed largely by photosynthetic adenylate balance, although photorespiratory NADH may contribute at sub-saturating light. Key questions include the precise diel variation of anabolism and the ATP : 2e⁻ ratio for photophosphorylation. Our model can focus experimental research and is a step towards a fully process-based model of CO₂ exchange. © 2010 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Djatmika, Rosalina; Ding, Wang-Hsien; Sulistyarti, Hermin
2018-01-01
A rapid determination of four parabens preservatives (methyl paraben, ethyl paraben, propyl paraben, and butyl paraben) in marketed seafood is presented. Analytes were extracted and purified using matrix solid-phase dispersion (MSPD) method, followed by Injection port acylation gas chromatography-mass spectrometry (GC-MS) with acetic anhydride reagent. In this method, acylation of parabens was performed by acetic anhydride at GC injection-port generating reduction of the time-consuming sample-processing steps, and the amount of toxic reagents and solvents. The parameters affecting this method such as injection port temperature, purge-off time and acylation (acetic anhydride) volume were studied. In addition, the MSPD influence factors (including the amount of dispersant and clean-up co-sorbent, as well as the volume of elution solvent) were also investigated. After MSPD method and Injection port acylation applied, good linearity of analytes was achieved. The limits of quantitation (LOQs) were 0.2 to 1.0 ng/g (dry weight). Compared with offline derivatization commonly performed, injection port acylation employs a rapid, simple, low-cost and environmental-friendly derivatization process. The optimized method has been successfully applied for the analysis of parabens in four kind of marketed seafood. Preliminary results showed that the total concentrations of four selected parabens ranged from 16.7 to 44.7 ng/g (dry weight).
Emergency Water Planning for Natural and Man-Made Emergencies: An Analytical Bibliography.
1987-04-01
provide for fast recovery from disaster damages. The most frequently considered elements of such plans include (1) establishing emergency...which utilities may restrict the supply of water available to customers by manipulating the physical system: (1) pressure reduction, (2) intermittent ...shutoff, and (3) complete shutoff. Both pressure reduction and intermittent shutoff pose the risk of damages to the" system resulting from negative
Analytical Method for Determining Tetrazene in Water.
1987-12-01
decanesulfonic acid sodium salt. The mobile phase pH was adjusted to 3 with glacial acetic acid. The modified mobile phase was optimal for separating of...modified with sodium tartrate, gave a well-defined reduction wave at the dropping mercury electrode. The height of the reduction wave was proportional to...antimony trisulphide, nitrocellulose, PETN, powdered aluminum and calcium silicide. The primer samples were sequentially extracted, first with
Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity
Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.
2010-01-01
An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183
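A schematic of the factorized sensitivity model the abstract describes, hedged: the exact symbols are assumed here, with β* the process-dependent efficiency factor (a dopant-profile average), γ the geometry factor, π_max the peak piezoresistive coefficient, and σ_max the surface bending stress of an end-loaded rectangular cantilever.

```latex
% Factorized piezoresistor sensitivity (schematic, symbols assumed): the
% efficiency factor separates process parameters from design parameters.
\begin{equation}
  \frac{\Delta R}{R} \;\approx\; \beta^{*}\,\gamma\,\pi_{\max}\,\sigma_{\max},
  \qquad
  \sigma_{\max} \;=\; \frac{6\,F\,L}{w\,t^{2}} .
\end{equation}
```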
An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application
ERIC Educational Resources Information Center
Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth
2016-01-01
Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…
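A minimal sketch of the AHP weighting step named above: priority weights from the principal eigenvector of a pairwise-comparison matrix, plus the consistency ratio; the 3-criterion matrix is a hypothetical example, not the paper's inspection criteria.

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12}   # random-consistency index (n >= 3)

def ahp_weights(A):
    """Priority weights and consistency ratio of pairwise matrix A."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                 # principal eigenvalue lambda_max
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                             # normalized priority weights
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)        # consistency index
    return w, ci / RI[n]

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])              # reciprocal pairwise judgements
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))          # CR < 0.1 => judgements acceptable
```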
ERIC Educational Resources Information Center
Follette, William C.; Bonow, Jordan T.
2009-01-01
Whether explicitly acknowledged or not, behavior-analytic principles are at the heart of most, if not all, empirically supported therapies. However, the change process in psychotherapy is only now being rigorously studied. Functional analytic psychotherapy (FAP; Kohlenberg & Tsai, 1991; Tsai et al., 2009) explicitly identifies behavioral-change…
Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J
2001-08-01
The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that contrary to prediction strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. Data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy and challenge simple models of brain asymmetries for such procedures. Copyright 2001 Academic Press.
Reuse of process water in a waste-to-energy plant: An Italian case of study.
Gardoni, Davide; Catenacci, Arianna; Antonelli, Manuela
2015-09-01
The minimisation of water consumption in waste-to-energy (WtE) plants is an outstanding issue, especially in those regions where water supply is critical and withdrawals come from municipal waterworks. Among the various possible solutions, the most general, simple and effective one is the reuse of process water. This paper discusses the effectiveness of two different reuse options in an Italian WtE plant, starting from the analytical characterisation and the flow-rate measurement of fresh water and process water flows derived from each utility internal to the WtE plant (e.g. cooling, bottom ash quenching, flue gas wet scrubbing). This census allowed the identification of the possible direct connections that optimise the reuse scheme, avoiding additional water treatments. The effluent of the physical-chemical wastewater treatment plant (WWTP) located in the WtE plant was considered not adequate for direct reuse because of the possible deposition of mineral salts and the clogging potential associated with residual suspended solids. Nevertheless, to obtain a high reduction in water consumption, reverse osmosis should be installed to remove non-metallic ions (Cl⁻, SO₄²⁻) and residual organic and inorganic pollutants. Two efficient solutions were identified. The first, a simple reuse scheme based on a cascade configuration, allowed a 45% reduction in water consumption (from 1.81 to 0.99 m³ tMSW⁻¹; MSW: Municipal Solid Waste) without specific water treatments. The second solution, a cascade configuration with a recycle based on a reverse osmosis process, allowed a 74% reduction in water consumption (from 1.81 to 0.46 m³ tMSW⁻¹). The results of the present work show that it is possible to reduce the water consumption, and in turn the wastewater production, reducing at the same time the operating cost of the WtE plant. Copyright © 2015 Elsevier Ltd. All rights reserved.
Denitrification in Agricultural Soils: Integrated control and Modelling at various scales (DASIM)
NASA Astrophysics Data System (ADS)
Müller, Christoph; Well, Reinhard; Böttcher, Jürgen; Butterbach-Bahl, Klaus; Dannenmann, Michael; Deppe, Marianna; Dittert, Klaus; Dörsch, Peter; Horn, Marcus; Ippisch, Olaf; Mikutta, Robert; Senbayram, Mehmet; Vogel, Hans-Jörg; Wrage-Mönnig, Nicole; Müller, Carsten
2016-04-01
The new research unit DASIM brings together the expertise of 11 working groups to study the process of denitrification at unprecedented spatial and temporal resolution. Based on state-of-the-art analytical techniques, our aim is to develop improved denitrification models ranging from the microscale to the field/plot scale. Denitrification, the process of nitrate reduction allowing microbes to sustain respiration under anaerobic conditions, is the key process returning reactive nitrogen as N₂ to the atmosphere. Actively denitrifying communities in soil show distinct regulatory phenotypes (DRPs) with characteristic controls on the single reaction steps and end-products. It is unresolved whether DRPs are anchored in the taxonomic composition of denitrifier communities and how environmental conditions shape them. Despite being intensively studied for more than 100 years, denitrification rates and emissions of its gaseous products still cannot be satisfactorily predicted. While the impact of single environmental parameters is well understood, the complexity of the process itself, with its intricate cellular regulation in response to highly variable factors in the soil matrix, prevents robust prediction of gaseous emissions. Key parameters in soil are pO₂, organic matter content and quality, pH and the microbial community structure, which in turn are affected by the soil structure, chemistry and soil-plant interactions. In the DASIM research unit, we aim at the quantitative prediction of denitrification rates as a function of microscale soil structure, organic matter quality, DRPs and atmospheric boundary conditions via a combination of state-of-the-art experimental and analytical tools (X-ray μCT, ¹⁵N tracing, NanoSIMS, microsensors, advanced flux detection, NMR spectroscopy, and molecular methods including next-generation sequencing of functional gene transcripts). We actively seek collaboration with researchers working in the field of denitrification.
Furuki, Kenichiro; Toyo'oka, Toshimasa; Yamaguchi, Hideto
2017-08-15
Mecasermin is used to treat elevated blood sugar as well as growth-hormone-resistant Laron-type dwarfism. Mecasermin isolated from inclusion bodies in extracts of E. coli must be refolded to acquire sufficient activity. However, there is no rapid analytical method for monitoring refolding during the purification process. We prepared mecasermin drug product, in-process samples taken during the oxidation of mecasermin, forced-reduced mecasermin, and aerially oxidized mecasermin after forced reduction. Desalted mecasermin samples were analyzed using MALDI-ISD. The peak intensity ratio of product to precursor ion was determined. The charge-state distribution (CSD) of mecasermin ions was evaluated using ESI-MS coupled with SEC-mode HPLC. The drift time and collision cross section (CCS) of mecasermin ions were evaluated using ESI-IMS-MS coupled with SEC-mode HPLC. MALDI-ISD data, CSD values determined using ESI-MS, and the CCS acquired using ESI-IMS-MS revealed the relationship of the folded and unfolded proteoforms of forced-reduced and aerially oxidized mecasermin to the free-SH:protein ratio of the mecasermin drug product. The CCS, determined using ESI-IMS-MS, provided proteoform information through rapid monitoring (<2 min) of in-process samples during the manufacture of mecasermin. ESI-IMS-MS coupled with SEC-mode HPLC is a rapid and robust method for analyzing the free-SH:protein ratio of mecasermin that allows proteoform changes to be evaluated and monitored during the oxidation of mecasermin. ESI-IMS-MS is applicable as a process analytical technology tool for identifying "critical quality attributes" and implementing "quality by design" in the manufacture of mecasermin. Copyright © 2017 John Wiley & Sons, Ltd.
Żyszka, Beata; Anioł, Mirosław; Lipok, Jacek
2017-08-04
Chalcones are the biogenetic precursors of all known flavonoids, which play an essential role in various metabolic processes in photosynthesizing organisms. The use of whole cyanobacteria cells in a two-step, light-catalysed regioselective bio-reduction of chalcone, leading to the formation of the corresponding dihydrochalcone, is reported. The prokaryotic microalgae cyanobacteria are known to produce phenolic compounds, including flavonoids, as natural components of cells. It seems logical that organisms producing such compounds possess a suitable "enzymatic apparatus" to carry out their biotransformation. Therefore, determination of the ability of whole cells of selected cyanobacteria to carry out biocatalytic transformations of chalcone, the biogenetic precursor of all known flavonoids, was the aim of our study. Chalcone was found to be converted to dihydrochalcone by all examined cyanobacterial strains; however, the effectiveness of this process depends on the strain, with biotransformation yields ranging from 3% to >99%. The most effective biocatalysts are Anabaena laxa, Aphanizomenon klebahnii, Nodularia moravica, Synechocystis aquatilis (>99% yield) and Merismopedia glauca (92% yield). The strains Anabaena sp. and Chroococcus minutus transformed chalcone in more than one way, forming a few products; however, dihydrochalcone was the dominant product. The course of biotransformation shed light on the pathway of chalcone conversion, indicating that the process proceeds through the intermediate cis-chalcone. The scaled-up process, conducted on a preparative scale and in a mini-pilot photobioreactor, fully confirmed the high effectiveness of this bioconversion. Moreover, in the case of the mini-pilot photobioreactor batch cultures, optimization of the culturing conditions allowed the process conducted by A. klebahnii to be shortened by 50% (from 8 to 4 days) while maintaining its >99% yield. This is the first report on the use of whole cells of halophilic and freshwater cyanobacteria strains in a two-step, light-catalysed regioselective bio-reduction of chalcone, leading to the formation of the corresponding dihydrochalcone. The total bioconversion of chalcone at the analytical, preparative and mini-pilot scales creates the possibility of its use in the food industry for the production of natural sweeteners.
Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks
Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire
2009-01-01
This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of real-time applications to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
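The abstract does not spell out the two stream-based sampling algorithms, so the sketch below is only a hedged illustration of the general idea of data-centric stream reduction: classic reservoir sampling caps the number of readings a node forwards while keeping a uniform, representative subset. All names and data are invented for the example.

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of size k from a stream of unknown length."""
    reservoir = []
    for i, reading in enumerate(stream):
        if i < k:
            reservoir.append(reading)
        else:
            j = random.randint(0, i)  # old items are replaced with decreasing probability
            if j < k:
                reservoir[j] = reading
    return reservoir

# Example: reduce 10,000 simulated sensor readings to an "ideal" sample size of 100.
readings = (25.0 + random.gauss(0, 1) for _ in range(10_000))
sample = reservoir_sample(readings, 100)
print(len(sample), sum(sample) / len(sample))  # the sample mean tracks the stream mean
```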
NASA Astrophysics Data System (ADS)
Drieniková, Katarína; Hrdinová, Gabriela; Naňo, Tomáš; Sakál, Peter
2010-01-01
The paper deals with the analysis of the theory of corporate social responsibility, risk management and the analytic hierarchy process, an exact method used in decision-making. Chapters 2 and 3 focus on presenting experience with the application of the method in formulating stakeholders' strategic goals within Corporate Social Responsibility (CSR) and, simultaneously, its utilization in minimizing environmental risks. The major benefit of this paper is the application of the analytic hierarchy process (AHP).
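As a minimal illustration of the AHP machinery referred to above (the criteria and pairwise judgments below are assumptions for the example, not the paper's data), the sketch derives priority weights from a pairwise comparison matrix via the principal-eigenvector method and checks Saaty's consistency ratio.

```python
import numpy as np

# Hypothetical pairwise comparisons of three CSR criteria on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
cr = ci / 0.58                           # random index RI = 0.58 for n = 3
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 means acceptable
```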
Analytical, Numerical, and Experimental Results on Turbulent Boundary Layers
1976-07-01
a pitot pressure rake where the spacing between probe centers was 0.5 in. near the wall and 1.0 in. away from the wall. Recently, measurements have ... Pressure Gradient, Part II. Analysis of the Experimental Data." BRL R 1543, June 1971. 51. Allen, J. M. "Pitot-Probe Displacement in a Supersonic Turbulent ... numbers; (4) a description of the data reduction of pitot pressure measurements utilizing these analytical results in order to obtain velocity
MERRA/AS: The MERRA Analytic Services Project Interim Report
NASA Technical Reports Server (NTRS)
Schnase, John; Duffy, Dan; Tamkin, Glenn; Nadeau, Denis; Thompson, Hoot; Grieg, Cristina; Luczak, Ed; McInerney, Mark
2013-01-01
MERRA/AS is a cyberinfrastructure resource that will combine iRODS-based Climate Data Server (CDS) capabilities with Cloudera MapReduce to serve MERRA analytic products, store the MERRA reanalysis data collection in an HDFS to enable parallel, high-performance, storage-side data reductions, manage storage-side driver, mapper, and reducer code sets and realized objects for users, and provide a library of commonly used spatiotemporal operations that can be composed to enable higher-order analyses.
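As a toy sketch of the storage-side reduction idea (the variable name, data shapes and values are invented for illustration; MERRA/AS itself runs Cloudera MapReduce over HDFS), a mapper emits per-month partial sums for a reanalysis variable and a reducer combines them into a long-term mean.

```python
from collections import defaultdict
import numpy as np

def mapper(month, grid):
    # Emit one (key, partial) pair per month: partial sums toward a long-term mean.
    yield "t2m_mean", (grid.sum(), grid.size)

def reducer(key, partials):
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return key, total / count

# Simulated "archive": twelve monthly 4x4 grids of 2-m temperature (kelvin).
archive = {f"2013-{m:02d}": 280 + np.random.rand(4, 4) for m in range(1, 13)}

groups = defaultdict(list)          # the shuffle step: group partials by key
for month, grid in archive.items():
    for key, value in mapper(month, grid):
        groups[key].append(value)

print([reducer(k, v) for k, v in groups.items()])
```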
NASA Technical Reports Server (NTRS)
Lautenschlager, L.; Perry, C. R., Jr. (Principal Investigator)
1981-01-01
The development of formulae for the reduction of multispectral scanner measurements to a single value (vegetation index) for predicting and assessing vegetative characteristics is addressed. The origin, motivation, and derivation of some four dozen vegetation indices are summarized. Empirical, graphical, and analytical techniques are used to investigate the relationships among the various indices. It is concluded that many vegetative indices are very similar, some being simple algebraic transforms of others.
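Among the several dozen indices of the kind surveyed, the normalized difference vegetation index (NDVI) is representative of the simple algebraic transforms of band measurements; the snippet below computes it from red and near-infrared reflectances (the sample values are illustrative only).

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Dense vegetation reflects strongly in NIR and weakly in red (NDVI ~ 0.70);
# sparse vegetation gives a much lower value (NDVI = 0.25 here).
print(ndvi([0.45, 0.50], [0.08, 0.30]))
```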
Overcoming Intuition: Metacognitive Difficulty Activates Analytic Reasoning
ERIC Educational Resources Information Center
Alter, Adam L.; Oppenheimer, Daniel M.; Epley, Nicholas; Eyre, Rebecca N.
2007-01-01
Humans appear to reason using two processing styles: System 1 processes that are quick, intuitive, and effortless, and System 2 processes that are slow, analytical, and deliberate and that occasionally correct the output of System 1. Four experiments suggest that System 2 processes are activated by metacognitive experiences of difficulty or disfluency…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert G.
This report describes how the intelligent load control (ILC) algorithm can be implemented to achieve peak demand reduction while minimizing impacts on occupant comfort. The algorithm was designed to minimize the additional sensor and configuration requirements, to enable a scalable and cost-effective implementation for both large and small-/medium-sized commercial buildings. The ILC algorithm uses an analytic hierarchy process (AHP) to dynamically prioritize the available curtailable loads based on both quantitative (deviation of zone conditions from set point) and qualitative (type of zone) rules. Although the ILC algorithm described in this report was highly tailored to work with rooftop units, it can be generalized for application to other building loads such as variable-air-volume (VAV) boxes and lighting systems.
NASA Technical Reports Server (NTRS)
Wang, Joseph; Escarpa, Alberto; Pumera, Martin; Feldman, Jason; Svehla, D. (Principal Investigator)
2002-01-01
A microfluidic analytical system for the separation and detection of organic peroxides, based on a microchip capillary electrophoresis device with an integrated amperometric detector, was developed. The new microsystem relies on the reductive detection of both organic acid peroxides and hydroperoxides at -700 mV (vs. Ag wire/AgCl). Factors influencing the separation and detection processes were examined and optimized. The integrated microsystem offers rapid measurements (within 130 s) of these organic-peroxide compounds, down to micromolar levels. A highly stable response for repetitive injections (RSD 0.35-3.12%; n = 12) reflects the negligible electrode passivation. Such a "lab-on-a-chip" device should be attractive for on-site analysis of organic peroxides, as desired for environmental screening and industrial monitoring.
Sorption and transport of iodine species in sediments from the Savannah River and Hanford Sites.
Hu, Qinhong; Zhao, Pihong; Moran, Jean E; Seaman, John C
2005-07-01
Iodine is an important element in studies of environmental protection and human health, global-scale hydrologic processes and nuclear nonproliferation. Biogeochemical cycling of iodine is complex, because iodine occurs in multiple oxidation states and as inorganic and organic species that may be hydrophilic, atmophilic, and biophilic. In this study, we applied new analytical techniques to study the sorption and transport behavior of iodine species (iodide, iodate, and 4-iodoaniline) in sediments collected at the Savannah River and Hanford Sites, where anthropogenic ¹²⁹I from prior nuclear fuel processing activities poses an environmental risk. We conducted integrated column and batch experiments to investigate the interconversion, sorption and transport of iodine species, and the sediments we examined exhibit a wide range in organic matter, clay mineralogy, soil pH, and texture. The results of our experiments illustrate complex behavior with various processes occurring, including iodate reduction, irreversible retention or mass loss of iodide, and rate-limited and nonlinear sorption. There was an appreciable iodate reduction to iodide, presumably mediated by the structural Fe(II) in some clay minerals; therefore, careful attention must be given to potential interconversion among species when interpreting the biogeochemical behavior of iodine in the environment. The different iodine species exhibited dramatically different sorption and transport behavior in three sediment samples, possessing different physico-chemical properties, collected from different depths at the Savannah River Site. Our study yielded additional insight into processes and mechanisms affecting the geochemical cycling of iodine in the environment, and provided quantitative estimates of key parameters (e.g., extent and rate of sorption) for risk assessment at these sites.
Principal polynomial analysis.
Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus
2014-11-01
This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves, instead of straight lines. Contrary to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it permits understanding of the identified features in the input domain, where the data have physical meaning. Moreover, it allows the performance of dimensionality reduction to be evaluated in sensible (input-domain) units. Volume preservation also allows an easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. And fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, on both synthetic and real datasets from the UCI repository.
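The exact PPA fitting procedure is given in the paper; the sketch below is only a one-step caricature of the idea under our own simplifying assumptions: project the data onto its leading principal direction, then model the orthogonal complement as a polynomial function of that projection by a simple univariate regression, so a curve rather than a straight line absorbs the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(-2, 2, 300)
X = np.c_[t, 0.5 * t**2] + 0.05 * rng.standard_normal((300, 2))  # curved 1-D manifold

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
alpha = Xc @ Vt[0]          # projection on the leading principal direction
resid = Xc @ Vt[1]          # coordinate in the orthogonal complement

coeffs = np.polyfit(alpha, resid, deg=2)   # univariate polynomial regression
pred = np.polyval(coeffs, alpha)
print("residual variance before/after:", resid.var(), (resid - pred).var())
```

The drop in residual variance shows how much of the structure a straight-line PCA direction misses and a polynomial recovers.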
Diwakar, Prasoon K.; Harilal, Sivanandan S.; LaHaye, Nicole L.; Hassanein, Ahmed; Kulkarni, Pramod
2015-01-01
Laser parameters, typically wavelength, pulse width, irradiance, repetition rate, and pulse energy, are critical parameters which influence the laser ablation process and thereby the LA-ICP-MS signal. In recent times, femtosecond laser ablation has gained popularity owing to the reduction in fractionation-related issues and improved analytical performance, which can provide matrix-independent sampling. The advantage offered by fs-LA is due to the pulse duration of the laser being shorter than the phonon relaxation time and heat diffusion time; hence, thermal effects are minimized in fs-LA. Recently, fs-LA-ICP-MS has demonstrated improved analytical performance compared to ns-LA-ICP-MS, but the detailed mechanisms and processes are still not clearly understood. The improvement of fs-LA-ICP-MS over ns-LA-ICP-MS underlines the importance of laser pulse duration and related effects on the ablation process. In this study, we have investigated the influence of laser pulse width (40 fs to 0.3 ns) and energy on LA-ICP-MS signal intensity and repeatability using a brass sample. Experiments were performed in single-spot ablation mode as well as rastering ablation mode to monitor the Cu/Zn ratio. The recorded ICP-MS signal was correlated with the total particle counts generated during laser ablation as well as the particle size distribution. Our results show the importance of pulse width effects in the fs regime that become more pronounced when moving from the femtosecond to the picosecond and nanosecond regimes. PMID:26664120
Big Data Analytics for a Smart Green Infrastructure Strategy
NASA Astrophysics Data System (ADS)
Barrile, Vincenzo; Bonfa, Stefano; Bilotta, Giuliana
2017-08-01
As is well known, Big Data is a term for data sets so large or complex that traditional data processing applications are not sufficient to process them. The term "Big Data" often refers to the use of predictive analytics, user behavior analytics, or other advanced data analytics methods that extract value from data, and rarely to a particular size of data set. This is especially true for the huge amount of Earth Observation data that satellites constantly orbiting the Earth transmit daily.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrash, Daniel A.; Lalonde, Stefan V.; González-Arismendi, Gabriela
The formation of dolomite in modern peritidal environments is linked to the degradation of buried microbial mats, with complexation of Ca and Mg by extracellular polymeric substances (EPSs) and alkalinity generation through organic carbon respiration facilitating the nucleation of dolomite precursors. In the past two decades, microbial sulfate reduction, methanogenesis, and methanotrophy have all been considered as potential drivers of the nucleation process, but it remains unclear why dolomite formation could not also occur in suboxic sediments where abundant alkalinity is produced by processes linked to Mn(IV) and/or Fe(III) reduction coupled with the diffusion and reoxidation of reduced sulfur species. Here we report the interstitial occurrence of spheroidal aggregates of nanometer-scale Ca-rich dolomite rhombohedra within suboxic sediments associated with remnant microbial mats that developed in the peritidal zone of the Archipelago Los Roques, Venezuela. Multiple analytical tools, including EPMA, ICP-MS, synchrotron-based XRF and XRD, and spatially resolved XANES microanalyses, show that the dolomite-cemented interval exhibits depleted bulk iron concentrations, but is interstitially enriched in Mn and elemental sulfur (S⁰). Manganese occurs in several oxidation states, indicating that the dolomite-cemented interval was the locus of complex biological redox transformations characterized by coupled Mn and S cycling. The tight correspondence between sedimentary Mn and MgCO₃ concentrations further hints at a direct role for Mn during dolomitization. While additional studies are required to confirm its relevance in natural settings, we propose a model by which coupled Mn-S redox cycling may promote alkalinity generation, and thus dolomite formation, in a manner similar to, or even more efficient than, bacterial sulfate reduction alone.
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
The aim of this work was to develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter, and with further acceleration and a method to account for multiple scatter it may be useful for practical scatter correction schemes.
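The abstract does not give the formula for its scaled root-mean-square difference metric; a plausible form (an assumption on our part) normalizes the RMS difference between the two scatter images by the mean of the reference estimate, as in the sketch below with stand-in data.

```python
import numpy as np

def scaled_rmsd(estimate, reference):
    """RMS difference between two scatter images, scaled by the reference mean."""
    estimate, reference = np.asarray(estimate, float), np.asarray(reference, float)
    return np.sqrt(np.mean((estimate - reference) ** 2)) / reference.mean()

mc = np.random.poisson(100, (256, 256)).astype(float)   # stand-in Monte Carlo image
analytic = mc + np.random.normal(0, 2, mc.shape)        # stand-in analytical image
print(f"scaled RMSD: {scaled_rmsd(analytic, mc):.3%}")
```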
Saratale, Rijuta G; Saratale, Ganesh D; Govindwar, Sanjay P; Kim, Dong S
2015-01-01
Complete decolorization and detoxification of Reactive Orange 4 within 5 h (pH 6.6, at 30°C) by isolated Lysinibacillus sp. RGS was observed. Significant reductions in TOC (93%) and COD (90%) were indicative of conversion of the complex dye into simple products, which were identified as naphthalene moieties by various analytical techniques (HPLC, FTIR, and GC-MS). Supplementation with agricultural waste extract was considered a better option for making the process cost-effective. Oxido-reductive enzymes were found to be involved in the degradation mechanism. Finally, loofa-immobilized Lysinibacillus sp. cells in a fixed-bed bioreactor showed significant decolorization, with reductions in TOC (51 and 64%) and COD (54 and 66%) for synthetic and textile effluent at 30 and 35 mL h⁻¹ feeding rates, respectively. The degraded metabolites were shown to be non-toxic by phytotoxicity and photosynthetic pigment content studies on Sorghum vulgare and Phaseolus mungo. In addition, nitrogen-fixing and phosphate-solubilizing microbes were less affected in the treated wastewater, and thus the treated effluent can be used for irrigation purposes. This work could be useful for the development of efficient and ecofriendly technologies to reduce the dye content of wastewater to permissible levels at affordable cost.
ERIC Educational Resources Information Center
Ritter, William A.; Barnard-Brak, Lucy; Richman, David M.; Grubb, Laura M.
2018-01-01
Richman et al. ("J Appl Behav Anal" 48:131-152, 2015) completed a meta-analysis of single-case experimental design data on noncontingent reinforcement (NCR) for the treatment of problem behavior exhibited by individuals with developmental disabilities. Results showed that (1) NCR produced very large effect sizes for reduction in…
Non-traditional applications of laser desorption/ionization mass spectrometry
NASA Astrophysics Data System (ADS)
McAlpin, Casey R.
Seven studies were carried out using laser desorption/ionization mass spectrometry (LDI MS) to develop enhanced methodologies for a variety of analyte systems by investigating analyte chemistries, ionization processes, and the elimination of spectral interferences. Applications of LDI and matrix-assisted laser desorption/ionization (MALDI) have previously been limited by poorly understood ionization phenomena and spectral interferences from matrices. MALDI MS is well suited to the analysis of proteins. However, the proteins associated with bacteriophages often form complexes that are too massive for detection with a standard MALDI mass spectrometer. As such, methodologies for pretreatment of these samples are discussed in detail in the first chapter. Pretreatment of bacteriophage samples with reducing agents disrupted disulfide linkages and allowed enhanced detection of bacteriophage proteins. The second chapter focuses on the use of MALDI MS for lipid compounds whose molecular mass is significantly less than that of the proteins for which MALDI is most often applied. The use of MALDI MS for lipid analysis presented unique challenges such as matrix interference and differential ionization efficiencies. It was observed that optimization of the matrix system and addition of cationization reagents mitigated these challenges and resulted in an enhanced methodology for MALDI MS of lipids. One of the challenges commonly encountered in efforts to expand MALDI MS applications is, as previously mentioned, interference introduced by organic matrix molecules. The third chapter focuses on the development of a novel inorganic matrix replacement system called metal oxide laser ionization mass spectrometry (MOLI MS). In contrast to other matrix replacements, considerable effort was devoted to elucidating the ionization mechanism. It was shown that chemisorption of analytes to the metal oxide surface produced acidic adsorbed species which then protonated free analyte molecules. Expanded applications of MOLI MS were developed following description of the ionization mechanism. A series of experiments was carried out involving treatment of metal oxide surfaces with reagent molecules to expand MOLI MS and develop enhanced MOLI MS methodologies. It was found that treatment of the metal oxide surface with a small molecule acting as a proton source extended MOLI MS to analytes that do not form acidic adsorbed species. Proton-source-pretreated MOLI MS was then used for the analysis of oils obtained from the fast, anoxic pyrolysis of biomass (py-oil). These samples are complex and produce MOLI mass spectra with many peaks. In this experiment, methods of data reduction including Kendrick mass defects and nominal mass z*-scores, which are commonly used for the study of petroleum fractions, were used to interpret these spectra and identify the major constituents of py-oils. Through data reduction and collision-induced dissociation (CID), homologous series of compounds were rapidly identified. The final chapter involves using metal oxides to catalytically cleave the ester linkages of fatty-acid-containing lipids in addition to ionizing them. The cleavage process results in the production of spectra similar to those observed with saponification/methylation. Fatty acid profiles were generated for a variety of micro-organisms to differentiate between bacterial species. (Abstract shortened by UMI.)
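The Kendrick mass defect reduction mentioned above rescales each measured mass so that CH2 repeat units fall at exact integer spacings; members of a homologous series then share (nearly) the same defect and can be grouped at a glance. A minimal sketch, with an invented peak list for illustration:

```python
CH2 = 14.01565  # exact mass of a CH2 repeat unit

def kendrick_mass_defect(mz):
    kendrick_mass = mz * 14.0 / CH2           # rescale so CH2 spacing is exactly 14
    return round(kendrick_mass) - kendrick_mass

# A hypothetical alkyl homologous series: masses differ by CH2,
# so the defect is essentially constant (~0.060 for all three).
for mz in (201.1643, 215.1800, 229.1956):
    print(mz, round(kendrick_mass_defect(mz), 4))
```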
Incorporating Learning Analytics in the Classroom
ERIC Educational Resources Information Center
Thille, Candace; Zimmaro, Dawn
2017-01-01
This chapter describes an open learning analytics system focused on learning process measures and designed to engage instructors and students in an evidence-informed decision-making process to improve learning.
Largoni, Martina; Facco, Pierantonio; Bernini, Donatella; Bezzo, Fabrizio; Barolo, Massimiliano
2015-10-10
Monitoring batch bioreactors is a complex task, because several sources of variability can affect a running batch and impact the final product quality. Additionally, the product quality itself may not be measurable on line, but requires sampling and lab analysis taking several days to complete. In this study we show that, by using appropriate process analytical technology tools, the operation of an industrial batch bioreactor used in avian vaccine manufacturing can be effectively monitored as the batch progresses. Multivariate statistical models are built from historical databases of batches already completed, and they are used to enable the real-time identification of the variability sources, to reliably predict the final product quality, and to improve process understanding, paving the way to a reduction of final product rejections as well as of the product cycle time. It is also shown that the product quality "builds up" mainly during the first half of a batch, suggesting on the one hand that reducing the variability during this period is crucial, and on the other that the batch length can possibly be shortened. Overall, the study demonstrates that, by using a Quality-by-Design approach centered on the appropriate use of mathematical modeling, quality can indeed be built "by design" into the final product, whereas end-point product testing can progressively lose its importance in product manufacturing. Copyright © 2015 Elsevier B.V. All rights reserved.
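The abstract does not name the specific multivariate model; one common PAT choice consistent with the description is PCA-based monitoring of historical good batches with a Hotelling T² statistic that flags a running batch when its score excursion exceeds a control limit. The sketch below is built on that assumption, with synthetic stand-in data.

```python
import numpy as np

rng = np.random.default_rng(1)
historical = rng.normal(0, 1, (40, 8))   # 40 completed batches x 8 process variables

mu, sigma = historical.mean(0), historical.std(0)
Z = (historical - mu) / sigma
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:2].T                             # retain two principal components
lam = (S[:2] ** 2) / (len(Z) - 1)        # variance of each score

def hotelling_t2(x):
    """T2 distance of a new batch from the historical operating region."""
    scores = ((x - mu) / sigma) @ P
    return float(np.sum(scores ** 2 / lam))

normal_batch = rng.normal(0, 1, 8)
faulty_batch = rng.normal(0, 1, 8) + np.array([0, 3, 0, 0, 2, 0, 0, 0])
print(hotelling_t2(normal_batch), hotelling_t2(faulty_batch))  # the fault scores higher
```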
Chen, Ligang; Zeng, Qinglei; Du, Xiaobo; Sun, Xin; Zhang, Xiaopan; Xu, Yang; Yu, Aimin; Zhang, Hanqi; Ding, Lan
2009-11-01
In this work, a new method was developed for the determination of melamine (MEL) in animal feed. The method is based on the on-line coupling of dynamic microwave-assisted extraction (DMAE) to strong cation-exchange (SCX) resin clean-up. The MEL was first extracted with 90% acidified methanol aqueous solution (v/v, pH = 3) under the action of microwave energy, and the extract was then cooled and passed through the SCX resin. Thus, the protonated MEL was retained on the resin through ion-exchange interaction and the sample matrixes were washed out. Some obvious benefits were achieved, such as acceleration of the analytical process, together with reductions in manual handling, risk of contamination, loss of analyte, and sample consumption. Finally, the analyte was separated by a liquid chromatograph with an SCX analytical column, and then identified and quantified by tandem mass spectrometry in positive ionization mode with multiple-reaction monitoring. The DMAE parameters were optimized by a Box-Behnken design. The linearity of quantification obtained by analyzing matrix-matched standards is in the range of 50-5,000 ng g⁻¹. The limit of detection and limit of quantification are 12.3 and 41.0 ng g⁻¹, respectively. The mean intra- and inter-day precisions expressed as relative standard deviations at three fortified levels (50, 250, and 500 ng g⁻¹) are 5.1% and 7.3%, respectively, and the recoveries of MEL are in the range of 76.1-93.5%. The proposed method was successfully applied to determine MEL in different animal feeds obtained from the local market. MEL was detectable at contents of 279, 136, and 742 ng g⁻¹ in three samples.
Kong, Qingkun; Wang, Yanhu; Zhang, Lina; Xu, Caixia; Yu, Jinghua
2018-07-01
A microfluidic paper-based analytical device (μPAD) was simply constructed for highly sensitive detection of L-glutamic acid and L-cysteine. The μPAD featured two functional zones on one strip of paper, realized with multi-plate ZnO nanoflowers (ZnO NFs) and molecularly imprinted polymer (MIP) membranes. The as-designed μPAD exploits the inherent relation between the photo-oxidation products and the photoelectrochemical (PEC) performance for the highly sensitive detection of biomolecules. The ZnO NFs were utilized to produce photo-oxidation products by driving the reaction between ferrocenemethanol and photogenerated holes under ultraviolet light. The photo-oxidation products easily flowed to the MIP membranes along the hydrophilic channel via capillary action. The MIP membranes, as the receptors, specifically recognized the analytes and decreased the electron loss by blocking the reduction reaction between electrons and photo-oxidation products. The PEC response, generated by the electron-transfer processes, was directly related to the concentrations of the target analytes. The μPAD showed detection limits for L-glutamic acid and L-cysteine as low as 9.6 pM and 24 pM, respectively. Moreover, it is interesting to point out that the ZnO NF nanostructure shows a superior PEC signal compared with ZnO nanospheres, nanosheets, and nanorod arrays. In the current work, photo-oxidation products are utilized to achieve highly sensitive PEC detection of biomolecules under ultraviolet light while avoiding the effects of multiple modifications in the same region on reproducibility, which opens up rich possibilities for designing more efficient analytical strategies. Copyright © 2018 Elsevier B.V. All rights reserved.
Massin, Sophie
2012-06-01
This article aims to help resolve the apparent paradox of producers of addictive goods who claim to be socially responsible while marketing a product clearly identified as harmful. It argues that reputation effects are crucial in this issue and that determining whether harm reduction practices are costly or profitable for the producers can help to assess the sincerity of their discourse. An analytical framework is developed, based on an epidemic model of addictive consumption that includes a deterrent effect of heavy use on initiation. This framework enables us to establish a clear distinction between a simple responsible discourse and genuine harm reduction practices and, among harm reduction practices, between use reduction practices and micro harm reduction practices. Using simulations based on tobacco sales in France from 1950 to 2008, we explore the impact of three corresponding types of action: communication on damage, restraining selling practices, and development of safer products, on total sales and on the social cost. We notably find that restraining selling practices toward light users, that is, preventing light users from escalating to heavy use, can be profitable for the producer, especially at early stages of the epidemic, but that such practices also contribute to increasing the social cost. These results suggest that the existence of a deterrent effect of heavy use on the initiation of consumption of an addictive good can shed new light on important issues, such as the motivations for corporate social responsibility and the definition of responsible actions in the particular case of harm reduction. Copyright © 2012 Elsevier Ltd. All rights reserved.
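The article's model is not reproduced in the abstract; the toy ODE system below (all parameter names and values are our own inventions) merely illustrates the one mechanism it names, a deterrent effect in which the stock of heavy users H suppresses initiation into light use L.

```python
def simulate(beta=0.4, delta=2.0, esc=0.05, quit_l=0.1, quit_h=0.05,
             years=60, dt=0.01):
    """Toy light/heavy-user epidemic in which heavy use deters initiation."""
    L, H = 0.01, 0.0                     # initial prevalence fractions
    for _ in range(int(years / dt)):
        susceptible = 1.0 - L - H
        initiation = beta * L * susceptible / (1 + delta * H)  # deterrence term
        dL = initiation - esc * L - quit_l * L                 # light users
        dH = esc * L - quit_h * H                              # escalation to heavy use
        L, H = L + dt * dL, H + dt * dH
    return L, H

print("light, heavy prevalence after 60 years:", simulate())
print("with stronger deterrence:", simulate(delta=8.0))  # fewer users initiate
```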
Ross, Robert M; Pennycook, Gordon; McKay, Ryan; Gervais, Will M; Langdon, Robyn; Coltheart, Max
2016-07-01
It has been proposed that deluded and delusion-prone individuals gather less evidence before forming beliefs than those who are not deluded or delusion-prone. The primary source of evidence for this "jumping to conclusions" (JTC) bias is provided by research that utilises the "beads task" data-gathering paradigm. However, the cognitive mechanisms subserving data gathering in this task are poorly understood. In the largest published beads task study to date (n = 558), we examined data gathering in the context of influential dual-process theories of reasoning. Analytic cognitive style (the willingness or disposition to critically evaluate outputs from intuitive processing and engage in effortful analytic processing) predicted data gathering in a non-clinical sample, but delusional ideation did not. The relationship between data gathering and analytic cognitive style suggests that dual-process theories of reasoning can contribute to our understanding of the beads task. It is not clear why delusional ideation was not found to be associated with data gathering or analytic cognitive style.
Bak, Jin Seop
2015-01-01
In order to address the limitations associated with the inefficient pasteurization platform used to make Makgeolli, such as the presence of turbid colloidal dispersions in suspension, commercially available Makgeolli was minimally processed using a low-pressure homogenization-based pasteurization (LHBP) process. This continuous process demonstrates that promptly reducing the exposure time to excessive heat using either large molecules or insoluble particles can dramatically improve internal quality and decrease irreversible damage. Specifically, optimal homogenization increased concomitantly with physical parameters such as colloidal stability (65.0% of maximum and below 25-μm particles) following two repetitions at 25.0 MPa. However, biochemical parameters such as microbial population, acidity, and the presence of fermentable sugars rarely affected Makgeolli quality. Remarkably, there was a 4.5-log reduction in the number of Saccharomyces cerevisiae target cells at 53.5°C for 70 sec in optimally homogenized Makgeolli. This value was higher than the 37.7% measured from traditionally pasteurized Makgeolli. In contrast to the analytical similarity among homogenized Makgeollis, our objective quality evaluation demonstrated significant differences between pasteurized (or unpasteurized) Makgeolli and LHBP-treated Makgeolli. Keywords: low-pressure homogenization-based pasteurization; Makgeolli; minimal processing-preservation; Saccharomyces cerevisiae; suspension stability.
Models of formation and some algorithms of hyperspectral image processing
NASA Astrophysics Data System (ADS)
Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.
2014-12-01
Algorithms and information technologies for processing Earth hyperspectral imagery are presented. Several new approaches are discussed. Peculiar properties of processing the hyperspectral imagery, such as multifold signal-to-noise reduction, atmospheric distortions, access to spectral characteristics of every image point, and high dimensionality of data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are processed much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions, which makes it possible to solve the stated problem based on analysis of a distorted image in contrast to analytical multiparametric models, was proposed. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with a higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.
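The paper's noise-robust similarity measure is not specified in the abstract; a standard baseline against which such measures are compared is the spectral angle between two pixel spectra, which is invariant to brightness scaling but, as the authors note for measures of this kind, degrades under additive noise. A sketch with invented five-band spectra:

```python
import numpy as np

def spectral_angle(s1, s2):
    """Angle (radians) between two pixel spectra; 0 means identical shape."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

s = np.array([0.12, 0.18, 0.35, 0.40, 0.22])          # hypothetical 5-band spectrum
print(spectral_angle(s, 1.7 * s))                      # ~0: same material, brighter
print(spectral_angle(s, s + np.random.normal(0, 0.05, 5)))  # noise inflates the angle
```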
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
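In symbols, one way to state the definition above (the notation is ours, not the authors'): a result x for an individual sample is flagged as an irregular analytical error when

```latex
\left| x - x_{\mathrm{ref}} \right| > k\, u_c + \left| b \right|
```

where x_ref is the reference measurement procedure result, u_c the combined process measurement uncertainty of the routine assay, b its method bias against the reference measurement system, and k a coverage factor; the right-hand side is the linear combination of process measurement uncertainty and method bias described in the text.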
Climate Analytics as a Service. Chapter 11
NASA Technical Reports Server (NTRS)
Schnase, John L.
2016-01-01
Exascale computing, big data, and cloud computing are driving the evolution of large-scale information systems toward a model of data-proximal analysis. In response, we are developing a concept of climate analytics as a service (CAaaS) that represents a convergence of data analytics and archive management. With this approach, high-performance compute-storage implemented as an analytic system is part of a dynamic archive comprising both static and computationally realized objects. It is a system whose capabilities are framed as behaviors over a static data collection, but where queries cause results to be created, not found and retrieved. Those results can be the product of a complex analysis, but, importantly, they also can be tailored responses to the simplest of requests. NASA's MERRA Analytic Service and associated Climate Data Services API provide a real-world example of climate analytics delivered as a service in this way. Our experiences reveal several advantages to this approach, not the least of which is orders-of-magnitude time reduction in the data assembly task common to many scientific workflows.
100-B/C Target Analyte List Development for Soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
R.W. Ovink
2010-03-18
This report documents the process used to identify source area target analytes in support of the 100-B/C remedial investigation/feasibility study addendum to DOE/RL-2008-46. This report also establishes the analyte exclusion criteria applicable for 100-B/C use and the analytical methods needed to analyze the target analytes.
Understanding Business Analytics
2015-01-05
analytics have been used in organizations for a variety of reasons for quite some time; ranging from the simple (generating and understanding business analytics ... process. How well these two components are orchestrated will determine the level of success an organization has in
Perspectives on bioanalytical mass spectrometry and automation in drug discovery.
Janiszewski, John S; Liston, Theodore E; Cole, Mark J
2008-11-01
The use of high-speed synthesis technologies has resulted in a steady increase in the number of new chemical entities active in the drug discovery research stream. Large organizations can have thousands of chemical entities in various stages of testing and evaluation across numerous projects on a weekly basis. Qualitative and quantitative measurements made using LC/MS are integrated throughout this process, from early-stage lead generation through candidate nomination. Nearly all analytical processes and procedures in modern research organizations are automated to some degree. This includes both hardware and software automation. In this review we discuss bioanalytical mass spectrometry and automation as components of the analytical chemistry infrastructure in pharma. Analytical chemists are presented as members of distinct groups with similar skill sets that build automated systems, manage test compounds, assays and reagents, and deliver data to project teams. The ADME-screening process in drug discovery is used as a model to highlight the relationships between analytical tasks in drug discovery. Emerging software and process automation tools are described that can potentially address gaps and link analytical chemistry related tasks. The role of analytical chemists and groups in modern 'industrialized' drug discovery is also discussed.
The role of proteomics in studies of protein moonlighting.
Beynon, Robert J; Hammond, Dean; Harman, Victoria; Woolerton, Yvonne
2014-12-01
The increasing acceptance that proteins may exert multiple functions in the cell brings with it new analytical challenges that will have an impact on the field of proteomics. Many proteomics workflows begin by destroying information about the interactions between different proteins, and the reduction of a complex protein mixture to constituent peptides also scrambles information about the combinatorial potential of post-translational modifications. To bring the focus of proteomics on to the domain of protein moonlighting will require novel analytical and quantitative approaches.
Velocity Spread Reduction for Axis-encircling Electron Beam Generated by Single Magnetic Cusp
NASA Astrophysics Data System (ADS)
Jeon, S. G.; Baik, C. W.; Kim, D. H.; Park, G. S.; Sato, N.; Yokoo, K.
2001-10-01
Physical characteristics of an annular Pierce-type electron gun are investigated analytically. An annular electron gun is used in conjunction with a non-adiabatic magnetic reversal and an adiabatic compression to produce an axis-encircling electron beam. Velocity spread close to zero is realized with an initial canonical angular momentum spread at the cathode when the beam trajectory does not coincide with the magnetic flux line. Both the analytical calculation and the EGUN code simulation confirm this phenomenon.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, M.D.
Analytical Chemistry of PCBs offers a review of the physical, chemical, commercial, environmental and biological properties of PCBs. It also defines and discusses six discrete steps of analysis: sampling, extraction, cleanup, determination, data reduction, and quality assurance. The final chapter provides a discussion of collaborative testing - the ultimate step in method evaluation. Dr. Erickson also provides a bibliography of over 1200 references, critical reviews of the primary literature, and five appendices which present ancillary material on PCB nomenclature, physical properties, composition of commercial mixtures, mass spectral characteristics, and GC/ECD chromatograms.
Wirges, M; Funke, A; Serno, P; Knop, K; Kleinebudde, P
2013-05-05
Incorporation of an active pharmaceutical ingredient (API) into the coating layer of film-coated tablets is a method mainly used to formulate fixed-dose combinations. Uniform and precise spray-coating of an API represents a substantial challenge, which can be addressed by applying Raman spectroscopy as a process analytical tool. In the pharmaceutical industry, Raman spectroscopy is still mainly used as a bench-top laboratory analytical method and usually not implemented in the production process. Concerning application in the production process, many scientific approaches stop at the level of feasibility studies and do not manage the step to production-scale process applications. The present work puts the scale-up of an active coating process into focus, which is a step of highest importance during pharmaceutical development. Active coating experiments were performed at lab and production scale. Using partial least squares (PLS), a multivariate model was constructed by correlating in-line measured Raman spectral data with the coated amount of API. By transferring this model, implemented for a lab-scale process, to a production-scale process, the robustness of this analytical method, and thus its applicability as a Process Analytical Technology (PAT) tool for correct endpoint determination in pharmaceutical manufacturing, could be shown. Finally, this method was validated according to the European Medicines Agency (EMA) guideline with respect to the special requirements of the applied in-line model development strategy. Copyright © 2013 Elsevier B.V. All rights reserved.
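A minimal sketch of the kind of PLS calibration described above, with synthetic spectra standing in for the in-line Raman data (the peak shape, noise level and units are invented; scikit-learn's PLSRegression is one common implementation, not necessarily the one used in the paper):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)
n_spectra, n_wavenumbers = 120, 500
api_amount = rng.uniform(0, 10, n_spectra)            # coated API amount (arbitrary units)

# Each spectrum: a Gaussian API band whose height scales with the coated amount, plus noise.
band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 250) / 10) ** 2)
X = np.outer(api_amount, band) + 0.05 * rng.standard_normal((n_spectra, n_wavenumbers))

pls = PLSRegression(n_components=3)
pls.fit(X[:100], api_amount[:100])                    # calibrate on "lab-scale" runs
pred = pls.predict(X[100:]).ravel()                   # predict held-out spectra
rmsep = np.sqrt(np.mean((pred - api_amount[100:]) ** 2))
print(f"RMSEP: {rmsep:.3f}")                          # endpoint = predicted amount reaching target
```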
In conflict with ourselves? An investigation of heuristic and analytic processes in decision making.
Bonner, Carissa; Newell, Ben R
2010-03-01
Many theorists propose two types of processing: heuristic and analytic. In conflict tasks, in which these processing types lead to opposing responses, giving the analytic response may require both detection and resolution of the conflict. The ratio bias task, in which people tend to treat larger numbered ratios (e.g., 20/100) as indicating a higher likelihood of winning than do equivalent smaller numbered ratios (e.g., 2/10), is considered to induce such a conflict. Experiment 1 showed response time differences associated with conflict detection, resolution, and the amount of conflict induced. The conflict detection and resolution effects were replicated in Experiment 2 and were not affected by decreasing the influence of the heuristic response or decreasing the capacity to make the analytic response. The results are consistent with dual-process accounts, but a single-process account in which quantitative, rather than qualitative, differences in processing are assumed fares equally well in explaining the data.
Equivalent reduced model technique development for nonlinear system dynamic response
NASA Astrophysics Data System (ADS)
Thibault, Louis; Avitabile, Peter; Foley, Jason; Wolfson, Janet
2013-04-01
The dynamic response of structural systems commonly involves nonlinear effects. Often, structural systems are made up of several components whose individual behavior is essentially linear compared to the total assembled system. However, the assembly of linear components using highly nonlinear connection elements or contact regions causes the entire system to become nonlinear. Conventional transient nonlinear integration of the equations of motion can be extremely computationally intensive, especially when the finite element models describing the components are very large and detailed. In this work, the equivalent reduced model technique (ERMT) is developed to address complicated nonlinear contact problems. ERMT utilizes a highly accurate model reduction scheme, the system equivalent reduction expansion process (SEREP). Extremely reduced-order models that provide the dynamic characteristics of linear components, which are interconnected with highly nonlinear connection elements, are formulated with SEREP for dynamic response evaluation using direct integration techniques. The full-space solution is compared to the response obtained using drastically reduced models to make evident the usefulness of the technique for a variety of analytical cases.
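For reference, the core of the SEREP reduction can be written in a few lines (a generic sketch under our own toy model, not the authors' code): the transformation matrix maps the retained "active" DOFs back to the full space through the kept mode shapes, and the reduced matrices preserve those modes exactly.

```python
import numpy as np

def serep(phi_full, active_dofs):
    """SEREP transformation: T = Phi_full @ pinv(Phi_active)."""
    phi_a = phi_full[active_dofs, :]          # mode shapes at the retained DOFs
    return phi_full @ np.linalg.pinv(phi_a)

# Toy 4-DOF spring-mass chain; keep 2 modes and 2 DOFs.
M = np.eye(4)
K = np.array([[ 2, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], float)
w2, phi = np.linalg.eigh(np.linalg.solve(M, K))   # full-model modes (ascending)
T = serep(phi[:, :2], active_dofs=[0, 3])         # 2 retained modes, DOFs 1 and 4
M_r, K_r = T.T @ M @ T, T.T @ K @ T               # drastically reduced system matrices

print(np.sqrt(w2[:2]))                            # full-model frequencies (rad/s)
ev = np.linalg.eigvals(np.linalg.solve(M_r, K_r)).real
print(np.sort(np.sqrt(ev)))                       # the reduced model reproduces them
```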
Toske, Steven G; McConnell, Jennifer B; Brown, Jaclyn L; Tuten, Jennifer M; Miller, Erin E; Phillips, Monica Z; Vazquez, Etienne R; Lurie, Ira S; Hays, Patrick A; Guest, Elizabeth M
2017-03-01
A trace processing impurity found in certain methamphetamine exhibits was isolated and identified as trans-N-methyl-4-methyl-5-phenyl-4-penten-2-amine hydrochloride (1). It was determined that this impurity was produced via reductive amination of trans-4-methyl-5-phenyl-4-penten-2-one (4), which was one of a cluster of related ketones generated during the synthesis of 1-phenyl-2-propanone (P2P) from phenylacetic acid and lead (II) acetate. This two-step sequence resulted in methamphetamine containing elevated levels of 1. In contrast, methamphetamine produced from P2P made by other methods produced insignificant (ultra-trace or undetectable) amounts of 1. These results confirm that 1 is a synthetic marker compound for the phenylacetic acid and lead (II) acetate method. Analytical data for 1 and 4, and a postulated mechanism for the production of 4, are presented. Copyright © 2015 John Wiley & Sons, Ltd.
Amplified biosensing using the horseradish peroxidase-mimicking DNAzyme as an electrocatalyst.
Pelossof, Gilad; Tel-Vered, Ran; Elbaz, Johann; Willner, Itamar
2010-06-01
The hemin/G-quadruplex horseradish peroxidase-mimicking DNAzyme is assembled on Au electrodes. It reveals bioelectrocatalytic properties and electrocatalyzes the reduction of H₂O₂. The bioelectrocatalytic functions of the hemin/G-quadruplex DNAzyme are used to develop electrochemical sensors that follow the activity of glucose oxidase, and biosensors for the detection of DNA or low-molecular-weight substrates (adenosine monophosphate, AMP). Hairpin nucleic acid structures that include the G-quadruplex sequence in a caged configuration and the nucleic acid sequence complementary to the analyte DNA, or the aptamer sequence for AMP, are immobilized on Au-electrode surfaces. In the presence of the DNA analyte, or AMP, the hairpin structures are opened, and the hemin/G-quadruplex horseradish peroxidase-mimicking DNAzyme structures are generated on the electrode surfaces. The bioelectrocatalytic cathodic currents generated by the functionalized electrodes, upon the electrochemical reduction of H₂O₂, provide a quantitative measure for the detection of the target analytes. The DNA target was analyzed with a detection limit of 1 × 10⁻¹² M, while the detection limit for analyzing AMP was 1 × 10⁻⁶ M. Methods to regenerate the sensing surfaces are presented.
NASA Astrophysics Data System (ADS)
Wang, Y. B.; Zhu, X. W.; Dai, H. H.
2016-08-01
Though widely used in modelling nano- and microstructures, Eringen's differential model shows some inconsistencies, and recent studies have demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze the static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, further showing the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
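For orientation, the two-phase local/nonlocal constitutive law used in this line of work is commonly written as follows (the notation is ours, and the bi-exponential kernel is the usual choice in this literature, stated here as an assumption rather than taken from the abstract):

```latex
t(x) = \xi_1\, E\, \varepsilon(x)
     + \xi_2\, E \int_0^L \frac{1}{2\kappa}\, e^{-|x-x'|/\kappa}\, \varepsilon(x')\, \mathrm{d}x',
\qquad \xi_1 + \xi_2 = 1
```

Here t is the stress, ε the strain, E Young's modulus, κ the nonlocal length parameter, and ξ₁, ξ₂ the local and nonlocal phase fractions; the reduction method referred to above converts the resulting integro-differential bending equation into an equivalent differential equation with mixed boundary conditions.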
Role of microextraction sampling procedures in forensic toxicology.
Barroso, Mário; Moreno, Ivo; da Fonseca, Beatriz; Queiroz, João António; Gallardo, Eugenia
2012-07-01
The last two decades have provided analysts with more sensitive technology, enabling scientists from all analytical fields to see what they were not able to see just a few years ago. This increased sensitivity has allowed drug detection at very low concentrations and testing in unconventional samples (e.g., hair, oral fluid and sweat), which, despite their low analyte concentrations, has also led to a reduction in sample size. Along with this reduction, and in response to the use of excessive amounts of potentially toxic organic solvents (with the consequent environmental pollution and the costs associated with their proper disposal), there has been a growing tendency to use miniaturized sampling techniques. These sampling procedures reduce organic solvent consumption to a minimum and at the same time provide a rapid, simple and cost-effective approach. In addition, it is possible to achieve at least some degree of automation when using these techniques, which enhances sample throughput. These miniaturized sample preparation techniques may be roughly categorized into solid-phase and liquid-phase microextraction, depending on the nature of the analyte. This paper reviews recently published literature on the use of microextraction sampling procedures, with a special focus on the field of forensic toxicology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peintler-Krivan, Emese; Van Berkel, Gary J; Kertesz, Vilmos
2010-01-01
An emitter electrode with an electroactive poly(pyrrole) (PPy) polymer film coating was constructed for use in electrospray ionization mass spectrometry (ESI-MS). The PPy film acted as a surface-attached redox buffer limiting the interfacial potential of the emitter electrode. While extensive oxidation of selected analytes (reserpine and amodiaquine) was observed in positive ion mode ESI using a bare metal (gold) emitter electrode, the oxidation was suppressed for these same analytes when using the PPy-coated electrode. A semi-quantitative relationship between the rate of oxidation observed and the interfacial potential of the emitter electrode was shown. The redox buffer capacity, and therefore the lifetime of the redox buffering effect, correlated with the oxidation potential of the analyte and with the magnitude of the film charge capacity. Online reduction of the PPy polymer layer using negative ion mode ESI between analyte injections was shown to successfully restore the redox buffering capacity of the polymer film to its initial state.
Fischer, David J.; Hulvey, Matthew K.; Regel, Anne R.; Lunte, Susan M.
2012-01-01
The fabrication and evaluation of different electrode materials and electrode alignments for microchip electrophoresis with electrochemical (EC) detection is described. The influences of electrode material, both metal and carbon-based, on sensitivity and limits of detection (LOD) were examined. In addition, the effects of working electrode alignment on analytical performance (in terms of peak shape, resolution, sensitivity, and LOD) were directly compared. Using dopamine (DA), norepinephrine (NE), and catechol (CAT) as test analytes, it was found that pyrolyzed photoresist electrodes with end-channel alignment yielded the lowest limit of detection (35 nM for DA). In addition to being easier to implement, end-channel alignment also offered better analytical performance than off-channel alignment for the detection of all three analytes. In-channel electrode alignment resulted in a 3.6-fold reduction in peak skew and reduced peak tailing by a factor of 2.1 for catechol in comparison to end-channel alignment. PMID:19802847
Temporal processing of speech in a time-feature space
NASA Astrophysics Data System (ADS)
Avendano, Carlos
1997-09-01
The performance of speech communication systems often degrades under realistic environmental conditions. Adverse environmental factors include additive noise sources, room reverberation, and transmission channel distortions. This work studies the processing of speech in the temporal-feature or modulation spectrum domain, aiming for alleviation of the effects of such disturbances. Speech reflects the geometry of the vocal organs, and the linguistically dominant component is in the shape of the vocal tract. At any given point in time, the shape of the vocal tract is reflected in the short-time spectral envelope of the speech signal. The rate of change of the vocal tract shape appears to be important for the identification of linguistic components. This rate of change, or the rate of change of the short-time spectral envelope can be described by the modulation spectrum, i.e. the spectrum of the time trajectories described by the short-time spectral envelope. For a wide range of frequency bands, the modulation spectrum of speech exhibits a maximum at about 4 Hz, the average syllabic rate. Disturbances often have modulation frequency components outside the speech range, and could in principle be attenuated without significantly affecting the range with relevant linguistic information. Early efforts for exploiting the modulation spectrum domain (temporal processing), such as the dynamic cepstrum or the RASTA processing, used ad hoc designed processing and appear to be suboptimal. As a major contribution, in this dissertation we aim for a systematic data-driven design of temporal processing. First we analytically derive and discuss some properties and merits of temporal processing for speech signals. We attempt to formalize the concept and provide a theoretical background which has been lacking in the field. In the experimental part we apply temporal processing to a number of problems including adaptive noise reduction in cellular telephone environments, reduction of reverberation for speech enhancement, and improvements on automatic recognition of speech degraded by linear distortions and reverberation.
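To make the modulation-spectrum idea above concrete, here is a minimal Python sketch that estimates the modulation spectrum of one frequency band of a speech signal. The band edges, filter order, and the roughly 100 Hz envelope rate are illustrative assumptions, not the dissertation's exact pipeline.

```python
# Hedged sketch (not the dissertation's pipeline): estimate the modulation
# spectrum of one frequency band of a speech signal.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def modulation_spectrum(x, fs, band=(1000.0, 2000.0), frame_hop=0.010):
    """Spectrum of the log-envelope trajectory of one spectral band."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfilt(sos, x)))      # band envelope
    hop = int(frame_hop * fs)
    traj = np.log(envelope[::hop] + 1e-12)           # short-time trajectory
    traj -= traj.mean()
    spec = np.abs(np.fft.rfft(traj * np.hanning(traj.size)))
    freqs = np.fft.rfftfreq(traj.size, d=frame_hop)  # modulation freq axis (Hz)
    return freqs, spec
```

For clean speech the returned spectrum typically peaks near 4 Hz, the average syllabic rate; reverberation and channel distortions add energy outside that range, which is what temporal filtering aims to attenuate.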
Enhanced spot preparation for liquid extractive sampling and analysis
Van Berkel, Gary J.; King, Richard C.
2015-09-22
A method for performing surface sampling of an analyte, includes the step of placing the analyte on a stage with a material in molar excess to the analyte, such that analyte-analyte interactions are prevented and the analyte can be solubilized for further analysis. The material can be a matrix material that is mixed with the analyte. The material can be provided on a sample support. The analyte can then be contacted with a solvent to extract the analyte for further processing, such as by electrospray mass spectrometry.
Internal quality control: best practice.
Kinns, Helen; Pitkin, Sarah; Housley, David; Freedman, Danielle B
2013-12-01
There is a wide variation in laboratory practice with regard to implementation and review of internal quality control (IQC). A poor approach can lead to a spectrum of scenarios from validation of incorrect patient results to over investigation of falsely rejected analytical runs. This article will provide a practical approach for the routine clinical biochemistry laboratory to introduce an efficient quality control system that will optimise error detection and reduce the rate of false rejection. Each stage of the IQC system is considered, from selection of IQC material to selection of IQC rules, and finally the appropriate action to follow when a rejection signal has been obtained. The main objective of IQC is to ensure day-to-day consistency of an analytical process and thus help to determine whether patient results are reliable enough to be released. The required quality and assay performance varies between analytes as does the definition of a clinically significant error. Unfortunately many laboratories currently decide what is clinically significant at the troubleshooting stage. Assay-specific IQC systems will reduce the number of inappropriate sample-run rejections compared with the blanket use of one IQC rule. In practice, only three or four different IQC rules are required for the whole of the routine biochemistry repertoire as assays are assigned into groups based on performance. The tools to categorise performance and assign IQC rules based on that performance are presented. Although significant investment of time and education is required prior to implementation, laboratories have shown that such systems achieve considerable reductions in cost and labour.
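As an illustration of the assay-specific rule sets the article recommends, the following hedged Python sketch implements two common control rules in Westgard-style notation and groups assays by performance; the groupings and limits are illustrative assumptions, not the authors' tables.

```python
# Hedged sketch of assay-specific IQC rule checking: each performance
# group gets its own rule set instead of one blanket rule.
import numpy as np

def rule_1_3s(values, mean, sd):
    """Reject if any control result falls outside mean +/- 3 SD."""
    z = (np.asarray(values) - mean) / sd
    return bool(np.any(np.abs(z) > 3))

def rule_2_2s(values, mean, sd):
    """Reject if two consecutive results exceed the same +/- 2 SD limit."""
    z = (np.asarray(values) - mean) / sd
    return any((a > 2 and b > 2) or (a < -2 and b < -2)
               for a, b in zip(z, z[1:]))

RULES_BY_GROUP = {
    "well_performing": [rule_1_3s],          # few rules, fewer false rejections
    "poorly_performing": [rule_1_3s, rule_2_2s],
}

def run_accepted(values, mean, sd, group):
    return not any(rule(values, mean, sd) for rule in RULES_BY_GROUP[group])
```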
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.
2015-04-01
Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improving image quality or reducing radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 to 25% D0. A lesion of fixed size and contrast was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal-performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of the different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low-dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
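A minimal sketch of one CHO evaluation step may help: channelized data from signal-present and signal-absent images yield a Hotelling template, and the AUC of the resulting test statistics summarizes detectability. The channel matrix here is a stand-in; the paper used rotationally symmetric and oriented channels.

```python
# Hedged sketch of a channelized Hotelling observer (CHO) evaluation.
import numpy as np

def cho_auc(imgs_present, imgs_absent, U):
    """imgs_*: (n_samples, n_pixels) arrays; U: (n_pixels, n_channels)."""
    v1, v0 = imgs_present @ U, imgs_absent @ U       # channelized data
    S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))          # pooled channel covariance
    w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))  # Hotelling template
    t1, t0 = v1 @ w, v0 @ w                          # scalar test statistics
    # AUC = P(t1 > t0), Mann-Whitney estimate over all pairs
    return float((t1[:, None] > t0[None, :]).mean())
```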
Capillary Flow in an Interior Corner
NASA Technical Reports Server (NTRS)
Weislogel, Mark Milton
1996-01-01
The design of fluids management processes in the low-gravity environment of space requires an accurate model and description of capillarity-controlled flow in containers of irregular geometry. Here we consider the capillary rise of a fluid along an interior corner of a container following a rapid reduction in gravity. The analytical portion of the work presents an asymptotic formulation in the limit of a slender fluid column, slight surface curvature along the corner, small inertia, and low gravity. New similarity solutions are found and a list of closed form expressions is provided for flow rate and column length. In particular, it is found that the flow is proportional to t^(1/2) for a constant height boundary condition, t^(2/5) for a spreading drop, and t^(3/5) for constant flow. In the experimental portion of the work, measurements from a 2.2 s drop tower are reported. An extensive data set, collected over a previously unexplored range of flow parameters, includes estimates of repeatability and accuracy, the role of inertia and column slenderness, and the effects of corner angle, container geometry, and fluid properties. Comprehensive comparisons are made which illustrate the applicability of the analytic results to low-g fluid systems design.
Analysis of E-marketplace Attributes: Assessing The NATO Logistics Stock Exchange
2008-01-01
[Flattened table excerpt: NATO Logistics Stock Exchange benefit attributes with mean ratings and standard deviations — e.g., reduction of order processing time 4.27 (0.317); reduction of stock levels 3.87 (0.484); customer satisfaction 4.02 (0.151) — and attribute classifications by category and importance, e.g., reduction of order processing time: Efficiency, Basic/Important; other attributes include reduction of payment processing time, reduction of excessive stocks, reduction of maverick buying, and information exchange with partners in the supply chain.]
REUSABLE PROPULSION ARCHITECTURE FOR SUSTAINABLE LOW-COST ACCESS TO SPACE
NASA Technical Reports Server (NTRS)
Bonometti, Joseph; Frame, Kyle L.; Dankanich, John W.
2005-01-01
Two transportation architecture changes are presented at either end of a conventional two-stage rocket flight: 1) Air launch using a large, conventional, pod hauler design (i.e., Crossbow) and 2) Momentum exchange tether (i.e., an in-space asset like MXER). Air launch has an analytically justified cost reduction of approx. 10%, but its intangible benefits suggest real-world operations cost reductions much higher: 1) Inherent launch safety; 2) Mission risk reduction; 3) Favorable payload/rocket limitations; and 4) Leveraging the aircraft for other uses (military transport, commercial cargo, public outreach activities, etc.)
Hydroplaning on multi lane facilities.
DOT National Transportation Integrated Search
2012-11-01
The primary findings of this research can be highlighted as follows. Models that provide estimates of wet weather speed reduction, as well as analytical and empirical methods for the prediction of hydroplaning speeds of trailers and heavy trucks, wer...
NASA Technical Reports Server (NTRS)
Taylor, R. B.; Zwicke, P. E.; Gold, P.; Miao, W.
1980-01-01
An analytical study was conducted to define the basic configuration of an active control system for helicopter vibration and gust response alleviation. The study culminated in a control system design comprising two separate loops: a narrow band loop for vibration reduction and a wider band loop for gust response alleviation. The narrow band vibration loop utilizes the standard swashplate control configuration to input the control signals. The controller for the vibration loop is based on adaptive optimal control theory and is designed to adapt to any flight condition, including maneuvers and transients. The prime characteristic of the vibration control system is its real time capability. The gust alleviation control system studied consists of optimal sampled data feedback gains together with an optimal one-step-ahead prediction. The prediction permits the estimation of the gust disturbance, which can then be used to minimize the gust effects on the helicopter.
NASA Astrophysics Data System (ADS)
Chen, Dong; Sun, Dihua; Zhao, Min; Zhou, Tong; Cheng, Senlin
2018-07-01
The driving process is a typical cyber-physical process that couples tightly the cyber factor of traffic information with the physical components of the vehicles. Moreover, drivers have situation awareness in the driving process: their behavior is ascribed not only to the current traffic states but also to extrapolation of the changing trend. In this paper, an extended car-following model is proposed to account for drivers' situation awareness. The stability criterion of the proposed model is derived via linear stability analysis. The results show that the stable region of the proposed model is enlarged on the phase diagram compared with previous models. By employing the reductive perturbation method, the modified Korteweg-de Vries (mKdV) equation is obtained. The kink-antikink soliton of the mKdV equation reveals theoretically the evolution of traffic jams. Numerical simulations are conducted to verify the analytical results. Two typical traffic scenarios are investigated. The simulation results demonstrate that drivers' situation awareness plays a key role in traffic flow oscillations and the congestion transition.
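For readers who want to reproduce the flavor of the numerical verification, here is a hedged sketch of a ring-road car-following simulation. It uses the plain optimal velocity (OV) model as a stand-in; the paper's extension adds a situation-awareness term that is not reproduced here.

```python
# Hedged sketch of a ring-road car-following simulation (plain OV model).
import numpy as np

def ov(dx, vmax=2.0, hc=4.0):
    """Optimal velocity as a function of headway dx."""
    return 0.5 * vmax * (np.tanh(dx - hc) + np.tanh(hc))

def simulate(n=100, L=400.0, a=1.0, dt=0.1, steps=10000):
    x = np.linspace(0.0, L, n, endpoint=False)
    x[0] += 0.5                              # small initial perturbation
    v = ov(np.full(n, L / n))                # start at the uniform OV speed
    for _ in range(steps):
        dx = (np.roll(x, -1) - x) % L        # headways on the ring
        v += a * (ov(dx) - v) * dt           # relaxation toward OV
        x = (x + v * dt) % L
    return x, v   # a wide spread in v indicates a stop-and-go jam formed
```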
NASA Astrophysics Data System (ADS)
Behrens, B.-A.; Bouguecha, A.; Vucetic, M.; Grbic, N.
2016-10-01
A structural concept in multi-material design is used in the automotive industry with the aim of achieving significant weight reductions of conventional car bodies. In this respect, the use of aluminum and short fiber reinforced plastics represents an interesting material combination. Wide acceptance of such a material combination requires a suitable joining technique. Among different joining techniques, clinching represents one of the most appealing alternatives for automotive applications. This contribution deals with the FE simulation of the clinching process for two representative materials, PA6GF30 and EN AW 5754, using the FE software LS-DYNA. With regard to the material modelling of the aluminum sheet, an isotropic material model based on von Mises plasticity implemented in LS-DYNA was chosen. Analogous to the aluminum, the same material model was used for modelling the short fiber reinforced thermoplastic. Additionally, a semi-analytical model for polymers (SAMP-1), also available in LS-DYNA, was adopted. Finally, the FE analysis of the clinching process is carried out and a comparison of the simulation results is presented.
Combined mechanical loading of composite tubes
NASA Technical Reports Server (NTRS)
Derstine, Mark S.; Pindera, Marek-Jerzy; Bowles, David E.
1988-01-01
An analytical/experimental investigation was performed to study the effect of material nonlinearities on the response of composite tubes subjected to combined axial and torsional loading. The effect of residual stresses on subsequent mechanical response was included in the investigation. Experiments were performed on P75/934 graphite-epoxy tubes with a stacking sequence of [15/0/±10/0/-15], using pure torsion and combined axial/torsional loading. In the presence of residual stresses, the analytical model predicted a reduction in the initial shear modulus. Experimentally, coupling between axial loading and shear strain was observed in laminated tubes under combined loading. The phenomenon was predicted by the nonlinear analytical model. The experimentally observed linear limit of the global shear response was found to correspond to the analytically predicted first ply failure. Further, the failure of the tubes was found to be path dependent above a critical load level.
NASA Astrophysics Data System (ADS)
Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza
2018-06-01
Trolling mode atomic force microscopy (TR-AFM) has resolved many imaging problems by considerably reducing the liquid-resonator interaction forces in liquid environments. The present study develops a nonlinear model of the meniscus force exerted on the nanoneedle of TR-AFM and presents an analytical solution to the distributed-parameter model of the TR-AFM resonator utilizing the multiple time scales (MTS) method. Based on the developed analytical solution, the frequency-response curves of the resonator operating in air and in liquid (for different penetration lengths of the nanoneedle) are obtained. The closed-form analytical solution and the frequency-response curves are validated by comparison with both the finite element solution of the main partial differential equations and experimental observations. The effect of the excitation angle of the resonator on the horizontal oscillation of the probe tip and the effect of different parameters on the frequency response of the system are investigated.
Achieving Cost Reduction Through Data Analytics.
Rocchio, Betty Jo
2016-10-01
The reimbursement structure of the US health care system is shifting from a volume-based system to a value-based system. Adopting a comprehensive data analytics platform has become important to health care facilities, in part to navigate this shift. Hospitals generate plenty of data, but actionable analytics are necessary to help personnel interpret and apply data to improve practice. Perioperative services is an important revenue-generating department for hospitals, and each perioperative service line requires a tailored approach to be successful in managing outcomes and controlling costs. Perioperative leaders need to prepare to use data analytics to reduce variation in supplies, labor, and overhead. Mercy, based in Chesterfield, Missouri, adopted a perioperative dashboard that helped perioperative leaders collaborate with surgeons and perioperative staff members to organize and analyze health care data, which ultimately resulted in significant cost savings. Copyright © 2016 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Increased instrument intelligence--can it reduce laboratory error?
Jekelis, Albert W
2005-01-01
Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer generation analyzers as compared with older analyzers, and the impact on error reduction. Three generations of immunochemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte on the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and the potential impact of the error detection on the overall process. Analyzer performance stratified according to the level of internal process control. The older analyzers without bubble detection reported 23 erroneous results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to deal appropriately with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do so without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
NASA Astrophysics Data System (ADS)
Blanks, J. K.; Hintz, C. J.; Chandler, G. T.; Shaw, T. J.; McCorkle, D. C.; Bernhard, J. M.
2007-12-01
Mg/Ca and Sr/Ca were analyzed from core-top individual Hoeglundina elegans aragonitic tests collected from three continental slope depths within the South Carolina and Little Bahama Bank continental slope environs (220 m to 1084 m). Our study utilized only individuals that labeled with the vital probe CellTracker Green, unlike bulk core-top material often stained with Rose Bengal, which has known inconsistencies in distinguishing live from dead foraminifera. DSr × 10 values were consistently 1.74 ± 0.23 across all sampling depths. The analytical error in DSr values (0.7%) determined by ICP-MS between repeated measurements on individual H. elegans tests across all depths was less than the analytical error on repeated measurements of standards. Variation in DSr values was not fully explained by a linear temperature relationship (p = 0.0003, R2 = 0.44) over the temperature range of 4.9-11.4 °C, with a sensitivity of 59.8 μmol/mol per 1 °C. The standard error from regressing DSr against temperature yields ±3.4 °C, which is nearly 3× greater than that reported in previous studies. Sr/Ca was more sensitive for calibrating temperature than Mg/Ca in H. elegans. Observed scatter in DSr was too great across individuals of the same size and of different sizes to resolve ontogenetic effects. However, higher DSr values were associated with smaller individuals and warmer/shallower sampling depths. The highest DSr values were observed at the intermediate sampling depth (~600 m). No significant ontogenetic relationship was found across DSr values in different sized individuals due to the tighter overall constrained variance; however, lower DSr values were observed for several smaller individuals. Several dead tests of H. elegans showed no significant differences in DSr values compared to live specimens cleaned by standard cleaning methods, unlike the higher dead than live DMg values observed for the same individuals. There were no significant deviations in DSr across batches cleaned on separate days, unlike the observed sensitivity of DMg across batches. A subset of samples was reductively cleaned (hydrazine solution) and exhibited DMg values within analytical precision of those observed for non-reductively cleaned samples. Therefore, deviations in DMg values resulting from the removal of the reductive cleaning step did not explain analytical errors greater than published values for Mg/Ca or the high variance across same-sized individuals. Variation in DMg values across the same cleaning methods and from dead individuals suggests the need for a careful look into how foraminiferal aragonite should be processed. These findings provide evidence that both Mg and Sr in benthic foraminiferal aragonite reflect factors in addition to temperature and pressure that may interfere with absolute temperature calibrations. Funded by NSF OCE 0351029, OCE 0437366, and OCE-0350794.
Analytic Steering: Inserting Context into the Information Dialog
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bohn, Shawn J.; Calapristi, Augustin J.; Brown, Shyretha D.
2011-10-23
An analyst's intrinsic domain knowledge is a primary asset in almost any analysis task. Unstructured text analysis systems that apply unsupervised content analysis approaches can be more effective if they can leverage this domain knowledge in a manner that augments the information discovery process without obfuscating new or unexpected content. Current unsupervised approaches rely upon the prowess of the analyst to submit the right queries or observe generalized document and term relationships from ranked or visual results. We propose a new approach which allows the user to control or steer the analytic view within the unsupervised space. This process is controlled through the data characterization process via user-supplied context in the form of a collection of key terms. We show that steering with an appropriate choice of key terms can provide better relevance to the analytic domain and still enable the analyst to uncover unexpected relationships; this paper discusses cases where various analytic steering approaches can provide enhanced analysis results and cases where analytic steering can have a negative impact on the analysis process.
Tiopronin Gold Nanoparticle Precursor Forms Aurophilic Ring Tetramer
Simpson, Carrie A.; Farrow, Christopher L.; Tian, Peng; Billinge, Simon J.L.; Huffman, Brian J.; Harkness, Kellen M.; Cliffel, David E.
2010-01-01
In the two-step synthesis of thiolate-monolayer protected clusters (MPCs), the first step of the reaction is a mild reduction of gold(III) by thiols that generates gold(I) thiolate complexes as intermediates. Using tiopronin (Tio) as the thiol reductant, the characterization of the intermediate Au4Tio4 complex was accomplished with various analytical and structural techniques. Nuclear magnetic resonance (NMR), elemental analysis, thermogravimetric analysis (TGA), and matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) were all consistent with a cyclic gold(I)-thiol tetramer structure, and final structural analysis was obtained through the use of powder diffraction and pair distribution functions (PDF). Crystallographic data have proved difficult to obtain for almost all previously reported gold(I)-thiolate complexes. Herein, a novel characterization technique, combined with standard analytical assessment, proved invaluable for elucidating the structure of these complexes without crystallographic data; used in conjunction with other analytical techniques, in particular mass spectrometry, it can elucidate a structure when crystallographic data are unavailable. In addition, luminescent properties provided evidence of aurophilicity within the molecule. The concept of aurophilicity has been introduced to describe a select group of gold-thiolate structures which possess unique characteristics, mainly red photoluminescence and a distinct Au-Au intramolecular distance indicating a weak metal-metal bond, as also evidenced by the structural model of the tetramer. Significant features of both the tetrameric and aurophilic properties of the intermediate gold(I) tiopronin complex are retained after borohydride reduction to form the MPC, including gold(I) tiopronin partial rings as capping motifs, or "staples", and weak red photoluminescence that extends into the near-infrared region. PMID:21067183
Clustering in analytical chemistry.
Drab, Klaudia; Daszykowski, Michal
2014-01-01
Data clustering plays an important role in the exploratory analysis of analytical data, and the use of clustering methods has been acknowledged in different fields of science. In this paper, principles of data clustering are presented with a direct focus on clustering of analytical data. The role of the clustering process in the analytical workflow is underlined, and its potential impact on that workflow is emphasized.
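As a concrete illustration of clustering applied to analytical data, the following sketch autoscales a samples-by-variables matrix and applies Ward hierarchical clustering; the synthetic two-group data and the linkage choice are illustrative assumptions.

```python
# Hedged sketch: hierarchical clustering of analytical measurements
# (samples x variables, e.g., spectra or elemental profiles).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 8)),    # simulated sample group 1
               rng.normal(3, 1, (20, 8))])   # simulated sample group 2
X = (X - X.mean(axis=0)) / X.std(axis=0)     # autoscale each variable
Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut dendrogram at 2 clusters
```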
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
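To illustrate the MapReduce pattern the framework builds on, here is a hedged, single-machine sketch that grids scattered observations and computes per-cell means; the HBase storage and cloud deployment described in the abstract are omitted.

```python
# Hedged sketch of the MapReduce pattern: map emits (key, value) pairs,
# reduce aggregates per key. Gridding observations is a stand-in workload.
from collections import defaultdict

def map_step(records):                       # records: (lat, lon, value)
    for lat, lon, value in records:
        yield (round(lat), round(lon)), value    # 1-degree cell as the key

def reduce_step(pairs):
    acc = defaultdict(lambda: [0.0, 0])
    for cell, value in pairs:
        acc[cell][0] += value
        acc[cell][1] += 1
    return {cell: s / n for cell, (s, n) in acc.items()}

obs = [(10.2, 20.7, 1.5), (10.4, 20.6, 2.5), (-5.0, 3.3, 0.7)]
cell_means = reduce_step(map_step(obs))      # {(10, 21): 2.0, (-5, 3): 0.7}
```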
Ecotoxicological evaluation of diesel-contaminated soil before and after a bioremediation process.
Molina-Barahona, L; Vega-Loyo, L; Guerrero, M; Ramírez, S; Romero, I; Vega-Jarquín, C; Albores, A
2005-02-01
Evaluation of contaminated sites is usually performed by chemical analysis of pollutants in soil. This is not enough either to evaluate the environmental risk of contaminated soil or to evaluate the efficiency of soil cleanup techniques. Information on the bioavailability of complex mixtures of xenobiotics and degradation products cannot be fully provided by chemical analytical data, but results from bioassays can integrate the effects of pollutants in complex mixtures. To preserve human health and environmental quality, it is important to assess the ecotoxicological effects of contaminated soils and thereby obtain a better evaluation of the healthiness of the system. The monitoring of a diesel-contaminated soil and the evaluation of a bioremediation technique conducted on a microcosm scale were performed with a battery of ecotoxicological tests including phytotoxicity, Daphnia magna, and nematode assays. In this study we biostimulated the native microflora of soil contaminated with diesel by adding nutrients and crop residue (corn straw) as a bulking agent and as a source of microorganisms and nutrients; in addition, moisture was adjusted to enhance diesel removal. The bioremediation process efficiency was evaluated directly by an innovative, simple phytotoxicity test system, and the diesel extracts were evaluated by Daphnia magna and nematode assays. Contaminated soil samples were revealed to have toxic effects on seed germination, seedling growth, and Daphnia survival. After biostimulation, the diesel concentration was reduced by 50.6%, and the soil samples showed a significant reduction in phytotoxicity (9%-15%) and in Daphnia toxicity (3-fold), confirming the effectiveness of the bioremediation process. Results from our microcosm study suggest that, in addition to evaluating bioremediation process efficiency, toxicity testing responses differ among organisms representative of diverse phylogenetic levels. The integration of analytical, toxicological, and bioremediation data is necessary to properly assess the ecological risk of bioremediation processes. (c) 2005 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Ziv, Yair
2015-01-01
We first demonstrated analytic processing in ASD under conditions in which integral processing seems mandatory in TD observers, a pattern that is often taken to indicate a local default processing in ASD. However, this processing bias does not inevitably come at the price of impaired integration skills. Indeed, examining the same group of…
Predictive Analytics to Support Real-Time Management in Pathology Facilities.
Lessard, Lysanne; Michalowski, Wojtek; Chen Li, Wei; Amyot, Daniel; Halwani, Fawaz; Banerjee, Diponkar
2016-01-01
Predictive analytics can provide valuable support to the effective management of pathology facilities. The introduction of new tests and technologies in anatomical pathology will increase the volume of specimens to be processed, as well as the complexity of pathology processes. In order for predictive analytics to address managerial challenges associated with the volume and complexity increases, it is important to pinpoint the areas where pathology managers would most benefit from predictive capabilities. We illustrate common issues in managing pathology facilities with an analysis of the surgical specimen process at the Department of Pathology and Laboratory Medicine (DPLM) at The Ottawa Hospital, which processes all surgical specimens for the Eastern Ontario Regional Laboratory Association. We then show how predictive analytics could be used to support management. Our proposed approach can be generalized beyond the DPLM, contributing to a more effective management of pathology facilities and in turn to quicker clinical diagnoses.
A Fuzzy-Based Decision Support Model for Selecting the Best Dialyser Flux in Haemodialysis.
Oztürk, Necla; Tozan, Hakan
2015-01-01
Decision making is an important procedure for every organization. The procedure is particularly challenging for complicated multi-criteria problems. Selection of dialyser flux is one of the decisions routinely made for haemodialysis treatment provided to chronic kidney failure patients. This study provides a decision support model for selecting the best dialyser flux between high-flux and low-flux dialyser alternatives. The preferences of decision makers were collected via a questionnaire. A total of 45 questionnaires filled in by dialysis physicians and nephrologists were assessed. A hybrid fuzzy-based decision support software package that enables the use of the Analytic Hierarchy Process (AHP), Fuzzy Analytic Hierarchy Process (FAHP), Analytic Network Process (ANP), and Fuzzy Analytic Network Process (FANP) was used to evaluate the flux selection model. In conclusion, the results showed that a high-flux dialyser is the best option for haemodialysis treatment.
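For orientation, the crisp AHP computation underlying the fuzzy variants compared in the study can be sketched as follows; the pairwise-comparison matrix is an illustrative example, not the questionnaire data.

```python
# Hedged sketch of classical AHP: the principal eigenvector of a
# pairwise-comparison matrix gives criterion weights, and a consistency
# ratio (CR) checks the judgments.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random index

def ahp_weights(A):
    vals, vecs = np.linalg.eig(A)
    k = int(np.argmax(vals.real))
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                               # normalized priority vector
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)          # consistency index
    cr = ci / RI[n] if RI[n] else 0.0
    return w, cr

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)   # CR < 0.1 is the usual acceptability bound
```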
NASA Astrophysics Data System (ADS)
Silver, W. L.; Yang, W. H.
2013-12-01
Understanding of the terrestrial nitrogen (N) cycle has grown over the last decade to include a variety of pathways that have the potential either to retain N in the ecosystem or to result in losses to the atmosphere or groundwater. Early work has described the mechanics of these N transformations, but the relevance of these processes to ecosystem, regional, or global scale N cycling has not been well quantified. In this study, we review advances in our understanding of the terrestrial N cycle, and focus on three pathways with particular relevance to N retention and loss: dissimilatory nitrate and nitrite reduction to ammonium (DNRA), anaerobic ammonium oxidation (anammox), and anaerobic ammonium oxidation coupled to iron reduction (Feammox). We discuss the role of these processes in the microbial N economy (sensu Burgin et al. 2011) of the terrestrial N cycle, the environmental and ecological constraints, and relationships with other key biogeochemical cycles. We also discuss recent advances in analytical approaches that have improved our ability to detect these and related N fluxes in terrestrial ecosystems. Finally, we present a scaling exercise that identifies the potential importance of these pathways for N retention and loss across a range of spatial and temporal scales, and discuss their significance in terms of N limitation to net primary productivity, N leaching to groundwater, and the release of reactive N gases to the atmosphere.
NASA Astrophysics Data System (ADS)
Larionov, K. B.; Mishakov, I. V.; Gromov, A. A.; Zenkov, A. V.
2017-11-01
The oxidation of brown coal with a 5 wt% content of copper-salt additives of various natures (Cu(NO3)2, CuSO4, and Cu(CH3COO)2) was studied. The experiments were performed by thermogravimetric analysis at a heating rate of 2.5 °C/min up to a maximum temperature of 600 °C in air. An analytical evaluation of the kinetic characteristics of the oxidation process was conducted based on the results of TGA. It was established that the addition of initiating agents leads to a significant reduction in the initial ignition temperature of the coal (ΔTi = 15-40 °C), a shortening of the sample warm-up time to the ignition point (Δte = 6-12 min), and a reduction of the sample burning time (Δtf = 40-54 min). The following series of additive activity affecting the ignition temperature of the coals was established: Cu(CH3COO)2 > Cu(NO3)2 > CuSO4. The opposite ordering applies to the effect of the additives on the residence time of the sample in the combustion area (CuSO4 > Cu(NO3)2 > Cu(CH3COO)2). According to mass spectrometric analysis, NOx, SO2, and CO2 (intense peaks at 190-290 °C) were present in the oxidation products of the modified samples, which is explained by the partial or complete decomposition of the salts.
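The kind of analytical kinetic evaluation mentioned above can be illustrated with the Coats-Redfern method under a first-order model assumption; whether this matches the authors' exact scheme is an assumption, and the arrays below stand in for digitized TG curves.

```python
# Hedged sketch of the Coats-Redfern treatment of TGA data with a
# first-order model: ln[-ln(1-a)/T^2] versus 1/T is linear, slope -Ea/R.
import numpy as np

R = 8.314  # J/(mol K)

def coats_redfern_ea(T, alpha):
    """T in kelvin; alpha is the conversion (mass-loss fraction)."""
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, _ = np.polyfit(1.0 / T, y, 1)
    return -slope * R           # apparent activation energy, J/mol

T = np.linspace(460.0, 560.0, 20)                 # K, near the ignition region
alpha = np.clip((T - 455.0) / 120.0, 1e-3, 0.99)  # placeholder conversion curve
Ea = coats_redfern_ea(T, alpha)
```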
A multiscale filter for noise reduction of low-dose cone beam projections
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Farr, Jonathan B.
2015-08-01
The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector element. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object in the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x^2/(2σ_f^2)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression for σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of the residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of the residual noise, the optimal σ_f^2 is proved to be proportional to the noiseless fluence and modulated by the local structure strength, expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was higher than that scanned with 16 ms by about 64% on average. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection of 1024 × 768 pixels.
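A hedged one-dimensional sketch of the core idea follows: the Gaussian scale tracks σ_f^2 ∝ fluence, damped where a local linear fitting error signals structure. The constants, the small filter bank, and the 1D simplification are illustrative assumptions, not the paper's implementation.

```python
# Hedged 1D sketch of structure-adaptive multiscale Gaussian smoothing.
import numpy as np
from scipy.ndimage import gaussian_filter1d, uniform_filter1d

def linear_fit_error(row, w=9):
    """RMS residual of a running local linear fit along the row."""
    x = np.arange(row.size, dtype=float)
    mx, my = uniform_filter1d(x, w), uniform_filter1d(row, w)
    cov = uniform_filter1d(x * row, w) - mx * my
    var = uniform_filter1d(x * x, w) - mx * mx
    slope = cov / np.maximum(var, 1e-12)
    resid2 = uniform_filter1d(row**2, w) - my**2 - slope * cov
    return np.sqrt(np.maximum(resid2, 0.0))

def adaptive_smooth(row, k=0.05):
    err = linear_fit_error(row)
    sigma = np.sqrt(k * np.maximum(row, 0.0) / (1.0 + err))  # per-pixel scale
    bank = {s: gaussian_filter1d(row, s) for s in (0.5, 1.0, 2.0, 4.0)}
    scales = np.array(sorted(bank))
    pick = scales[np.abs(scales[None, :] - sigma[:, None]).argmin(axis=1)]
    return np.array([bank[s][i] for i, s in enumerate(pick)])
```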
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singledecker, Steven J.; Jones, Scotty W.; Dorries, Alison M.
2012-07-01
In the coming fiscal years of potentially declining budgets, Department of Energy facilities such as the Los Alamos National Laboratory (LANL) will be looking to reduce the cost of radioactive waste characterization, management, and disposal processes. At the core of this cost reduction process will be choosing the most cost effective, efficient, and accurate methods of radioactive waste characterization. Central to every radioactive waste management program is an effective and accurate waste characterization program. Choosing between methods can determine what is classified as low level radioactive waste (LLRW), transuranic waste (TRU), waste that can be disposed of under an Authorized Release Limit (ARL), industrial waste, and waste that can be disposed of in municipal landfills. The cost benefits of an accurate radioactive waste characterization program cannot be overstated. In addition, inaccurate characterization of radioactive waste can result in the incorrect classification of waste, leading to higher disposal costs, Department of Transportation (DOT) violations, Notices of Violation (NOVs) from Federal and State regulatory agencies, waste rejection from disposal facilities, loss of operational capabilities, and loss of disposal options. Any one of these events could result in the program that mischaracterized the waste losing its ability to perform its primary operational mission. Generators that produce radioactive waste have four characterization strategies at their disposal: - Acceptable Knowledge/Process Knowledge (AK/PK); - Indirect characterization using a software application or other dose-to-curie methodologies; - Non-Destructive Analysis (NDA) tools such as gamma spectroscopy; - Direct sampling (e.g. grab samples or Surface Contaminated Object smears) and laboratory analytical. Each method has specific advantages and disadvantages. This paper will evaluate each method, detailing those advantages and disadvantages, including: - Cost benefit analysis (basic materials costs, overall program operations costs, man-hours per sample analyzed, etc.); - Radiation exposure As Low As Reasonably Achievable (ALARA) program considerations; - Industrial health and safety risks; - Overall analytical confidence level. The concepts in this paper apply to any organization with significant radioactive waste characterization and management activities working within budget constraints and seeking to optimize their waste characterization strategies while reducing analytical costs. (authors)
Performing a local reduction operation on a parallel computer
Blocksome, Michael A; Faraj, Daniel A
2013-06-04
A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
Performing a local reduction operation on a parallel computer
Blocksome, Michael A.; Faraj, Daniel A.
2012-12-11
A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
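The data movement claimed in these two patents can be sketched as follows; this single-threaded Python illustration only shows the interleaved copies and the chunk-ownership pattern, whereas the patented scheme runs the four cores concurrently. Buffer sizes divisible by the chunk size are assumed.

```python
# Hedged sketch: two "reduction cores" copy their buffers into shared
# memory in alternating chunks, the network cores' buffers are copied
# alongside, and the local reduction is an element-wise sum.
import numpy as np

CHUNK = 4

def interleave(a, b, chunk=CHUNK):
    """Copy a and b into one shared buffer in alternating chunks."""
    out = np.empty(a.size + b.size, a.dtype)
    for i in range(0, a.size, chunk):
        out[2 * i : 2 * i + chunk] = a[i : i + chunk]              # core 0's chunk
        out[2 * i + chunk : 2 * i + 2 * chunk] = b[i : i + chunk]  # core 1's chunk
    return out

def local_reduce(core0, core1, net_write, net_read, chunk=CHUNK):
    shared = interleave(core0, core1)
    chunks = shared.reshape(-1, chunk)
    # "every other interleaved chunk": core 0 owns even chunks, core 1 odd
    own0, own1 = chunks[0::2].ravel(), chunks[1::2].ravel()
    return own0 + own1 + net_write + net_read   # element-wise local sum
```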
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
The MSCA Program: Developing Analytic Unicorns
ERIC Educational Resources Information Center
Houghton, David M.; Schertzer, Clint; Beck, Scott
2018-01-01
Marketing analytics students who can communicate effectively with decision makers are in high demand. These "analytic unicorns" are hard to find. The Master of Science in Customer Analytics (MSCA) degree program at Xavier University seeks to fill that need. In this paper, we discuss the process of creating the MSCA program. We outline…
Reduction of adverse aerodynamic effects of large trucks, Volume I. Technical report
DOT National Transportation Integrated Search
1978-09-01
The overall objective of this study has been to develop methods of minimizing three aerodynamic-related phenomena: truck-induced aerodynamic disturbances, splash, and spray. An analytical methodology has been developed and used to characterize aerody...
SERS activity studies of Ag/Au bimetallic films prepared by galvanic replacement
NASA Astrophysics Data System (ADS)
Wang, Chaonan; Fang, Jinghuai; Jin, Yonglong
2012-10-01
Ag films on Si substrates were fabricated by immersion plating, which served as sacrificial materials for the preparation of Ag/Au bimetallic films by the galvanic replacement method. SEM images showed that the sacrificial Ag films, presenting an island morphology, underwent an interesting structural evolution during the galvanic replacement reaction, and nano-scaled holes were formed in the resultant bimetallic films. SERS measurements using crystal violet as the analyte showed that the SERS intensities of the bimetallic films were significantly enhanced compared with those of the pure Ag films, and the related mechanisms are discussed. An immersion plating experiment carried out on Ag films on PEN substrates fabricated by the photoinduced reduction method further confirmed that galvanic replacement is an easy method to fabricate Ag/Au bimetallic films and a potential approach to improving the SERS performance of Ag films.
Model reduction by weighted Component Cost Analysis
NASA Technical Reports Server (NTRS)
Kim, Jae H.; Skelton, Robert E.
1990-01-01
Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.
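A minimal sketch of the component-cost computation for a stable linear system driven by white noise: solve a Lyapunov equation for the state covariance and split the quadratic cost into per-component shares. The matrices are illustrative, and the modal-coordinate and colored-noise extensions described above are not shown.

```python
# Hedged sketch of component costs for a stable LTI system under white
# noise: V = E[x'Qx] is split into per-state (per-component) shares.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 0.3],
              [0.0, 0.0, -5.0]])
B = np.eye(3)                 # unit-intensity white noise into each state
Q = np.eye(3)                 # quadratic cost weighting

X = solve_continuous_lyapunov(A, -B @ B.T)   # A X + X A' + B B' = 0
component_costs = np.diag(Q @ X)             # each state's share of E[x'Qx]
# Model reduction: delete the components with the smallest costs.
```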
Generic Transport Mechanisms for Molecular Traffic in Cellular Protrusions
NASA Astrophysics Data System (ADS)
Graf, Isabella R.; Frey, Erwin
2017-03-01
Transport of molecular motors along protein filaments in a half-closed geometry is a common feature of biologically relevant processes in cellular protrusions. Using a lattice-gas model we study how the interplay between active and diffusive transport and mass conservation leads to localized domain walls and tip localization of the motors. We identify a mechanism for task sharing between the active motors (maintaining a gradient) and the diffusive motion (transport to the tip), which ensures that energy consumption is low and motor exchange mostly happens at the tip. These features are attributed to strong nearest-neighbor correlations that lead to a strong reduction of active currents, which we calculate analytically using an exact moment identity, and might prove useful for the understanding of correlations and active transport also in more elaborate systems.
NASA Astrophysics Data System (ADS)
Berezin, Sergey; Zayats, Oleg
2018-01-01
We study a friction-controlled slide of a body excited by random motions of the foundation it is placed on. Specifically, we are interested in such quantities as displacement, traveled distance, and energy loss due to friction. We assume that the random excitation is switched off at some time (possibly infinite) and show that the problem can be treated in an analytic, explicit manner. In particular, we derive formulas for the moments of the displacement and distance, and also for the average energy loss. To accomplish this we use the Pugachev-Sveshnikov equation for the characteristic function of a continuous random process given by a system of SDEs. This equation is solved by reduction to a parametric Riemann boundary value problem of complex analysis.
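For readers who want a numerical cross-check of such formulas, the Monte Carlo sketch below simulates a crude stand-in model: Coulomb friction plus Gaussian white-noise forcing that is switched off at t_off, with energy loss computed as friction force times traveled distance. The dynamics and parameters are illustrative assumptions, not the authors' exact SDE system.

```python
# Monte Carlo estimates of displacement moments and frictional energy loss
# for a dry-friction slider under random foundation forcing.
import numpy as np

rng = np.random.default_rng(1)
m, mu_g, sigma = 1.0, 0.5, 2.0            # mass, friction deceleration, noise strength
dt, T, t_off, n_paths = 1e-3, 10.0, 5.0, 5000
steps = int(T / dt)

v = np.zeros(n_paths)
x = np.zeros(n_paths)
dist = np.zeros(n_paths)                  # traveled distance (for energy loss)

for k in range(steps):
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n_paths) if k * dt < t_off else 0.0
    v += noise - mu_g * np.sign(v) * dt   # Coulomb friction opposes motion
    x += v * dt
    dist += np.abs(v) * dt

energy_loss = m * mu_g * dist             # friction force times traveled distance
print("E[x] =", x.mean(), "E[x^2] =", (x**2).mean(), "mean energy loss =", energy_loss.mean())
```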
Process monitoring and visualization solutions for hot-melt extrusion: a review.
Saerens, Lien; Vervaet, Chris; Remon, Jean Paul; De Beer, Thomas
2014-02-01
Hot-melt extrusion (HME) is applied as a continuous pharmaceutical manufacturing process for the production of a variety of dosage forms and formulations. To ensure the continuity of this process, the quality of the extrudates must be assessed continuously during manufacturing. The objective of this review is to provide an overview and evaluation of the available process analytical techniques which can be applied in hot-melt extrusion. Pharmaceutical extruders are equipped with traditional (univariate) process monitoring tools, observing barrel and die temperatures, throughput, screw speed, torque, drive amperage, melt pressure and melt temperature. The relevance of several spectroscopic process analytical techniques for monitoring and control of pharmaceutical HME has been explored recently. Nevertheless, many other sensors visualizing HME and measuring diverse critical product and process parameters with potential use in pharmaceutical extrusion are available, and were thoroughly studied in polymer extrusion. The implementation of process analytical tools in HME serves two purposes: (1) improving process understanding by monitoring and visualizing the material behaviour and (2) monitoring and analysing critical product and process parameters for process control, making it possible to maintain a desired process state and guaranteeing the quality of the end product. This review is the first to provide an evaluation of the process analytical tools applied for pharmaceutical HME monitoring and control, and discusses techniques that have been used in polymer extrusion having potential for monitoring and control of pharmaceutical HME. © 2013 Royal Pharmaceutical Society.
Calibration and Data Processing in Gas Chromatography Combustion Isotope Ratio Mass Spectrometry
Zhang, Ying; Tobias, Herbert J.; Sacks, Gavin L.; Brenna, J. Thomas
2013-01-01
Compound-specific isotope analysis (CSIA) by gas chromatography combustion isotope ratio mass spectrometry (GCC-IRMS) is a powerful technique for the sourcing of substances, such as determination of the geographic or chemical origin of drugs and food adulteration, and it is especially invaluable as a confirmatory tool for detection of the use of synthetic steroids in competitive sport. We review here principles and practices for data processing and calibration of GCC-IRMS data with consideration to anti-doping analyses, with a focus on carbon isotopic analysis (13C/12C). After a brief review of peak definition, the isotopologue signal reduction methods of summation, curve-fitting, and linear regression are described and reviewed. Principles for isotopic calibration are considered in the context of the Δ13C = δ13CM – δ13CE difference measurements required for establishing adverse analytical findings for metabolites relative to endogenous reference compounds. Considerations for the anti-doping analyst are reviewed. PMID:22362612
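The delta-notation arithmetic underlying such calibrations is compact enough to show directly. A minimal sketch, assuming the commonly cited VPDB 13C/12C reference ratio and invented sample ratios:

```python
# delta13C (per mil) = (R_sample / R_ref - 1) * 1000, with R = 13C/12C, and the
# anti-doping difference measurement Delta13C = delta13C_metabolite - delta13C_ERC.
# Real work calibrates against certified reference materials on the VPDB scale.
R_VPDB = 0.0111802                        # commonly cited 13C/12C ratio of VPDB

def delta13C(r_sample: float, r_ref: float = R_VPDB) -> float:
    """Convert an isotope ratio into per-mil delta notation."""
    return (r_sample / r_ref - 1.0) * 1000.0

d_metabolite = delta13C(0.0108975)        # hypothetical metabolite ratio
d_endogenous = delta13C(0.0109500)        # hypothetical endogenous reference compound
big_delta = d_metabolite - d_endogenous   # adverse finding if beyond the threshold
print(f"Delta13C = {big_delta:.2f} per mil")
```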
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine the steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.
Huff, Mark J; Bodner, Glen E; Fawcett, Jonathan M
2015-04-01
We review and meta-analyze how distinctive encoding alters encoding and retrieval processes and, thus, affects correct and false recognition in the Deese-Roediger-McDermott (DRM) paradigm. Reductions in false recognition following distinctive encoding (e.g., generation), relative to a nondistinctive read-only control condition, reflected both impoverished relational encoding and use of a retrieval-based distinctiveness heuristic. Additional analyses evaluated the costs and benefits of distinctive encoding in within-subjects designs relative to between-group designs. Correct recognition was design independent, but in a within design, distinctive encoding was less effective at reducing false recognition for distinctively encoded lists but more effective for nondistinctively encoded lists. Thus, distinctive encoding is not entirely "cost free" in a within design. In addition to delineating the conditions that modulate the effects of distinctive encoding on recognition accuracy, we discuss the utility of using signal detection indices of memory information and memory monitoring at test to separate encoding and retrieval processes.
NASA Astrophysics Data System (ADS)
Abbate, Agostino; Nayak, A.; Koay, J.; Roy, R. J.; Das, Pankaj K.
1996-03-01
The wavelet transform (WT) has been used to study the nonstationary information in the electroencephalogram (EEG) as an aid in determining anesthetic depth. A complex analytic mother wavelet is utilized to obtain the time evolution of the various spectral components of the EEG signal. The technique is applied to the detection and spectral analysis of transient and background processes in the awake and asleep states. The responses in both states before the application of the stimulus are similar in amplitude but not in spectral content, which suggests a background activity of the brain. The brain reacts to an external stimulus in two different modes depending on the state of consciousness of the subject: in the awake state there is an evident increase in response, while in the sleep state a reduction in this activity is observed. This analysis suggests that the brain has an ongoing background process that monitors external stimuli in both the sleep and awake states.
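A minimal sketch of this kind of analysis, assuming PyWavelets and a synthetic signal standing in for real EEG: a complex Morlet (analytic) mother wavelet yields the time evolution of spectral power, here localizing a 40 Hz burst on a 10 Hz background rhythm.

```python
# Time-frequency analysis with a complex analytic mother wavelet.
import numpy as np
import pywt

fs = 256.0                                    # sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t)              # 10 Hz background rhythm
burst = (t > 2.0) & (t < 2.5)
eeg[burst] += 2 * np.sin(2 * np.pi * 40 * t[burst])  # transient 40 Hz burst
eeg += 0.3 * np.random.default_rng(0).standard_normal(t.size)

scales = np.arange(2, 128)
coef, freqs = pywt.cwt(eeg, scales, 'cmor1.5-1.0', sampling_period=1.0 / fs)

power = np.abs(coef) ** 2                     # |CWT|^2: spectral content vs. time
band = (freqs > 35) & (freqs < 45)
print("40 Hz power peaks near t =", t[power[band].mean(axis=0).argmax()], "s")
```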
Furan in Thermally Processed Foods - A Review
Seok, Yun-Jeong; Her, Jae-Young; Kim, Yong-Gun; Kim, Min Yeop; Jeong, Soo Young; Kim, Mina K.; Lee, Jee-yeon; Kim, Cho-il; Yoon, Hae-Jung
2015-01-01
Furan (C4H4O) is a volatile compound formed mostly during the thermal processing of foods. The toxicity of furan has been well documented previously, and it was classified as “possible human carcinogen (Group 2B)” by the International Agency for Research on Cancer. Various pathways have been reported for the formation of furan, that is, thermal degradation and/or thermal rearrangement of carbohydrates in the presence of amino acids, thermal degradation of certain amino acids, including aspartic acid, threonine, α-alanine, serine, and cysteine, oxidation of ascorbic acid at higher temperatures, and oxidation of polyunsaturated fatty acids and carotenoids. Owing to the complexity of the formation mechanism, a vast number of studies have been published on monitoring furan in commercial food products and on the potential strategies for reducing furan. Thus, we present a comprehensive review on the current status of commercial food monitoring databases and the possible furan reduction methods. Additionally, we review analytical methods for furan detection and the toxicity of furan. PMID:26483883
Extinction in neutrally stable stochastic Lotka-Volterra models
NASA Astrophysics Data System (ADS)
Dobrinevski, Alexander; Frey, Erwin
2012-05-01
Populations of competing biological species exhibit a fascinating interplay between the nonlinear dynamics of evolutionary selection forces and random fluctuations arising from the stochastic nature of the interactions. The processes leading to extinction of species, whose understanding is a key component in the study of evolution and biodiversity, are influenced by both of these factors. Here, we investigate a class of stochastic population dynamics models based on generalized Lotka-Volterra systems. In the case of neutral stability of the underlying deterministic model, the impact of intrinsic noise on the survival of species is dramatic: It destroys coexistence of interacting species on a time scale proportional to the population size. We introduce a new method based on stochastic averaging which allows one to understand this extinction process quantitatively by reduction to a lower-dimensional effective dynamics. This is performed analytically for two highly symmetrical models and can be generalized numerically to more complex situations. The extinction probability distributions and other quantities of interest we obtain show excellent agreement with simulations.
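A toy Gillespie simulation conveys the central result: in a zero-sum cyclic ("rock-paper-scissors") model, intrinsic noise drives species extinct on a time scale that grows with population size N. The rates and sizes below are illustrative, not taken from the paper.

```python
# Gillespie simulation of a neutrally stable, zero-sum cyclic Lotka-Volterra
# model; reports the mean time until the first species goes extinct.
import numpy as np

def extinction_time(N: int, rate: float = 1.0, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    n = np.array([N // 3, N // 3, N - 2 * (N // 3)], dtype=float)  # species A, B, C
    t = 0.0
    while np.all(n > 0):
        # cyclic dominance: A beats B, B beats C, C beats A
        props = rate * np.array([n[0] * n[1], n[1] * n[2], n[2] * n[0]]) / N
        total = props.sum()
        t += rng.exponential(1.0 / total)
        r = rng.choice(3, p=props / total)
        n[r] += 1                    # winner replicates
        n[(r + 1) % 3] -= 1          # loser is replaced (total N conserved)
    return t

for N in (30, 100, 300):
    times = [extinction_time(N, seed=s) for s in range(10)]
    print(N, np.mean(times))
```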
AERIAL SHOWING COMPLETED REMOTE ANALYTICAL FACILITY (CPP627) ADJOINING FUEL PROCESSING ...
AERIAL SHOWING COMPLETED REMOTE ANALYTICAL FACILITY (CPP-627) ADJOINING FUEL PROCESSING BUILDING AND EXCAVATION FOR HOT PILOT PLANT TO RIGHT (CPP-640). INL PHOTO NUMBER NRTS-60-1221. J. Anderson, Photographer, 3/22/1960 - Idaho National Engineering Laboratory, Idaho Chemical Processing Plant, Fuel Reprocessing Complex, Scoville, Butte County, ID
40 CFR 63.145 - Process wastewater provisions-test methods and procedures to determine compliance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 CFR, Protection of Environment, Vol. 9 (2011-07-01): ... Operations, and Wastewater, § 63.145 Process wastewater provisions—test methods and procedures to determine compliance ... analytical method for wastewater which has that compound as a target analyte. (7) Treatment using a series of...
40 CFR 63.145 - Process wastewater provisions-test methods and procedures to determine compliance.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 CFR, Protection of Environment, Vol. 10 (2013-07-01): ... Operations, and Wastewater, § 63.145 Process wastewater provisions—test methods and procedures to determine compliance ... analytical method for wastewater which has that compound as a target analyte. (7) Treatment using a series of...
Sigma Metrics Across the Total Testing Process.
Charuruks, Navapun
2017-03-01
Laboratory quality control has developed over several decades to ensure patient safety, expanding from a statistical quality control focus on the analytical phase to the total laboratory process. The sigma concept provides a convenient way to quantify the number of errors in the extra-analytical and analytical phases through the defects-per-million count and the sigma-metric equation. Participation in a sigma verification program can be a convenient way to monitor analytical performance for continuous quality improvement. Our data demonstrate improvement of sigma-scale performance. New tools and techniques for integration are needed. Copyright © 2016 Elsevier Inc. All rights reserved.
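The sigma-metric equation referred to above is simple enough to compute directly. A minimal sketch, with invented allowable total error (TEa), bias, and CV values, and a one-sided normal-tail conversion to defects per million:

```python
# sigma = (TEa - |bias|) / CV, with all terms in percent.
from scipy.stats import norm

def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    return (tea_pct - abs(bias_pct)) / cv_pct

sigma = sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=2.0)   # -> 4.25 sigma
dpm = norm.sf(sigma) * 1e6             # one-sided defects per million (no 1.5-sigma shift)
print(f"sigma = {sigma:.2f}, ~{dpm:.0f} defects per million")
```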
Analytical Ultrasonics in Materials Research and Testing
NASA Technical Reports Server (NTRS)
Vary, A.
1986-01-01
Research results in analytical ultrasonics for characterizing structural materials from metals and ceramics to composites are presented. General topics covered by the conference included: status and advances in analytical ultrasonics for characterizing material microstructures and mechanical properties; status and prospects for ultrasonic measurements of microdamage, degradation, and underlying morphological factors; status and problems in precision measurements of frequency-dependent velocity and attenuation for materials analysis; procedures and requirements for automated, digital signal acquisition, processing, analysis, and interpretation; incentives for analytical ultrasonics in materials research and materials processing, testing, and inspection; and examples of progress in ultrasonics for interrelating microstructure, mechanical properties, and dynamic response.
NASA Astrophysics Data System (ADS)
Falta, R. W.
2004-05-01
Analytical solutions are developed that relate changes in the contaminant mass in a source area to the behavior of biologically reactive dissolved contaminant groundwater plumes. Based on data from field experiments, laboratory experiments, numerical streamtube models, and numerical multiphase flow models, the chemical discharge from a source region is assumed to be a nonlinear power function of the fraction of contaminant mass removed from the source zone. This function can approximately represent source zone mass discharge behavior over a wide range of site conditions ranging from simple homogeneous systems, to complex heterogeneous systems. A mass balance on the source zone with advective transport and first order decay leads to a nonlinear differential equation that is solved analytically to provide a prediction of the time-dependent contaminant mass discharge leaving the source zone. The solution for source zone mass discharge is coupled semi-analytically with a modified version of the Domenico (1987) analytical solution for three-dimensional reactive advective and dispersive transport in groundwater. The semi-analytical model then employs the BIOCHLOR (Aziz et al., 2000; Sun et al., 1999) transformations to model sequential first order parent-daughter biological decay reactions of chlorinated ethenes and ethanes in the groundwater plume. The resulting semi-analytic model thus allows for transient simulation of complex source zone behavior that is fully coupled to a dissolved contaminant plume undergoing sequential biological reactions. Analyses of several realistic scenarios show that substantial changes in the ground water plume can result from the partial removal of contaminant mass from the source zone. These results, however, are sensitive to the nature of the source mass reduction-source discharge reduction curve, and to the rates of degradation of the primary contaminant and its daughter products in the ground water plume. References: Aziz, C.E., C.J. Newell, J.R. Gonzales, P. Haas, T.P. Clement, and Y. Sun, 2000, BIOCHLOR Natural Attenuation Decision Support System User's Manual Version 1.0, US EPA Report EPA/600/R-00/008. Domenico, P.A., 1987, An analytical model for multidimensional transport of a decaying contaminant species, J. Hydrol., 91: 49-58. Sun, Y., J.N. Petersen, T.P. Clement, and R.S. Skeen, 1999, A new analytical solution for multi-species transport equations with serial and parallel reactions, Water Resour. Res., 35(1): 185-190.
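A numerical sketch of the source-zone mass balance described above, assuming invented parameter values: discharge is taken proportional to (M/M0)^Gamma, and the balance dM/dt = -Q*C0*(M/M0)^Gamma - lambda*M is integrated. The paper treats this analytically and couples it to a Domenico-type plume solution; this sketch only reproduces the source-depletion step.

```python
# Power-function source-depletion model with first-order source decay.
import numpy as np
from scipy.integrate import solve_ivp

M0, C0 = 100.0, 2.0                 # initial source mass (kg), source conc. (kg/m^3)
Q, gamma, lam = 0.5, 1.5, 0.01      # water flux (m^3/d), power exponent, decay (1/d)

def dMdt(t, M):
    M = max(M[0], 0.0)
    return [-Q * C0 * (M / M0) ** gamma - lam * M]

sol = solve_ivp(dMdt, (0.0, 5000.0), [M0], dense_output=True, max_step=10.0)
t = np.linspace(0, 5000, 6)
mass = sol.sol(t)[0].clip(min=0.0)
discharge = Q * C0 * (mass / M0) ** gamma   # mass discharge leaving the source
print(np.c_[t, mass, discharge])
```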
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drouot, T.; Gravier, E.; Reveille, T.
This paper presents a study of zonal flows generated by trapped-electron-mode and trapped-ion-mode microturbulence as a function of two plasma parameters: banana width and electron temperature. For this purpose, a gyrokinetic code considering only trapped particles is used. First, an analytical equation giving the predicted level of zonal flows is derived from the quasi-neutrality equation of our model, as a function of the density fluctuation levels and the banana widths. Then, the influence of the banana width on the number of zonal flows occurring in the system is studied using the gyrokinetic code. Finally, the impact of the temperature ratio Te/Ti on the reduction of zonal flows is shown, and a close link is highlighted between this reduction and the different gyro- and bounce-averaged ion and electron density fluctuation levels. The reduction is found to be due to the amplitudes of the gyro- and bounce-averaged density perturbations ne and ni gradually becoming closer, which is in agreement with the analytical results given by the quasi-neutrality equation.
Coupling of Helmholtz resonators to improve acoustic liners for turbofan engines at low frequency
NASA Technical Reports Server (NTRS)
Dean, L. W.
1975-01-01
An analytical and test program was conducted to evaluate means for increasing the effectiveness of low frequency sound absorbing liners for aircraft turbine engines. Three schemes for coupling low frequency absorber elements were considered. These schemes were analytically modeled and their impedance was predicted over a frequency range of 50 to 1,000 Hz. An optimum and two off-optimum designs of the most promising, a parallel coupled scheme, were fabricated and tested in a flow duct facility. Impedance measurements were in good agreement with predicted values and validated the procedure used to transform modeled parameters to hardware designs. Measurements of attenuation for panels of coupled resonators were consistent with predictions based on measured impedance. All coupled resonator panels tested showed an increase in peak attenuation of about 50% and an increase in attenuation bandwidth of one one-third octave band over that measured for an uncoupled panel. These attenuation characteristics equate to about 35% greater reduction in source perceived noise level (PNL), relative to the uncoupled panel, or a reduction in treatment length of about 24% for constant PNL reduction. The increased effectiveness of the coupled resonator concept for attenuation of low frequency broad spectrum noise is demonstrated.
Cunha, Bárbara; Aguiar, Tiago; Carvalho, Sofia B; Silva, Marta M; Gomes, Ricardo A; Carrondo, Manuel J T; Gomes-Alves, Patrícia; Peixoto, Cristina; Serra, Margarida; Alves, Paula M
2017-04-20
To deliver the required cell numbers and doses to therapy, scaling up production and purification processes (at least to the liter scale) while maintaining cells' characteristics is compulsory. Therefore, the aim of this work was to prove the scalability of an integrated streamlined bioprocess compatible with current good manufacturing practices (cGMP), comprising cell expansion, harvesting and volume-reduction unit operations, using human mesenchymal stem cells (hMSC) isolated from bone marrow (BM-MSC) and adipose tissue (AT-MSC). BM-MSC and AT-MSC expansion and harvesting steps were scaled up from spinner flasks to a 2-L stirred-tank single-use bioreactor using synthetic microcarriers and xeno-free medium, ensuring high cellular volumetric productivities (50×10^6 cells L^-1 day^-1), expansion factors (14-16 fold) and cell recovery yields (80%). For the concentration step, flat sheet cassettes (FSC) and hollow fiber cartridges (HF) were compared, showing a fairly linear scale-up, with a need to slightly decrease the permeate flux (30-50 LMH, respectively) to maximize cell recovery yield. Nonetheless, FSC allowed 18% more cells to be recovered after a volume reduction factor of 50. Overall, at the end of the entire bioprocess more than 65% of viable (>95%) hMSC could be recovered without compromising the cells' critical quality attributes (CQA) of viability, identity and differentiation potential. Alongside the standard quality assays, a proteomics workflow based on mass spectrometry tools was established to characterize the impact of processing on hMSC CQA; these analytical tools constitute a powerful resource for process design and development. Copyright © 2017 Elsevier B.V. All rights reserved.
What makes us think? A three-stage dual-process model of analytic engagement.
Pennycook, Gordon; Fugelsang, Jonathan A; Koehler, Derek J
2015-08-01
The distinction between intuitive and analytic thinking is common in psychology. However, while often being quite clear on the characteristics of the two processes ('Type 1' processes are fast, autonomous, intuitive, etc. and 'Type 2' processes are slow, deliberative, analytic, etc.), dual-process theorists have been heavily criticized for being unclear on the factors that determine when an individual will think analytically or rely on their intuition. We address this issue by introducing a three-stage model that elucidates the bottom-up factors that cause individuals to engage Type 2 processing. According to the model, multiple Type 1 processes may be cued by a stimulus (Stage 1), leading to the potential for conflict detection (Stage 2). If successful, conflict detection leads to Type 2 processing (Stage 3), which may take the form of rationalization (i.e., the Type 1 output is verified post hoc) or decoupling (i.e., the Type 1 output is falsified). We tested key aspects of the model using a novel base-rate task where stereotypes and base-rate probabilities cued the same (non-conflict problems) or different (conflict problems) responses about group membership. Our results support two key predictions derived from the model: (1) conflict detection and decoupling are dissociable sources of Type 2 processing and (2) conflict detection sometimes fails. We argue that considering the potential stages of reasoning allows us to distinguish early (conflict detection) and late (decoupling) sources of analytic thought. Errors may occur at both stages and, as a consequence, bias arises from both conflict monitoring and decoupling failures. Copyright © 2015 Elsevier Inc. All rights reserved.
Multi-analyte profiling of inflammatory mediators in COPD sputum--the effects of processing.
Pedersen, Frauke; Holz, Olaf; Lauer, Gereon; Quintini, Gianluca; Kiwull-Schöne, Heidrun; Kirsten, Anne-Marie; Magnussen, Helgo; Rabe, Klaus F; Goldmann, Torsten; Watz, Henrik
2015-02-01
Prior to using a new multi-analyte platform for the detection of markers in sputum it is advisable to assess whether sputum processing, especially mucus homogenization by dithiothreitol (DTT), affects the analysis. In this study we tested a novel Human Inflammation Multi Analyte Profiling® Kit (v1.0 Luminex platform; xMAP®). Induced sputum samples of 20 patients with stable COPD (mean FEV1, 59.2% pred.) were processed in parallel using standard processing (with DTT) and a more time consuming sputum dispersion method with phosphate buffered saline (PBS) only. A panel of 47 markers was analyzed in these sputum supernatants by the xMAP®. Twenty-five of 47 analytes have been detected in COPD sputum. Interestingly, 7 markers have been detected in sputum processed with DTT only, or significantly higher levels were observed following DTT treatment (VDBP, α-2-Macroglobulin, haptoglobin, α-1-antitrypsin, VCAM-1, and fibrinogen). However, standard DTT-processing resulted in lower detectable concentrations of ferritin, TIMP-1, MCP-1, MIP-1β, ICAM-1, and complement C3. The correlation between processing methods for the different markers indicates that DTT processing does not introduce a bias by affecting individual sputum samples differently. In conclusion, our data demonstrates that the Luminex-based xMAP® panel can be used for multi-analyte profiling of COPD sputum using the routinely applied method of sputum processing with DTT. However, researchers need to be aware that the absolute concentration of selected inflammatory markers can be affected by DTT. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Wise, Alyssa Friend; Vytasek, Jovita Maria; Hausknecht, Simone; Zhao, Yuting
2016-01-01
This paper addresses a relatively unexplored area in the field of learning analytics: how analytics are taken up and used as part of teaching and learning processes. Initial steps are taken towards developing design knowledge for this "middle space," with a focus on students as analytics users. First, a core set of challenges for…
Dissociable meta-analytic brain networks contribute to coordinated emotional processing.
Riedel, Michael C; Yanes, Julio A; Ray, Kimberly L; Eickhoff, Simon B; Fox, Peter T; Sutherland, Matthew T; Laird, Angela R
2018-06-01
Meta-analytic techniques for mining the neuroimaging literature continue to exert an impact on our conceptualization of functional brain networks contributing to human emotion and cognition. Traditional theories regarding the neurobiological substrates contributing to affective processing are shifting from regional- towards more network-based heuristic frameworks. To elucidate differential brain network involvement linked to distinct aspects of emotion processing, we applied an emergent meta-analytic clustering approach to the extensive body of affective neuroimaging results archived in the BrainMap database. Specifically, we performed hierarchical clustering on the modeled activation maps from 1,747 experiments in the affective processing domain, resulting in five meta-analytic groupings of experiments demonstrating whole-brain recruitment. Behavioral inference analyses conducted for each of these groupings suggested dissociable networks supporting: (1) visual perception within primary and associative visual cortices, (2) auditory perception within primary auditory cortices, (3) attention to emotionally salient information within insular, anterior cingulate, and subcortical regions, (4) appraisal and prediction of emotional events within medial prefrontal and posterior cingulate cortices, and (5) induction of emotional responses within amygdala and fusiform gyri. These meta-analytic outcomes are consistent with a contemporary psychological model of affective processing in which emotionally salient information from perceived stimuli are integrated with previous experiences to engender a subjective affective response. This study highlights the utility of using emergent meta-analytic methods to inform and extend psychological theories and suggests that emotions are manifest as the eventual consequence of interactions between large-scale brain networks. © 2018 Wiley Periodicals, Inc.
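The clustering step itself is standard and easy to sketch. Below, random stand-in "modeled activation" vectors replace the 1,747 BrainMap experiment maps, and Ward-linkage hierarchical clustering is cut into five groupings; SciPy is assumed, and the data are purely illustrative.

```python
# Hierarchical clustering of experiment-level activation vectors.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.random((60, 500))            # 60 "experiments" x 500 "voxels" (toy data)

Z = linkage(X, method='ward')        # agglomerative hierarchy on the observations
labels = fcluster(Z, t=5, criterion='maxclust')   # cut into 5 meta-analytic groupings
print(np.bincount(labels)[1:])       # number of experiments per grouping
```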
Cardaci, Maurizio; Misuraca, Raffaella
2005-08-01
This paper raises some methodological problems with the dual-process explanation provided by Wada and Nittono for their 2004 results using the Wason selection task. We maintain that the Nittono rethinking approach is weak and that it should be refined to better capture the evidence of analytic processes.
Kramberger, Petra; Urbas, Lidija; Štrancar, Aleš
2015-01-01
Downstream processing of nanoplexes (viruses, virus-like particles, bacteriophages) is characterized by complexity of the starting material, number of purification methods to choose from, regulations that are setting the frame for the final product and analytical methods for upstream and downstream monitoring. This review gives an overview on the nanoplex downstream challenges and chromatography based analytical methods for efficient monitoring of the nanoplex production.
Manufacturing data analytics using a virtual factory representation.
Jain, Sanjay; Shao, Guodong; Shin, Seung-Jun
2017-01-01
Large manufacturers have been using simulation to support decision-making for design and production. However, with the advancement of technologies and the emergence of big data, simulation can be utilised to perform and support data analytics for associated performance gains. This requires not only significant model development expertise, but also huge data collection and analysis efforts. This paper presents an approach, within the frameworks of Design Science Research Methodology and prototyping, to address the challenge of increasing the use of modelling, simulation and data analytics in manufacturing by reducing the development effort. Manufacturing simulation models are presented both as data analytics applications in themselves and as support for other data analytics applications, serving as data generators and as a tool for validation. The virtual factory concept is presented as the vehicle for manufacturing modelling and simulation. A virtual factory goes beyond traditional simulation models of factories to include multi-resolution modelling capabilities, thus allowing analysis at varying levels of detail. A path is proposed for implementation of the virtual factory concept that builds on developments in technologies and standards. A virtual machine prototype is provided as a demonstration of the use of a virtual representation for manufacturing data analytics.
Buonfiglio, Marzia; Toscano, M; Puledda, F; Avanzini, G; Di Clemente, L; Di Sabato, F; Di Piero, V
2015-03-01
Habituation is considered one of the most basic mechanisms of learning. Habituation deficit to several sensory stimulations has been defined as a trait of the migraine brain and has also been observed in other disorders. On the other hand, an analytic information processing style is characterized by the habit of continually evaluating stimuli, and it has been associated with migraine. We investigated a possible correlation between lack of habituation of visual evoked potentials and analytic cognitive style in healthy subjects. According to the Sternberg-Wagner self-assessment inventory, 15 healthy volunteers (HV) with a high analytic score and 15 HV with a high global score were recruited. Both groups underwent visual evoked potential recordings after psychological evaluation. We observed a significant lack of habituation in analytic individuals compared with the global group. In conclusion, reduced habituation of visual evoked potentials has been observed in analytic subjects. Our results suggest that further research should be undertaken on the relationship between analytic cognitive style and lack of habituation in both physiological and pathophysiological conditions.
How Do Gut Feelings Feature in Tutorial Dialogues on Diagnostic Reasoning in GP Traineeship?
ERIC Educational Resources Information Center
Stolper, C. F.; Van de Wiel, M. W. J.; Hendriks, R. H. M.; Van Royen, P.; Van Bokhoven, M. A.; Van der Weijden, T.; Dinant, G. J.
2015-01-01
Diagnostic reasoning is considered to be based on the interaction between analytical and non-analytical cognitive processes. Gut feelings, a specific form of non-analytical reasoning, play a substantial role in diagnostic reasoning by general practitioners (GPs) and may activate analytical reasoning. In GP traineeships in the Netherlands, trainees…
Trends in Process Analytical Technology: Present State in Bioprocessing.
Jenzsch, Marco; Bell, Christian; Buziol, Stefan; Kepert, Felix; Wegele, Harald; Hakemeyer, Christian
2017-08-04
Process analytical technology (PAT), the regulatory initiative for incorporating quality in pharmaceutical manufacturing, is an area of intense research and interest. If PAT is effectively applied to bioprocesses, it can increase process understanding and control, and mitigate the risk of substandard drug products to both manufacturer and patient. To optimize the benefits of PAT, the entire PAT framework must be considered and each element of PAT must be carefully selected, including sensor and analytical technology, data analysis techniques, control strategies and algorithms, and process optimization routines. This chapter discusses the current state of PAT in the biopharmaceutical industry, including several case studies demonstrating the degree of maturity of various PAT tools. Graphical Abstract: Hierarchy of QbD components.
UV-Vis as quantification tool for solubilized lignin following a single-shot steam process.
Lee, Roland A; Bédard, Charles; Berberi, Véronique; Beauchet, Romain; Lavoie, Jean-Michel
2013-09-01
In this short communication, UV/Vis spectroscopy was used as an analytical tool for the quantification of lignin concentrations in aqueous media. A significant correlation was determined between absorbance and the concentration of lignin in solution. For this study, lignin was produced from different types of biomass (willow, aspen, softwood, canary grass and hemp) using steam processes. Quantification was performed at 212, 225, 237, 270, 280 and 287 nm. UV-Vis quantification of lignin was found suitable for different types of biomass, making it a time-saving analytical system that could lend itself to use as a Process Analytical Tool (PAT) in biorefineries utilizing steam processes or comparable approaches. Copyright © 2013 Elsevier Ltd. All rights reserved.
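The calibration implied here is a straight-line fit of absorbance against concentration, inverted to quantify unknowns. A minimal sketch with invented data points at a single wavelength:

```python
# Beer-Lambert-style linear calibration: absorbance vs. lignin concentration.
import numpy as np

conc = np.array([0.0, 0.05, 0.10, 0.20, 0.40])          # g/L lignin standards
absorbance = np.array([0.01, 0.12, 0.23, 0.46, 0.90])   # readings at, e.g., 280 nm

slope, intercept = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]

def quantify(a: float) -> float:
    """Concentration (g/L) from an absorbance reading via the calibration line."""
    return (a - intercept) / slope

print(f"slope={slope:.3f} L/g, r^2={r**2:.4f}, unknown at A=0.35 -> {quantify(0.35):.3f} g/L")
```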
Analytic information processing style in epilepsy patients.
Buonfiglio, Marzia; Di Sabato, Francesco; Mandillo, Silvia; Albini, Mariarita; Di Bonaventura, Carlo; Giallonardo, Annateresa; Avanzini, Giuliano
2017-08-01
Relevant to the study of epileptogenesis is learning processing, given the pivotal role that neuroplasticity assumes in both mechanisms. Recently, evoked potential analyses showed a link between analytic cognitive style and altered neural excitability in both migraine and healthy subjects, regardless of cognitive impairment or psychological disorders. In this study we evaluated the analytic/global and visual/auditory perceptual dimensions of cognitive style in patients with epilepsy. Twenty-five cryptogenic temporal lobe epilepsy (TLE) patients, matched with 25 idiopathic generalized epilepsy (IGE) sufferers and 25 healthy volunteers, were recruited and participated in three cognitive style tests: the "Sternberg-Wagner Self-Assessment Inventory", the C. Cornoldi test series called AMOS, and the Mariani Learning Style Questionnaire. Our results demonstrate a significant association between analytic cognitive style and both IGE and TLE, respectively with a predominant auditory and visual analytic style (ANOVA: p values < 0.0001). These findings should encourage further research into information processing style and its neurophysiological correlates in epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.
Kelly, David; Budd, Kenneth; Lefebvre, Daniel D.
2006-01-01
The biotransformation of HgII in pH-controlled and aerated algal cultures was investigated. Previous researchers have observed losses in Hg detection in vitro with the addition of cysteine under acid reduction conditions in the presence of SnCl2. They proposed that this was the effect of Hg-thiol complexing. The present study found that cysteine-Hg, protein and nonprotein thiol chelates, and nucleoside chelates of Hg were all fully detectable under acid reduction conditions without previous digestion. Furthermore, organic (R-Hg) mercury compounds could not be detected under either the acid or alkaline reduction conditions, and only β-HgS was detected under alkaline and not under acid SnCl2 reduction conditions. The blue-green alga Limnothrix planctonica biotransformed the bulk of HgII applied as HgCl2 into a form with the analytical properties of β-HgS. Similar results were obtained for the eukaryotic alga Selenastrum minutum. No evidence for the synthesis of organomercurials such as CH3Hg+ was obtained from analysis of either airstream or biomass samples under the aerobic conditions of the study. An analytical procedure that involved both acid and alkaline reduction was developed. It provides the first selective method for the determination of β-HgS in biological samples. Under aerobic conditions, HgII is biotransformed mainly into β-HgS (meta-cinnabar), and this occurs in both prokaryotic and eukaryotic algae. This has important implications with respect to identification of mercury species and cycling in aquatic habitats. PMID:16391065
Patton, C.J.; Fischer, A.E.; Campbell, W.H.; Campbell, E.R.
2002-01-01
Development, characterization, and operational details of an enzymatic, air-segmented continuous-flow analytical method for colorimetric determination of nitrate + nitrite in natural-water samples are described. This method is similar to U.S. Environmental Protection Agency method 353.2 and U.S. Geological Survey method I-2545-90, except that nitrate is reduced to nitrite by soluble nitrate reductase (NaR, EC 1.6.6.1) purified from corn leaves rather than by a packed-bed cadmium reactor. A three-channel, air-segmented continuous-flow analyzer, configured for simultaneous determination of nitrite (0.020-1.000 mg-N/L) and nitrate + nitrite (0.05-5.00 mg-N/L) by the nitrate reductase and cadmium-reduction methods, was used to characterize the analytical performance of the enzymatic reduction method. At a sampling rate of 90 h⁻¹, sample interaction was less than 1% for all three methods. Method detection limits were 0.001 mg NO2⁻-N/L for nitrite, 0.003 mg NO3⁻ + NO2⁻-N/L for nitrate + nitrite by the cadmium-reduction method, and 0.006 mg NO3⁻ + NO2⁻-N/L for nitrate + nitrite by the enzymatic-reduction method. Reduction of nitrate to nitrite by both methods was greater than 95% complete over the entire calibration range. The difference between the means of nitrate + nitrite concentrations in 124 natural-water samples determined simultaneously by the two methods was not significantly different from zero at the p = 0.05 level.
Patton, Charles J.; Kryskalla, Jennifer R.
2013-01-01
A multiyear research effort at the U.S. Geological Survey (USGS) National Water Quality Laboratory (NWQL) evaluated several commercially available nitrate reductase (NaR) enzymes as replacements for toxic cadmium in longstanding automated colorimetric air-segmented continuous-flow analyzer (CFA) methods for determining nitrate plus nitrite (NOx) in water. This research culminated in USGS approved standard- and low-level enzymatic reduction, colorimetric automated discrete analyzer NOx methods that have been in routine operation at the NWQL since October 2011. The enzyme used in these methods (AtNaR2) is a product of recombinant expression of NaR from Arabidopsis thaliana (L.) Heynh. (mouseear cress) in the yeast Pichia pastoris. Because the scope of the validation report for these new automated discrete analyzer methods, published as U.S. Geological Survey Techniques and Methods 5–B8, was limited to performance benchmarks and operational details, extensive foundational research with different enzymes—primarily YNaR1, a product of recombinant expression of NaR from Pichia angusta in the yeast Pichia pastoris—remained unpublished until now. This report documents research and development at the NWQL that was foundational to development and validation of the discrete analyzer methods. It includes: (1) details of instrumentation used to acquire kinetics data for several NaR enzymes in the presence and absence of known or suspected inhibitors in relation to reaction temperature and reaction pH; and (2) validation results—method detection limits, precision and bias estimates, spike recoveries, and interference studies—for standard- and low-level automated colorimetric CFA-YNaR1 reduction NOx methods in relation to corresponding USGS approved CFA cadmium-reduction (CdR) NOx methods. The cornerstone of this validation is paired sample statistical and graphical analysis of NOx concentrations from more than 3,800 geographically and seasonally diverse surface-water and groundwater samples that were analyzed in parallel by CFA-CdR and CFA enzyme-reduction methods. Finally, (3) demonstration of a semiautomated batch procedure in which 2-milliliter analyzer cups or disposable spectrophotometer cuvettes serve as reaction vessels for enzymatic reduction of nitrate to nitrite prior to analytical determinations. After the reduction step, analyzer cups are loaded onto CFA, flow injection, or discrete analyzers for simple, rapid, automatic nitrite determinations. In the case of manual determinations, analysts dispense colorimetric reagents into cuvettes containing post-reduction samples, allow time for color to develop, insert cuvettes individually into a spectrophotometer, and record percent transmittance or absorbance in relation to a reagent blank. Data presented here demonstrate equivalent analytical performance of enzymatic reduction NOx methods in these various formats to that of benchmark CFA-CdR NOx methods.
Noise control prediction for high-speed, propeller-driven aircraft
NASA Technical Reports Server (NTRS)
Wilby, J. F.; Rennison, D. C.; Wilby, E. G.; Marsh, A. H.
1980-01-01
An analytical study is described which explores add-on treatments and advanced concepts for the reduction of noise levels in three high-speed aircraft driven by propellers. Noise reductions of 25 to 28 dB are required to achieve a goal of an A-weighted sound level not greater than 80 dB. It is found that only a double-wall system, with a limp inner wall or trim panel, can achieve the required noise reductions. Weight penalties are estimated for the double-wall treatments. These penalties are 0.75% to 1.51% of the aircraft takeoff weight for the particular baseline designs selected.
Food carotenoids: analysis, composition and alterations during storage and processing of foods.
Rodriguez-Amaya, Delia B
2003-01-01
Substantial progress has been achieved in recent years in refining the analytical methods and evaluating the accuracy of carotenoid data. Although carotenoid analysis is inherently difficult and continues to be error prone, more complete and reliable data are now available. Rather than expressing the analytical results as retinol equivalents, there is a tendency to present the concentrations of individual carotenoids, particularly beta-carotene, beta-cryptoxanthin, alpha-carotene, lycopene, lutein and zeaxanthin, the carotenoids found in human plasma and considered to be important to human health in terms of provitamin A activity and/or reduction of the risk of developing degenerative diseases. With the considerable effort directed to carotenoid analysis, many food sources have now been analyzed in different countries. The carotenoid composition of foods varies qualitatively and quantitatively. Even in a given food, compositional variability occurs because of factors such as stage of maturity, variety or cultivar, climate or season, part of the plant consumed, production practices, post-harvest handling, processing and storage of food. During processing, isomerization of trans-carotenoids, the usual configuration in nature, to the cis-forms occurs, with consequent alteration of the carotenoids' bioavailability and biological activity. Isomerization is promoted by light, heat and acids. The principal cause of carotenoid loss during processing and storage of food is enzymatic or non-enzymatic oxidation of the highly unsaturated carotenoid molecules. The occurrence and extent of oxidation depend on the presence of oxygen, metals, enzymes, unsaturated lipids, prooxidants and antioxidants; exposure to light; the type and physical state of the carotenoids present; the severity and duration of processing; the packaging material; and storage conditions. Thus, retention of carotenoids has been the major concern in the preparation, processing and storage of foods. In recent years, however, the effect of processing on bioavailability has also come into focus. More than a century after their discovery, carotenoids continue to be intensely investigated in various areas. This article aims to give an overview of current knowledge in Food Science and Technology bearing on the role of carotenoids in human health.
NASA Astrophysics Data System (ADS)
Birky, Alicia K.
2008-10-01
Significant reductions in greenhouse emissions from personal transportation will require a transition to an alternative technology regime based on renewable energy sources. Two bodies of research, the quasi-evolutionary (QE) model and the multi-level perspective (MLP) assert that processes within niches play a fundamental role in such transitions. This research asks whether the description of transitions based on this niche hypothesis and its underlying assumptions is consistent with the historical U.S. transition to motor vehicles at the beginning of the 20th century. Unique to this dissertation is the combination of the perspective of the entrepreneur with co-evolutionary approaches to socio-technical transitions. This approach is augmented with concepts from the industry life-cycle model and with a taxonomy of mechanisms of learning. Using this analytic framework, I examine specifically the role of entrepreneurial behavior and processes within and among firms in the co-evolution of technologies and institutions during the transition to motor vehicles. I find that niche markets played an important role in the development of the technology, institutions, and the industry. However, I also find that the diffusion of the automobile is not consistent with the niche hypothesis in the following ways: (1) product improvements and cost reductions were not realized in niche markets, but were achieved simultaneously with diffusion into mass markets; (2) in addition to learning-by-doing and learning-by-interacting with users, knowledge spillovers and interacting with suppliers were critical in this process; (3) cost reductions were not automatic results of expanding markets, but rather arose from the strategies of entrepreneurs based on personal perspectives and values. This finding supports the use of a behavioral approach with a micro-focus in the analysis of socio-technical change. I also find that the emergence and diffusion of the motor vehicle can only be understood by considering the combination of developments and processes in multiple regimes, within niches, and within the wider technical, institutional, and ecological complex (TIEC). For the automobile, the process of regime development was more consistent with a fit-stretch pattern of gradual unfolding and adaptation than one of niche proliferation and rapid regime renewal described in the literature.
Ito, Kaori; Yamamoto, Takayuki; Oyama, Yuriko; Tsuruma, Rieko; Saito, Eriko; Saito, Yoshikazu; Ozu, Takeshi; Honjoh, Tsutomu; Adachi, Reiko; Sakai, Shinobu; Akiyama, Hiroshi; Shoji, Masahiro
2016-09-01
Enzyme-linked immunosorbent assay (ELISA) is commonly used to determine food allergens in food products. However, a significant number of ELISAs give an erroneous result, especially when applied to highly processed food. Accordingly, an improved ELISA, which utilizes an extraction solution comprising the surfactant sodium lauryl sulfate (SDS) and the reductant 2-mercaptoethanol (2-ME), has been specially developed to analyze food allergens in highly processed food by enhancing analyte protein extraction. Recently, however, the use of 2-ME has become undesirable. In the present study, a new extraction solution containing a human- and eco-friendly reductant, which is convenient to use at the food manufacturing site, has been established. Among three chemicals with different reducing properties (sodium sulfite, tris(3-hydroxypropyl)phosphine, and mercaptoethylamine), sodium sulfite was selected as the 2-ME substitute. The protein extraction ability of the SDS/0.1 M sodium sulfite solution was comparable to that of the SDS/2-ME solution. Next, the ELISA performance for egg, milk, wheat, peanut, and buckwheat was evaluated using model processed foods and commercially available food products. The data showed that the SDS/0.1 M sulfite ELISA significantly correlated with the SDS/2-ME ELISA for all food allergens examined (p < 0.01), thereby establishing the validity of the SDS/0.1 M sulfite ELISA performance. Furthermore, the new SDS/0.1 M sulfite solution was investigated for its applicability to the lateral-flow (LF) test. The results demonstrated successful analysis of food allergens in processed food, consistent with the SDS/0.1 M sulfite ELISA results. Accordingly, a harmonized analysis system for processed food, comprising a screening LF test and a quantitative ELISA with an identical extraction solution, has been established. The ELISA based on the SDS/0.1 M sulfite extraction solution has now been authorized as the revised official method for food allergen analysis in Japan.
Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures
NASA Astrophysics Data System (ADS)
Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.
2014-12-01
In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes, which will increase the size of current data archives by a factor of 10-100 times. For example, the next Climate Model Inter-comparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 Peta-bytes, while the upcoming next generation of NASA decadal Earth Observing instruments are expected to collect tens of Giga-bytes/day. In radio-astronomy, the Square Kilometre Array (SKA) will collect data in the Exa-bytes/day range, of which (after reduction and processing) around 1.5 Exa-bytes/year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that will allow system architects to model their expected data processing workflow, and determine the network, computational and storage resources needed to prepare their data for scientific analysis. In order to fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrary complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources), and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for 4 different use cases from distinct science disciplines: climate science, astronomy, hydrology and a generic cloud computing use case. This talk will present preliminary results and discuss how DAWN can be evolved into a powerful tool for designing system architectures for data intensive science.
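While DAWN itself is not shown here, its core idea (estimating elapsed time for alternative workflows from data volumes and transfer/compute rates) can be sketched in a few lines. Task names and numbers below are hypothetical, and only serial execution with a single time estimator is modeled; DAWN additionally evaluates uncertainty and other estimators over arbitrary workflows.

```python
# Toy workflow cost model: compare two architectures for reducing a dataset.
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str
    data_gb: float          # data volume moved or processed
    rate_gb_per_s: float    # network bandwidth or compute throughput

    def seconds(self) -> float:
        return self.data_gb / self.rate_gb_per_s

def elapsed(workflow: List[Task]) -> float:
    return sum(task.seconds() for task in workflow)   # serial execution assumed

move_then_reduce = [Task("transfer raw", 10_000, 1.0), Task("reduce locally", 10_000, 5.0)]
reduce_at_source = [Task("reduce at source", 10_000, 2.0), Task("transfer product", 500, 1.0)]

for name, wf in [("move-then-reduce", move_then_reduce), ("reduce-at-source", reduce_at_source)]:
    print(f"{name}: {elapsed(wf) / 3600:.1f} h")
```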
Mahieu, Nathaniel G.; Spalding, Jonathan L.; Patti, Gary J.
2016-01-01
Motivation: Current informatic techniques for processing raw chromatography/mass spectrometry data break down under several common, non-ideal conditions. Importantly, hydrophilic interaction liquid chromatography (a key separation technology for metabolomics) produces data which are especially challenging to process. We identify three critical points of failure in current informatic workflows: compound-specific drift, integration region variance, and naive missing value imputation. We implement the Warpgroup algorithm to address these challenges. Results: Warpgroup adds peak subregion detection, consensus integration bound detection, and intelligent missing value imputation steps to the conventional informatic workflow. When compared with the conventional workflow, Warpgroup made major improvements to the processed data. The coefficient of variation for peaks detected in replicate injections of a complex Escherichia coli extract was halved (a reduction of 19%). Integration regions across samples were much more robust. Additionally, many signals lost by the conventional workflow were 'rescued' by the Warpgroup refinement, thereby resulting in greater analyte coverage in the processed data. Availability and implementation: Warpgroup is an open source R package available on GitHub at github.com/nathaniel-mahieu/warpgroup. The package includes example data and XCMS compatibility wrappers for ease of use. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: nathaniel.mahieu@wustl.edu or gjpattij@wustl.edu PMID:26424859
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Minjing; Gao, Yuqian; Qian, Wei-Jun
Microbially mediated biogeochemical processes are catalyzed by enzymes that control the transformation of carbon, nitrogen, and other elements in the environment. The dynamic linkage between enzymes and biogeochemical species transformation has, however, rarely been investigated because of the lack of analytical approaches to efficiently and reliably quantify enzymes and their dynamics in soils and sediments. Herein, we developed a signature peptide-based technique for sensitively quantifying dissimilatory and assimilatory enzymes, using nitrate-reducing enzymes in a hyporheic zone sediment as an example. Moreover, the measured changes in enzyme concentration were found to correlate with the nitrate reduction rate in a way different from that inferred from biogeochemical models based on biomass or functional genes as surrogates for functional enzymes. This phenomenon has important implications for understanding and modeling the dynamics of microbial community functions and biogeochemical processes in environments. Our results also demonstrate the importance of enzyme quantification for the identification and interrogation of those biogeochemical processes with low metabolite concentrations as a result of faster enzyme-catalyzed consumption of metabolites than their production. The dynamic enzyme behaviors provide a basis for the development of enzyme-based models to describe the relationship between the microbial community and biogeochemical processes.
The problem of self-disclosure in psychoanalysis.
Meissner, W W
2002-01-01
The problem of self-disclosure is explored in relation to currently shifting paradigms of the nature of the analytic relation and analytic interaction. Relational and intersubjective perspectives emphasize the role of self-disclosure as not merely allowable, but as an essential facilitating aspect of the analytic dialogue, in keeping with the role of the analyst as a contributing partner in the process. At the opposite extreme, advocates of classical anonymity stress the importance of neutrality and abstinence. The paper seeks to chart a course between unconstrained self-disclosure and absolute anonymity, both of which foster misalliances. Self-disclosure is seen as at times contributory to the analytic process, and at times deleterious. The decision whether to self-disclose, what to disclose, and when and how, should be guided by the analyst's perspective on neutrality, conceived as a mental stance in which the analyst assesses and decides what, at any given point, seems to contribute to the analytic process and the patient's therapeutic benefit. The major risk in self-disclosure is the tendency to draw the analytic interaction into the real relation between analyst and patient, thus diminishing or distorting the therapeutic alliance, mitigating transference expression, and compromising therapeutic effectiveness.
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
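A minimal sketch of the partition/train/compare loop described above, using scikit-learn on synthetic data; the features, target, and candidate algorithms are placeholders, not the manuscript's actual models or livestock data.

```python
# Partition the data, build models on the training split, and compare
# classifier accuracy on naive (held-out) data, as outlined above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=0)):
    model.fit(X_train, y_train)                       # build/refine on training data
    acc = accuracy_score(y_test, model.predict(X_test))  # evaluate on naive data
    print(type(model).__name__, round(acc, 3))
```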
Kling, Maximilian; Seyring, Nicole; Tzanova, Polia
2016-09-01
Economic instruments provide significant potential for countries with low municipal waste management performance in decreasing landfill rates and increasing recycling rates for municipal waste. In this research, strengths and weaknesses of landfill tax, pay-as-you-throw charging systems, deposit-refund systems and extended producer responsibility schemes are compared, focusing on conditions in countries with low waste management performance. In order to prioritise instruments for implementation in these countries, the analytic hierarchy process is applied using results of a literature review as input for the comparison. The assessment reveals that pay-as-you-throw is the most preferable instrument when utility-related criteria are regarded (wb = 0.35; analytic hierarchy process distributive mode; absolute comparison), mainly owing to its waste prevention effect, closely followed by landfill tax (wb = 0.32). Deposit-refund systems (wb = 0.17) and extended producer responsibility (wb = 0.16) rank third and fourth, with marginal differences owing to their similar nature. When cost-related criteria are additionally included in the comparison, landfill tax seems to provide the highest utility-cost ratio. Data from the literature concerning costs (contrary to utility-related criteria) are currently not sufficient for a robust ranking according to the utility-cost ratio. In general, the analytic hierarchy process is seen as a suitable method for assessing economic instruments in waste management. Independent of the chosen analytic hierarchy process mode, the results provide valuable indications for policy-makers on the application of economic instruments, as well as on their specific strengths and weaknesses. Nevertheless, the instruments need to be put in the country-specific context along with the results of this analytic hierarchy process application before practical decisions are made.
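For readers unfamiliar with the mechanics behind such weights, the sketch below shows the core AHP computation: derive priority weights from a pairwise comparison matrix via its principal eigenvector and compute a consistency index. The 4x4 matrix is a made-up example for the four instruments compared above, not the study's actual judgments.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for: PAYT, landfill tax,
# deposit-refund, EPR. Entries are illustrative Saaty-scale judgments.
A = np.array([
    [1.0, 1.0, 2.0, 2.0],
    [1.0, 1.0, 2.0, 2.0],
    [0.5, 0.5, 1.0, 1.0],
    [0.5, 0.5, 1.0, 1.0],
])

# Principal eigenvector -> priority weights (Saaty's eigenvalue method).
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency index: CI = (lambda_max - n) / (n - 1); 0 for a perfectly
# consistent matrix.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
print("weights:", w.round(3), "CI:", round(ci, 4))
```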
Insight solutions are correct more often than analytic solutions
Salvi, Carola; Bricolo, Emanuela; Kounios, John; Bowden, Edward; Beeman, Mark
2016-01-01
How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants’ solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants’ self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants’ analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. PMID:27667960
Sirichai, S; de Mello, A J
2001-01-01
The separation and detection of both print and film developing agents (CD-3 and CD-4) in photographic processing solutions using chip-based capillary electrophoresis is presented. For simultaneous detection of both analytes under identical experimental conditions a buffer pH of 11.9 is used to partially ionise the analytes. Detection is made possible by indirect fluorescence, where the ions of the analytes displace the anionic fluorescing buffer ion to create negative peaks. Under optimal conditions, both analytes can be analyzed within 30 s. The limits of detection for CD-3 and CD-4 are 0.17 mM and 0.39 mM, respectively. The applicability of the method for the analysis of seasoned photographic processing developer solutions is also examined.
The heuristic-analytic theory of reasoning: extension and evaluation.
Evans, Jonathan St B T
2006-06-01
An extensively revised heuristic-analytic theory of reasoning is presented incorporating three principles of hypothetical thinking. The theory assumes that reasoning and judgment are facilitated by the formation of epistemic mental models that are generated one at a time (singularity principle) by preconscious heuristic processes that contextualize problems in such a way as to maximize relevance to current goals (relevance principle). Analytic processes evaluate these models but tend to accept them unless there is good reason to reject them (satisficing principle). At a minimum, analytic processing of models is required so as to generate inferences or judgments relevant to the task instructions, but more active intervention may result in modification or replacement of default models generated by the heuristic system. Evidence for this theory is provided by a review of a wide range of literature on thinking and reasoning.
Esmonde-White, Karen A; Cuellar, Maryann; Uerpmann, Carsten; Lenain, Bruno; Lewis, Ian R
2017-01-01
Adoption of Quality by Design (QbD) principles, regulatory support of QbD, process analytical technology (PAT), and continuous manufacturing are major factors affecting new approaches to pharmaceutical manufacturing and bioprocessing. In this review, we highlight new technology developments, data analysis models, and applications of Raman spectroscopy, which have expanded the scope of Raman spectroscopy as a process analytical technology. Emerging technologies such as transmission and enhanced reflection Raman, and new approaches to using available technologies, expand the scope of Raman spectroscopy in pharmaceutical manufacturing, and now Raman spectroscopy is successfully integrated into real-time release testing, continuous manufacturing, and statistical process control. Since the last major review of Raman as a pharmaceutical PAT in 2010, many new Raman applications in bioprocessing have emerged. Exciting reports of in situ Raman spectroscopy in bioprocesses complement a growing scientific field of biological and biomedical Raman spectroscopy. Raman spectroscopy has made a positive impact as a process analytical and control tool for pharmaceutical manufacturing and bioprocessing, with demonstrated scientific and financial benefits throughout a product's lifecycle.
Impact of Educational Activities in Reducing Pre-Analytical Laboratory Errors
Al-Ghaithi, Hamed; Pathare, Anil; Al-Mamari, Sahimah; Villacrucis, Rodrigo; Fawaz, Naglaa; Alkindi, Salam
2017-01-01
Objectives Pre-analytic errors during diagnostic laboratory investigations can lead to increased patient morbidity and mortality. This study aimed to ascertain the effect of educational nursing activities on the incidence of pre-analytical errors resulting in non-conforming blood samples. Methods This study was conducted between January 2008 and December 2015. All specimens received at the Haematology Laboratory of the Sultan Qaboos University Hospital, Muscat, Oman, during this period were prospectively collected and analysed. Similar data from 2007 were collected retrospectively and used as a baseline for comparison. Non-conforming samples were defined as either clotted samples, haemolysed samples, use of the wrong anticoagulant, insufficient quantities of blood collected, incorrect/lack of labelling on a sample or lack of delivery of a sample in spite of a sample request. From 2008 onwards, multiple educational training activities directed at the hospital nursing staff and nursing students primarily responsible for blood collection were implemented on a regular basis. Results After initiating corrective measures in 2008, a progressive reduction in the percentage of non-conforming samples was observed from 2009 onwards. Despite a 127.84% increase in the total number of specimens received, there was a significant reduction in non-conforming samples from 0.29% in 2007 to 0.07% in 2015, resulting in an improvement of 75.86% (P <0.050). In particular, specimen identification errors decreased by 0.056%, with a 96.55% improvement. Conclusion Targeted educational activities directed primarily towards hospital nursing staff had a positive impact on the quality of laboratory specimens by significantly reducing pre-analytical errors. PMID:29062553
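The reported improvement figure is consistent with a relative reduction, assuming it was computed as

\[
\frac{0.29\% - 0.07\%}{0.29\%} \approx 0.7586 \approx 75.86\%.
\]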
NASA Astrophysics Data System (ADS)
Rose, K.; Bauer, J. R.; Baker, D. V.
2015-12-01
As big data computing capabilities are increasingly paired with spatial analytical tools and approaches, there is a need to ensure that the uncertainty associated with the datasets used in these analyses is adequately incorporated and portrayed in results. Often the products of spatial analyses, big data and otherwise, are developed using discontinuous, sparse, and often point-driven data to represent continuous phenomena. Results from these analyses are generally presented without clear explanations of the uncertainty associated with the interpolated values. The Variable Grid Method (VGM) offers users a flexible approach designed for application to a variety of analyses where there is a need to study, evaluate, and analyze spatial trends and patterns while maintaining a connection to, and communicating, the uncertainty in the underlying spatial datasets. The VGM outputs a simultaneous visualization representative of the spatial data analyses and a quantification of the underlying uncertainties, which can be calculated using data related to sample density, sample variance, interpolation error, or uncertainty calculated from multiple simulations. In this presentation we will show how we are utilizing Hadoop to store and perform spatial analysis through the development of custom Spark and MapReduce applications that incorporate ESRI Hadoop libraries. The team will present custom 'Big Data' geospatial applications that run on the Hadoop cluster and integrate the team's probabilistic VGM approach with ESRI ArcMap. The VGM-Hadoop tool has been specially built as a multi-step MapReduce application running on the Hadoop cluster for the purpose of data reduction. This reduction is accomplished by generating multi-resolution, non-overlapping, attributed topology that is then further processed using ESRI's geostatistical analyst to convey a probabilistic model of a chosen study region. Finally, we will share our approach for implementation of data reduction and topology generation via custom multi-step Hadoop applications, performance benchmarking comparisons, and Hadoop-centric opportunities for greater parallelization of geospatial operations. The presentation includes examples of the approach being applied to a range of subsurface, geospatial studies (e.g. induced seismicity risk).
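The sketch below illustrates the general variable-grid idea in plain Python (not the VGM-Hadoop implementation): recursively split a cell while it contains many samples and high variance, and emit coarser cells, each carrying its own mean, variance, and sample count, where data are sparse. The quadtree split and the thresholds are illustrative assumptions.

```python
import numpy as np

def variable_grid(x, y, v, x0, x1, y0, y1, max_n=50, max_var=1.0, depth=0, out=None):
    """Emit non-overlapping cells (bounds, mean, variance, n) at variable resolution."""
    if out is None:
        out = []
    m = (x >= x0) & (x < x1) & (y >= y0) & (y < y1)
    n = int(m.sum())
    if n == 0:
        return out
    var = float(v[m].var())
    # Stop splitting when the cell is sparse, homogeneous, or deep enough.
    if n <= max_n or var <= max_var or depth >= 8:
        out.append((x0, x1, y0, y1, float(v[m].mean()), var, n))
        return out
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    for (a, b, c, d) in [(x0, xm, y0, ym), (xm, x1, y0, ym),
                         (x0, xm, ym, y1), (xm, x1, ym, y1)]:
        variable_grid(x, y, v, a, b, c, d, max_n, max_var, depth + 1, out)
    return out

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 5000), rng.uniform(0, 1, 5000)
v = np.sin(6 * x) + rng.normal(0, 0.2, 5000)
cells = variable_grid(x, y, v, 0, 1, 0, 1)
print(len(cells), "non-overlapping cells, each carrying its own uncertainty")
```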
INTEGRATED ENVIRONMENTAL ASSESSMENT OF THE MID-ATLANTIC REGION WITH ANALYTICAL NETWORK PROCESS
A decision analysis method for integrating environmental indicators was developed. This was a combination of Principal Component Analysis (PCA) and the Analytic Network Process (ANP). Being able to take into account interdependency among variables, the method was capable of ran...
Alejo, Luz; Atkinson, John; Guzmán-Fierro, Víctor; Roeckel, Marlene
2018-05-16
Computational self-adapting methods (Support Vector Machines, SVM) are compared with an analytical method in effluent composition prediction of a two-stage anaerobic digestion (AD) process. Experimental data for the AD of poultry manure were used. The analytical method considers the protein as the only source of ammonia production in AD after degradation. Total ammonia nitrogen (TAN), total solids (TS), chemical oxygen demand (COD), and total volatile solids (TVS) were measured in the influent and effluent of the process. The TAN concentration in the effluent was predicted, this being the most inhibiting and polluting compound in AD. Despite the limited data available, the SVM-based model outperformed the analytical method for the TAN prediction, achieving a relative average error of 15.2% against 43% for the analytical method. Moreover, SVM showed higher prediction accuracy in comparison with Artificial Neural Networks. This result reveals the future promise of SVM for prediction in non-linear and dynamic AD processes.
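A hedged sketch of the machine-learning side of this comparison: an SVM regressor predicting effluent TAN from influent measurements, evaluated with the relative-average-error metric quoted above. The data here are synthetic placeholders standing in for the study's TAN, TS, COD, and TVS measurements.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.uniform(0.1, 1.0, (60, 4))       # stand-ins for influent TAN, TS, COD, TVS
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 60)  # synthetic effluent TAN

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:45], y[:45])                 # train on the first 45 samples
pred = model.predict(X[45:])              # predict the held-out 15

rel_err = np.mean(np.abs(pred - y[45:]) / np.abs(y[45:])) * 100
print(f"relative average error: {rel_err:.1f}%")
```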
1991-09-01
[Extraction residue: fragments of this report's table of contents and body text.] The recoverable content indicates chapters on the Analytic Hierarchy Process (AHP) and on the implementation of CERTS using AHP, covering consistency and the user interface. The body text notes that Expert Choice, which implements the AHP, an approach to multi-criteria decision making, was used to build the proposed technique into a Decision Support System.
ERIC Educational Resources Information Center
Fisher, James E.; Sealey, Ronald W.
The study describes the analytical pragmatic structure of concepts and applies this structure to the legal concept of procedural due process. This structure consists of form, purpose, content, and function. The study conclusions indicate that the structure of the concept of procedural due process, or any legal concept, is not the same as the…
Heuristic and analytic processing in online sports betting.
d'Astous, Alain; Di Gaspero, Marc
2015-06-01
This article presents the results of two studies that examine the occurrence of heuristic (i.e., intuitive and fast) and analytic (i.e., deliberate and slow) processes among people who engage in online sports betting on a regular basis. The first study was qualitative and was conducted with a convenience sample of 12 regular online sports gamblers who described the processes by which they arrive at a sports betting decision. The results of this study showed that betting online on sports events involves a mix of heuristic and analytic processes. The second study consisted of a survey of 161 online sports gamblers in which performance in terms of monetary gains, experience in online sports betting, propensity to collect and analyze relevant information prior to betting, and use of bookmaker odds were measured. This study showed that heuristic and analytic processes act as mediators of the relationship between experience and performance. The findings stemming from these two studies give some insights into gamblers' modes of thinking and behaviors in an online sports betting context and show the value of the dual mediation process model for research that looks at gambling activities from a judgment and decision making perspective.
Faigen, Zachary; Deyneka, Lana; Ising, Amy; Neill, Daniel; Conway, Mike; Fairchild, Geoffrey; Gunn, Julia; Swenson, David; Painter, Ian; Johnson, Lauren; Kiley, Chris; Streichert, Laura; Burkom, Howard
2015-01-01
Introduction: We document a funded effort to bridge the gap between constrained scientific challenges of public health surveillance and methodologies from academia and industry. Component tasks are the collection of epidemiologists’ use case problems, multidisciplinary consultancies to refine them, and dissemination of problem requirements and shareable datasets. We describe an initial use case and consultancy as a concrete example and challenge to developers. Materials and Methods: Supported by the Defense Threat Reduction Agency Biosurveillance Ecosystem project, the International Society for Disease Surveillance formed an advisory group to select tractable use case problems and convene inter-disciplinary consultancies to translate analytic needs into well-defined problems and to promote development of applicable solution methods. The initial consultancy’s focus was a problem originated by the North Carolina Department of Health and its NC DETECT surveillance system: Derive a method for detection of patient record clusters worthy of follow-up based on free-text chief complaints and without syndromic classification. Results: Direct communication between public health problem owners and analytic developers was informative to both groups and constructive for the solution development process. The consultancy achieved refinement of the asyndromic detection challenge and of solution requirements. Participants summarized and evaluated solution approaches and discussed dissemination and collaboration strategies. Practice Implications: A solution meeting the specification of the use case described above could improve human monitoring efficiency with expedited warning of events requiring follow-up, including otherwise overlooked events with no syndromic indicators. This approach can remove obstacles to collaboration with efficient, minimal data-sharing and without costly overhead. PMID:26834939
Contains basic information on the role and origins of the Selected Analytical Methods, including the formation of the Homeland Security Laboratory Capacity Work Group and the Environmental Evaluation Analytical Process Roadmap for Homeland Security Events.
NASA Technical Reports Server (NTRS)
Sauer, Richard L. (Inventor); Akse, James R. (Inventor); Thompson, John O. (Inventor); Atwater, James E. (Inventor)
1999-01-01
An ammonia monitor and method of use are disclosed. A continuous, real-time determination of the concentration of ammonia in an aqueous process stream is possible over a wide dynamic range of concentrations. No reagents are required because pH is controlled by an in-line solid-phase base. Ammonia is selectively transported across a membrane from the process stream to an analytical stream under pH control. The specific electrical conductance of the analytical stream is measured and used to determine the concentration of ammonia.
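A minimal sketch of how such a conductance readout could be turned into a concentration, assuming a simple linear calibration against known standards; the values and the linear form are invented for illustration and are not taken from the patent.

```python
import numpy as np

# Hypothetical calibration: specific conductance measured for known
# ammonia standards, fitted with a straight line, then inverted.
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0])     # mg/L NH3 standards
std_cond = np.array([2.1, 8.0, 13.8, 25.5, 60.2])  # microsiemens/cm

slope, intercept = np.polyfit(std_conc, std_cond, 1)

def ammonia_mg_per_l(conductance_us_cm):
    """Invert the linear calibration to estimate ammonia concentration."""
    return (conductance_us_cm - intercept) / slope

print(round(ammonia_mg_per_l(30.0), 2), "mg/L")
```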
The Analytic Hierarchy Process and Participatory Decisionmaking
Daniel L. Schmoldt; Daniel L. Peterson; Robert L. Smith
1995-01-01
Managing natural resource lands requires social, as well as biophysical, considerations. Unfortunately, it is extremely difficult to accurately assess and quantify changing social preferences, and to aggregate conflicting opinions held by diverse social groups. The Analytic Hierarchy Process (AHP) provides a systematic, explicit, rigorous, and robust mechanism for...
Using Analytic Hierarchy Process in Textbook Evaluation
ERIC Educational Resources Information Center
Kato, Shigeo
2014-01-01
This study demonstrates the application of the analytic hierarchy process (AHP) in English language teaching materials evaluation, focusing in particular on its potential for systematically integrating different components of evaluation criteria in a variety of teaching contexts. AHP is a measurement procedure wherein pairwise comparisons are made…
Quantifying Spot Size Reduction of a 1.8 kA Electron Beam for Flash Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burris-Mog, Trevor John; Moir, David C.
2018-03-14
The spot size of Axis-I at the Dual Axis Radiographic Hydrodynamic Test facility was reduced by 15.5% by including a small diameter drift tube that acts to aperture the outer diameter of the electron beam. Comparing the measured values to both analytic calculations and results from a particle-in-cell model shows that one-third to one-half of the spot size reduction is due to a drop in beam emittance. We infer that one-half to two-thirds of the spot-size reduction is due to a reduction in beam-target interactions. Sources of emittance growth and the scaling of the final focal spot size with emittance and solenoid aberrations are also presented.
NASA Technical Reports Server (NTRS)
Barney, Timothy A.; Shin, Y. S.; Agrawal, B. N.
2001-01-01
This research develops an adaptive controller that actively suppresses a single-frequency disturbance source at a remote position and tests the system on the NPS Space Truss. The experimental results are then compared to those predicted by an ANSYS finite element model. The NPS space truss is a 3.7-meter long truss that simulates a space-borne appendage with sensitive equipment mounted at its extremities. One of two installed piezoelectric actuators and an Adaptive Multi-Layer LMS control law were used to effectively eliminate an axial component of the vibrations induced by a linear proof mass actuator mounted at one end of the truss. Experimental and analytical results both demonstrate reductions to the level of system noise. Vibration reductions in excess of 50 dB were obtained through experimentation and over 100 dB using ANSYS, demonstrating the ability to model this system with a finite element model. This report also proposes a method to use distributed quartz accelerometers to evaluate the location, direction, and energy of impacts on the NPS space truss, using the dSPACE data acquisition and processing system to capture the structural response and compare it to known reference signals.
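For context, the sketch below implements a generic single-frequency adaptive LMS canceller of the kind named above: two weights on sine/cosine references at the disturbance frequency, updated by the standard LMS rule. It is a textbook construction under assumed parameters, not the NPS controller or its tuning.

```python
import numpy as np

fs, f0, mu, n = 1000.0, 25.0, 0.01, 20000   # sample rate, disturbance freq, step, samples
t = np.arange(n) / fs
disturbance = np.sin(2 * np.pi * f0 * t + 0.7)   # measured axial vibration (synthetic)

w = np.zeros(2)                 # adaptive weights on sine/cosine references
residual = np.empty(n)
for k in range(n):
    x = np.array([np.sin(2 * np.pi * f0 * t[k]),
                  np.cos(2 * np.pi * f0 * t[k])])
    e = disturbance[k] - w @ x  # residual vibration after cancellation
    w += 2 * mu * e * x         # LMS weight update
    residual[k] = e

reduction_db = 10 * np.log10(np.mean(disturbance**2) / np.mean(residual[-1000:]**2))
print(f"steady-state residual power reduction: {reduction_db:.1f} dB")
```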
The prophylactic reduction of aluminium intake.
Lione, A
1983-02-01
The use of modern analytical methods has demonstrated that aluminium salts can be absorbed from the gut and concentrated in various human tissues, including bone, the parathyroids and brain. The neurotoxicity of aluminium has been extensively characterized in rabbits and cats, and high concentrations of aluminium have been detected in the brain tissue of patients with Alzheimer's disease. Various reports have suggested that high aluminium intakes may be harmful to some patients with bone disease or renal impairment. Fatal aluminium-induced neuropathies have been reported in patients on renal dialysis. Since there are no demonstrable consequences of aluminium deprivation, the prophylactic reduction of aluminium intake by many patients would appear prudent. In this report, the major sources of aluminium in foods and non-prescription drugs are summarized and alternative products are described. The most common foods that contain substantial amounts of aluminium-containing additives include some processed cheeses, baking powders, cake mixes, frozen doughs, pancake mixes, self-raising flours and pickled vegetables. The aluminium-containing non-prescription drugs include some antacids, buffered aspirins, antidiarrhoeal products, douches and haemorrhoidal medications. The advisability of recommending a low aluminium diet for geriatric patients is discussed in detail.
Reduction of polyatomic interferences in ICP-MS by collision/reaction cell (CRC-ICP-MS) techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eiden, Greg C; Barinaga, Charles J; Koppenaal, David W
2012-05-01
Polyatomic and other spectral interferences in plasma source mass spectrometry (PSMS) can be dramatically reduced using collision and reaction cells (CRC). These devices have been used for decades in fundamental studies of ion-molecule chemistry, but have only recently been applied to PSMS. Benefits of this approach as applied in inductively coupled plasma MS (ICP-MS) include interference reduction, isobar separation, and thermalization/focusing of ions. Novel ion-molecule chemistry schemes are now routinely designed and empirically evaluated with relative ease. These “chemical resolution” techniques can avert interferences that would otherwise require mass spectral resolutions of >600,000 (m/Δm). Purely physical ion beam processes, including collisional dampening and collisional dissociation, are also employed to provide improved sensitivity, resolution, and spectral simplicity. CRC techniques are now firmly entrenched in current-day ICP-MS technology, enabling unprecedented flexibility and freedom from many spectral interferences. A significant body of applications has now been reported in the literature. CRC techniques are found to be most useful for specialized or difficult analytical needs and situations, and are employed in both single- and multi-element determination modes.
Atmospheric Delay Reduction Using KARAT for GPS Analysis and Implications for VLBI
NASA Technical Reports Server (NTRS)
Ichikawa, Ryuichi; Hobiger, Thomas; Koyama, Yasuhiro; Kondo, Tetsuro
2010-01-01
We have been developing a state-of-the-art tool to estimate atmospheric path delays by raytracing through mesoscale analysis (MANAL) data, which is operationally used for numerical weather prediction by the Japan Meteorological Agency (JMA). The tools, which we have named KAshima RAytracing Tools (KARAT), are capable of calculating total slant delays and ray-bending angles considering real atmospheric phenomena. KARAT can estimate atmospheric slant delays using either Thayer's analytical 2-D ray-propagation model or a 3-D Eikonal solver. We compared PPP solutions using KARAT with those using the Global Mapping Function (GMF) and Vienna Mapping Function 1 (VMF1) for GPS sites of GEONET (GPS Earth Observation Network System), operated by the Geographical Survey Institute (GSI). In our comparison, 57 GEONET stations were processed for the year 2008. The KARAT solutions are slightly better than the solutions using VMF1 and GMF with a linear gradient model for horizontal and height positions. Our results imply that KARAT is a useful tool for efficient reduction of atmospheric path delays in radio-based space geodetic techniques such as GNSS and VLBI.
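A toy illustration of why slant delays matter at low elevation: the common mapping-function approximation scales the zenith delay by roughly 1/sin(elevation), which is exactly the kind of shortcut a full 3-D raytracer such as KARAT is meant to replace with a physically traced path. The zenith-delay value below is a typical magnitude, used here only for illustration.

```python
import numpy as np

zenith_delay_m = 2.4   # illustrative total zenith delay at a mid-latitude site
for elev_deg in (90, 30, 10, 5):
    # Crude flat-atmosphere mapping function: 1 / sin(elevation).
    slant = zenith_delay_m / np.sin(np.radians(elev_deg))
    print(f"elevation {elev_deg:2d} deg -> approx. slant delay {slant:5.2f} m")
```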
77 FR 13607 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-07
... Transformation Grants: Use of System Dynamic Modeling and Economic Analysis in Select Communities--New--National... community interventions. Using a system dynamics approach, CDC also plans to conduct simulation modeling... the development of analytic tools for system dynamics modeling under more limited conditions. The...
Principles of operation and data reduction techniques for the LOFT drag disc turbine transducer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverman, S.
An analysis of the single- and two-phase flow data applicable to the loss-of-fluid test (LOFT) is presented for the LOFT drag turbine transducer. Analytical models which were employed to correlate the experimental data are presented.
On the evaluation of derivatives of Gaussian integrals
NASA Technical Reports Server (NTRS)
Helgaker, Trygve; Taylor, Peter R.
1992-01-01
We show that by a suitable change of variables, the derivatives of molecular integrals over Gaussian-type functions required for analytic energy derivatives can be evaluated with significantly less computational effort than current formulations. The reduction in effort increases with the order of differentiation.
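The underlying identity (a standard property of Gaussian-type functions, stated here for orientation, not a result of the paper) is that differentiating with respect to a center coordinate raises the angular momentum:

\[
\frac{\partial}{\partial A_x}\, e^{-\alpha\,|\mathbf{r}-\mathbf{A}|^{2}} \;=\; 2\alpha\,(x - A_x)\, e^{-\alpha\,|\mathbf{r}-\mathbf{A}|^{2}},
\]

so each geometric derivative of an integral over such functions reduces to a linear combination of ordinary Gaussian integrals, and a change of variables that shortens this combination reduces the computational effort.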
Automation of an ion chromatograph for precipitation analysis with computerized data reduction
Hedley, Arthur G.; Fishman, Marvin J.
1982-01-01
Interconnection of an ion chromatograph, an autosampler, and a computing integrator to form an analytical system for simultaneous determination of fluoride, chloride, orthophosphate, bromide, nitrate, and sulfate in precipitation samples is described. Computer programs provided with the integrator are modified to implement ion-chromatographic data reduction and data storage. The liquid-flow scheme for the ion chromatograph is changed by the addition of a second suppressor column for greater analytical capacity. An additional valve enables selection of either suppressor column for analysis, while the other column is regenerated and stabilized with concentrated eluent. Minimum limits of detection and quantitation for each anion are calculated; these limits are a function of suppressor exhaustion. Precision for replicate analyses of six precipitation samples for fluoride, chloride, orthophosphate, nitrate, and sulfate ranged from 0.003 to 0.027 milligrams per liter. To determine the accuracy of results, the same samples were spiked with known concentrations of the above-mentioned anions. Average recovery was 108 percent.
Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach
NASA Astrophysics Data System (ADS)
Pinto, Rafael S.; Saa, Alberto
2015-12-01
A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic form ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
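A minimal sketch of the hill-climb rewiring idea, using networkx: greedily accept edge swaps that increase ω^T L ω while keeping the number of edges fixed. This is an illustrative reimplementation under assumed parameters, not the authors' algorithm or code.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n, m = 30, 60
g = nx.gnm_random_graph(n, m, seed=0)
omega = rng.normal(0, 1, n)              # natural frequencies

def objective(graph):
    """omega^T L omega, the quadratic form to be maximized."""
    L = nx.laplacian_matrix(graph).toarray()
    return omega @ L @ omega

best = objective(g)
for _ in range(2000):
    u, v = list(g.edges())[rng.integers(g.number_of_edges())]
    a, b = rng.integers(n), rng.integers(n)
    if a == b or g.has_edge(a, b):
        continue
    g.remove_edge(u, v); g.add_edge(a, b)   # trial edge swap
    new = objective(g)
    if new > best and nx.is_connected(g):   # accept only improving, connected graphs
        best = new
    else:
        g.remove_edge(a, b); g.add_edge(u, v)  # revert the swap

print("final omega^T L omega:", round(best, 2))
```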
Analysis of the evolution of the instability process of a coastal cavern
NASA Astrophysics Data System (ADS)
Lollino, P.; Reina, A.
2012-04-01
This work concerns the interpretation of the potential failure mechanism of a large natural cavern located along the rocky cliffs of the town of Polignano a Mare (Apulia, Southern Italy), under an intensely urbanised area. The cavern, which lies at sea level, was formed by an intense process of salt and wave erosion, acting mainly during sea storms, within a rock mass formed of a lower stratified limestone mass and an upper soft calcarenite mass. Therefore, the influence of climatic factors and of the upward erosion process within the cavern has been specifically investigated. At present, the thickness of the cave roof, which has a dome shape, is less than 10 metres in the centre, and several buildings are founded on the ground surface above. In 2006 a large calcarenite block, of about 1.5 m diameter, fell from the roof of the cavern; afterwards, field and laboratory investigations as well as both simple analytical methods and elasto-plastic numerical modelling were carried out in order to assess the current state of the roof and to interpret the effects of the potential evolution of the inner erosion and of the local failure processes of the cave. A detailed geo-structural survey was first carried out, together with laboratory and in-situ testing for the geomechanical characterisation of the rock materials and of the corresponding joints. An analysis of the sea storms that occurred within the observation period has also been performed, considering daily rainfall and wind data. The rate of erosion has been measured by means of special nets installed at sea level to collect the material falling from the roof; the corresponding measurements, which lasted for about one year, indicate an erosion rate of at least 0.005 m3/month. A structural monitoring system, including extensometers and joint-meters, was also installed at several points of the cave in order to measure any block displacements within the cavern; the results show some correlations of the logged data with the occurrence of sea storms and with weather factors in general. The results of both simplified analytical methods and numerical analysis show that the cave is at present stable. Numerical modelling was also aimed at defining the conditions under which general failure of the cave might occur. In particular, sensitivity analyses have been carried out to assess the influence of specific factors, such as the progressive reduction of the roof thickness due to erosion or the gradual reduction of the strength properties of the calcarenite due to weathering processes.
Lombardi, Giovanni; Sansoni, Veronica; Banfi, Giuseppe
2017-08-01
In the last few years, a growing number of molecules have been associated with an endocrine function of the skeletal muscle. Circulating myokine levels, in turn, have been associated with several pathophysiological conditions, including cardiovascular ones. However, data from different studies are often not completely comparable, or are even discordant. This is due, at least in part, to the whole set of circumstances related to the preparation of the patient prior to blood sampling, the blood sampling procedure itself, and sample processing and/or storage. This entire process constitutes the pre-analytical phase. The importance of the pre-analytical phase is often not considered; however, in routine diagnostics, about 70% of errors occur in this phase. Moreover, errors made during the pre-analytical phase are carried over into the analytical phase and affect the final output. In research, for example, when samples are collected over a long period and by different laboratories, standardized procedures for sample collection and correct procedures for sample storage are essential. In this review, we discuss the pre-analytical variables potentially affecting the measurement of myokines with cardiovascular functions.
Modeling and Characterization of Dynamic Failure of Soda-lime Glass Under High Speed Impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenning N.; Sun, Xin; Chen, Weinong W.
2012-05-27
In this paper, the impact-induced dynamic failure of a soda-lime glass block is studied using an integrated experimental/analytical approach. The Split Hopkinson Pressure Bar (SHPB) technique is first used to conduct dynamic failure tests of soda-lime glass. The damage growth patterns and stress histories are reported for various glass specimen designs. Making use of a continuum damage mechanics (CDM)-based constitutive model, the initial failure and subsequent stiffness reduction of glass are simulated and investigated. Explicit finite element analyses are used to simulate the glass specimen impact event. A maximum shear stress-based damage evolution law is used in describing the glass damage process under combined compression/shear loading. The impact test results are used to quantify the critical shear stress for the soda-lime glass under examination.
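As a conceptual aid, the sketch below implements a generic 1-D continuum-damage stiffness-reduction loop of the kind described: damage D accumulates once a shear-stress surrogate exceeds a critical value, and the effective modulus degrades as (1 - D)·E. The evolution law and every parameter are invented for illustration and are not the paper's calibrated model.

```python
# Illustrative 1-D CDM loop; all parameters are assumptions, not calibrated values.
E, tau_c, k, dt = 70e9, 0.4e9, 2e-4, 1e-7   # Pa, Pa, 1/(Pa*s), s
strain_rate = 1.0e3                          # 1/s, SHPB-like loading rate

D, eps = 0.0, 0.0
for _ in range(2000):
    eps += strain_rate * dt
    tau = 0.5 * (1.0 - D) * E * eps                    # crude maximum-shear surrogate
    D = min(1.0, D + k * max(0.0, tau - tau_c) * dt)   # damage grows above tau_c

E_eff = (1.0 - D) * E                                  # degraded stiffness
print(f"final damage D = {D:.2f}, effective modulus = {E_eff / 1e9:.1f} GPa")
```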
Towards improving the ethics of ecological research.
Crozier, G K D; Schulte-Hostedde, Albrecht I
2015-06-01
We argue that the ecological research community should develop a plan for improving the ethical consistency and moral robustness of the field. We propose a particular ethics strategy: an ongoing process of collective ethical reflection that the community of ecological researchers, with the cooperation of applied ethicists and philosophers of biology, can use to address the needs we identify. We suggest a particular set of conceptual tools (in the form of six core values: freedom, fairness, well-being, replacement, reduction, and refinement) and analytic tools (in the form of decision-theoretic software, 1000Minds) that, we argue, collectively have the resources to provide an empirically grounded and conceptually complete foundation for an ethics strategy for ecological research. We illustrate our argument with information gathered from a survey of ecologists conducted at the 2013 meeting of the Canadian Society of Ecology and Evolution.
Effect of solar-cell junction geometry on open-circuit voltage
NASA Technical Reports Server (NTRS)
Weizer, V. G.; Godlewski, M. P.
1985-01-01
Simple analytical models have been found that adequately describe the voltage behavior of both the stripe junction and dot junction grating cells as a function of junction area. While the voltage in the former case is found to be insensitive to junction area reduction, significant voltage increases are shown to be possible for the dot junction cell. With regard to cells in which the junction area has been increased in a quest for better performance, it was found that (1) texturation does not affect the average saturation current density J0, indicating that the texturation process is equivalent to a simple extension of junction area by a factor of square root of 3 and (2) the vertical junction cell geometry produces a sizable decrease in J0 that, unfortunately, is more than offset by the effects of attendant areal increases.
NASA Astrophysics Data System (ADS)
Ahmad, Muthanna
2016-10-01
This work describes a new application of the solvothermal method, based on microwave heating, for the synthesis of nano- and microparticles of selenium. The reaction of selenium with hydrofluoric acid on the silicon surface is induced by microwave irradiation at a pressure of 60 bar and a temperature of 160 °C. This method allows the deposition of spherical-like particles on the in situ etched silicon surface. The size of the deposited selenium spheres ranges from tens of nanometers up to tens of micrometers. The morphology and composition of the deposited selenium were analyzed by various analytical techniques. The formation dynamics of the spherical structures are explained on the basis of the reduction of selenium species by hydrogen inside gas bubbles that are generated on the silicon surface by the etching process.
Drude conductivity exhibited by chemically synthesized reduced graphene oxide
NASA Astrophysics Data System (ADS)
Younas, Daniyal; Javed, Qurat-ul-Ain; Fatima, Sabeen; Kalsoom, Riffat; Abbas, Hussain; Khan, Yaqoob
2017-09-01
Electrical conductance in graphene layers exhibiting a Drude-like response due to massless Dirac fermions has been well explained both theoretically and experimentally. In this paper, the Drude-like electrical conductivity response of reduced graphene oxide synthesized by a chemical route is presented. A method slightly different from conventional methods is used to synthesize graphene oxide, which is then converted to reduced graphene oxide. Various analytic techniques were employed to verify the successful oxidation and reduction steps in the process, and were also used to measure various parameters such as layer thickness and conductivity. The obtained graphene oxide has very thin layers, around 13 nm thick on average, and the reduced graphene oxide has an average thickness below 20 nm. The conductivity of the reduced graphene oxide was observed to have a Drude-like response, which is explained on the basis of the Drude model for conductors.
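For reference, the Drude form in question is the textbook frequency-dependent conductivity (included here for orientation, not as a result of the paper):

\[
\sigma(\omega) \;=\; \frac{\sigma_0}{1 - i\omega\tau}, \qquad \sigma_0 = \frac{n e^{2}\tau}{m},
\]

where n is the carrier density, τ the scattering time, and m the (effective) carrier mass.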
On integrating Jungian and other theories.
Sedgwick, David
2015-09-01
This paper consists of reflections on some of the processes, subtleties, and 'eros' involved in attempting to integrate Jungian and other analytic perspectives. Assimilation of other theoretical viewpoints has a long history in analytical psychology, beginning when Jung met Freud. Since its inception, the Journal of Analytical Psychology has provided a forum for theoretical syntheses and comparative psychoanalysis. Such attempts at synthesizing other theories represent analytical psychology itself trying to individuate.
Cumulative biological impacts framework for solar energy projects in the California Desert
Davis, Frank W.; Kreitler, Jason R.; Soong, Oliver; Stoms, David M.; Dashiell, Stephanie; Hannah, Lee; Wilkinson, Whitney; Dingman, John
2013-01-01
This project developed analytical approaches, tools and geospatial data to support conservation planning for renewable energy development in the California deserts. Research focused on geographical analysis to avoid, minimize and mitigate the cumulative biological effects of utility-scale solar energy development. A hierarchical logic model was created to map the compatibility of new solar energy projects with current biological conservation values. The research indicated that the extent of compatible areas is much greater than the estimated land area required to achieve 2040 greenhouse gas reduction goals. Species distribution models were produced for 65 animal and plant species that were of potential conservation significance to the Desert Renewable Energy Conservation Plan process. These models mapped historical and projected future habitat suitability using 270 meter resolution climate grids. The results were integrated into analytical frameworks to locate potential sites for offsetting project impacts and evaluating the cumulative effects of multiple solar energy projects. Examples applying these frameworks in the Western Mojave Desert ecoregion show the potential of these publicly-available tools to assist regional planning efforts. Results also highlight the necessity to explicitly consider projected land use change and climate change when prioritizing areas for conservation and mitigation offsets. Project data, software and model results are all available online.
Acrylamide formation in different foods and potential strategies for reduction.
Stadler, Richard H
2005-01-01
This paper summarizes the progress made to date on acrylamide research pertaining to analytical methods, mechanisms of formation, and mitigation research in the major food categories. Initial difficulties with the establishment of reliable analytical methods have today in most cases been overcome, but challenges remain in the need to develop simple and rapid test methods. Several researchers have identified that the main pathway of formation of acrylamide in foods is linked to the Maillard reaction and in particular the amino acid asparagine. Decarboxylation of the resulting Schiff base is a key step, and the reaction product may furnish acrylamide either directly or via 3-aminopropionamide. An alternative proposal is that the corresponding decarboxylated Amadori compound may release acrylamide by a beta-elimination reaction. Many experimental trials have been conducted in different foods, and a number of possible measures have been identified to lower the amounts of acrylamide in food. The validity of laboratory trials must, however, be assessed under actual food processing conditions. Some progress in lowering acrylamide in certain food categories has been achieved, but it can at this stage be considered marginal. However, any options chosen to reduce acrylamide must be technologically feasible and must not negatively impact the quality and safety of the final product.
Shih, Tsung-Ting; Hsieh, Cheng-Chuan; Luo, Yu-Ting; Su, Yi-An; Chen, Ping-Hung; Chuang, Yu-Chen; Sun, Yuh-Chang
2016-04-15
Herein, a hyphenated system combining a high-throughput solid-phase extraction (htSPE) microchip with inductively coupled plasma-mass spectrometry (ICP-MS) for rapid determination of trace heavy metals was developed. Rather than performing multiple analyses in parallel to enhance analytical throughput, we improved the processing speed for individual samples by increasing the operating flow rate during SPE procedures. To this end, an innovative device combining a micromixer and a multi-channeled extraction unit was designed. Furthermore, a programmable valve manifold was used to interface the developed microchip and the ICP-MS instrumentation in order to fully automate the system, leading to a dramatic reduction in operation time and human error. Under the optimized operating conditions for the established system, detection limits of 1.64-42.54 ng L⁻¹ for the analyte ions were achieved. Validation procedures demonstrated that the developed method could be satisfactorily applied to the determination of trace heavy metals in natural water. Each analysis could be accomplished within just 186 s using the established system. This represents, to the best of our knowledge, an unprecedented speed for the analysis of trace heavy metal ions.
LIBS analysis of artificial calcified tissues matrices.
Kasem, M A; Gonzalez, J J; Russo, R E; Harith, M A
2013-04-15
In most laser-based analytical methods, the reproducibility of quantitative measurements strongly depends on maintaining uniform and stable experimental conditions. For LIBS analysis this means that for accurate estimation of elemental concentration, using the calibration curves obtained from reference samples, the plasma parameters have to be kept as constant as possible. In addition, calcified tissues such as bone are normally less "tough" in their texture than many samples, especially metals. Thus, the ablation process could change the sample morphological features rapidly, and result in poor reproducibility statistics. In the present work, three artificial reference sample sets have been fabricated. These samples represent three different calcium based matrices, CaCO3 matrix, bone ash matrix and Ca hydroxyapatite matrix. A comparative study of UV (266 nm) and IR (1064 nm) LIBS for these three sets of samples has been performed under similar experimental conditions for the two systems (laser energy, spot size, repetition rate, irradiance, etc.) to examine the wavelength effect. The analytical results demonstrated that UV-LIBS has improved reproducibility, precision, stable plasma conditions, better linear fitting, and the reduction of matrix effects. Bone ash could be used as a suitable standard reference material for calcified tissue calibration using LIBS with a 266 nm excitation wavelength.
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either “analytic” or “iterative” techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.
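To make the contrast concrete, the snippet below sketches the simplest member of the iterative family, a Kaczmarz/ART loop that repeatedly projects the image estimate onto the hyperplane defined by each measured projection. The system matrix here is a random toy model, not a real CT geometry.

```python
import numpy as np

rng = np.random.default_rng(2)
x_true = rng.random(100)        # unknown image (flattened)
A = rng.random((300, 100))      # toy projection/imaging model
b = A @ x_true                  # measured projections

x = np.zeros(100)
for sweep in range(20):
    for i in range(A.shape[0]):
        a = A[i]
        # ART row update: project x onto the hyperplane a.x = b[i].
        x += (b[i] - a @ x) / (a @ a) * a

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3e}")
```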
Analytic and Heuristic Processing Influences on Adolescent Reasoning and Decision-Making.
ERIC Educational Resources Information Center
Klaczynski, Paul A.
2001-01-01
Examined the relationship between age and the normative/descriptive gap--the discrepancy between actual reasoning and traditional standards for reasoning. Found that middle adolescents performed closer to normative ideals than early adolescents. Factor analyses suggested that performance was based on two processing systems, analytic and heuristic…
Functional Analytic Psychotherapy for Interpersonal Process Groups: A Behavioral Application
ERIC Educational Resources Information Center
Hoekstra, Renee
2008-01-01
This paper is an adaptation of Kohlenberg and Tsai's work, Functional Analytical Psychotherapy (1991), or FAP, to group psychotherapy. This author applied a behavioral rationale for interpersonal process groups by illustrating key points with a hypothetical client. Suggestions are also provided for starting groups, identifying goals, educating…
Organics Characterization Of DWPF Alternative Reductant Simulants, Glycolic Acid, And Antifoam 747
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, T. L.; Wiedenman, B. J.; Lambert, D. P.
The present study examines the fate of glycolic acid and other organics added in the Chemical Processing Cell (CPC) of the Defense Waste Processing Facility (DWPF) as part of the glycolic alternate flowsheet. Adoption of this flowsheet is expected to provide certain benefits in terms of a reduction in the processing time, a decrease in hydrogen generation, simplification of chemical storage and handling issues, and an improvement in the processing characteristics of the waste stream, including an increase in the amount of nitrate allowed in the CPC process. Understanding the fate of organics in this flowsheet is imperative because tank farm waste processed in the CPC is eventually immobilized by vitrification; thus, the type and amount of organics present in the melter feed may affect optimal melt processing and the quality of the final glass product, as well as alter flammability calculations on the DWPF melter off gas. To evaluate the fate of the organic compounds added as part of the glycolic flowsheet, mainly glycolic acid and antifoam 747, samples of simulated waste that was processed using the DWPF CPC protocol for tank farm sludge feed were generated and analyzed for organic compounds using a variety of analytical techniques at the Savannah River National Laboratory (SRNL). These techniques included Ion Chromatography (IC), Gas Chromatography-Mass Spectrometry (GC-MS), Inductively Coupled Plasma-Atomic Emission Spectroscopy (ICP-AES), and Nuclear Magnetic Resonance (NMR) Spectroscopy. A set of samples was also sent to the Catholic University of America Vitreous State Laboratory (VSL) for analysis by NMR Spectroscopy at the University of Maryland, College Park. Analytical methods developed and executed at SRNL collectively showed that glycolic acid was the most prevalent organic compound in the supernatants of the Slurry Mix Evaporator (SME) products examined. Furthermore, the studies suggested that commercially available glycolic acid contained minor amounts of impurities, such as formic and diglycolic acid, that were then carried over into the SME products. Oxalic acid present in the simulated tank farm waste was also detected. Finally, numerous other compounds, at low concentrations, were observed in etheric extracts of aqueous supernate solutions of the SME samples and are thought to be breakdown products of antifoam 747. The data collectively suggest that although the addition of glycolic acid and antifoam 747 will introduce a number of impurities and breakdown products into the melter feed, the concentrations of these organics are expected to remain low and may not significantly impact REDOX or off-gas flammability predictions. In the SME products examined presently, which contained variant amounts of glycolic acid and antifoam 747, no unexpected organic degradation product was found at concentrations above 500 mg/kg, a reasonable threshold concentration for an organic compound to be taken into account in the REDOX modeling. This statement does not include oxalic or formic acid, which were sometimes observed above 500 mg/kg, or acetic acid, which has an analytical detection limit of 1250 mg/kg due to the high glycolate concentration in the SME products tested. Once a finalized REDOX equation has been developed and implemented, REDOX properties of known organic species will be determined and their impact assessed.
Although no immediate concerns arose during the study in terms of a negative impact of organics present in SME products of the glycolic flowsheet, the evidence of antifoam degradation suggests that an alternative to antifoam 747 is worth considering. The determination and implementation of an antifoam that is more resistant to hydrolysis would have benefits such as increasing its effectiveness over time and reducing the generation of degradation products.
Developmental changes in analytic and holistic processes in face perception.
Joseph, Jane E; DiBartolo, Michelle D; Bhatt, Ramesh S
2015-01-01
Although infants demonstrate sensitivity to some kinds of perceptual information in faces, many face capacities continue to develop throughout childhood. One debate concerns the degree to which children perceive faces analytically versus holistically and how these processes undergo developmental change. In the present study, school-aged children and adults performed a perceptual matching task with upright and inverted face and house pairs that varied in the similarity of featural or 2nd-order configural information. Holistic processing was operationalized as the degree of serial processing when discriminating faces and houses [i.e., increased reaction time (RT) as more features or spacing relations were shared between stimuli]. Analytical processing was operationalized as the degree of parallel processing (no change in RT as a function of greater similarity of features or spatial relations). Adults showed the most evidence for holistic processing (most strongly for 2nd-order faces), and holistic processing was weaker for inverted faces and houses. Younger children (6-8 years), in contrast, showed analytical processing across all experimental manipulations. Older children (9-11 years) showed an intermediate pattern, with a trend toward holistic processing of 2nd-order faces like adults but parallel processing in other experimental conditions like younger children. These findings indicate that holistic face representations emerge around 10 years of age. In adults, both 2nd-order and featural information are incorporated into holistic representations, whereas older children incorporate only 2nd-order information. Holistic processing was not evident in younger children. Hence, the development of holistic face representations relies on 2nd-order processing initially and then incorporates featural information by adulthood.
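The serial-versus-parallel operationalization above amounts to a slope test: if RT rises with the number of shared features or spacing relations, processing is inferred to be serial (holistic); a flat slope suggests parallel (analytic) processing. A minimal sketch of that test, assuming a hypothetical per-trial data file (column names are illustrative, not from the study):
```python
# Sketch: classify processing style from the RT-by-similarity slope.
# Assumes trials.csv with columns rt_ms, shared_features (0-3); illustrative only.
import numpy as np
from scipy import stats

data = np.genfromtxt("trials.csv", delimiter=",", names=True)
slope, intercept, r, p, se = stats.linregress(
    data["shared_features"], data["rt_ms"]
)

# A reliably positive slope = RT rises with similarity -> serial/holistic;
# a slope indistinguishable from zero -> parallel/analytic.
if p < 0.05 and slope > 0:
    print(f"holistic (serial): +{slope:.1f} ms per shared feature")
else:
    print("analytic (parallel): no reliable RT increase with similarity")
```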
A device for automatic photoelectric control of the analytical gap for emission spectrographs
Dietrich, John A.; Cooley, Elmo F.; Curry, Kenneth J.
1977-01-01
A photoelectric device has been built that automatically controls the analytical gap between electrodes during the excitation period. The control device allows precise control of the analytical gap during the arcing of samples, resulting in better precision of analysis.
42 CFR 493.17 - Test categorization.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...
42 CFR 493.17 - Test categorization.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...
Bringing Business Intelligence to Health Information Technology Curriculum
ERIC Educational Resources Information Center
Zheng, Guangzhi; Zhang, Chi; Li, Lei
2015-01-01
Business intelligence (BI) and healthcare analytics are emerging technologies that provide analytical capability to help the healthcare industry improve service quality, reduce cost, and manage risks. However, such a component on analytical healthcare data processing is largely missing from current healthcare information technology (HIT) or health…
Reading Multimodal Texts: Perceptual, Structural and Ideological Perspectives
ERIC Educational Resources Information Center
Serafini, Frank
2010-01-01
This article presents a tripartite framework for analyzing multimodal texts. The three analytical perspectives presented include: (1) perceptual, (2) structural, and (3) ideological analytical processes. Using Anthony Browne's picturebook "Piggybook" as an example, assertions are made regarding what each analytical perspective brings to the…
An Analysis of Machine- and Human-Analytics in Classification.
Tam, Gary K L; Kothari, Vivek; Chen, Min
2017-01-01
In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the "bag of features" approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.
Microfluidic-Based sample chips for radioactive solutions
Tripp, J. L.; Law, J. D.; Smith, T. E.; ...
2015-01-01
Historical nuclear fuel cycle process sampling techniques required sample volumes ranging in the tens of milliliters. The radiation levels experienced by analytical personnel and equipment, in addition to the waste volumes generated from analysis of these samples, have been significant. These sample volumes also impacted accountability inventories of required analytes during process operations. To mitigate radiation dose and other issues associated with the historically larger sample volumes, a microcapillary sample chip was chosen for further investigation. The ability to obtain microliter volume samples, coupled with a remote automated means of sample loading, tracking, and transporting to the analytical instrument, would greatly improve analytical efficiency while reducing both personnel exposure and radioactive waste volumes. Sample chip testing was completed to determine the accuracy, repeatability, and issues associated with the use of microfluidic sample chips used to supply µL sample volumes of lanthanide analytes dissolved in nitric acid for introduction to an analytical instrument for elemental analysis.
An integrated model of clinical reasoning: dual-process theory of cognition and metacognition.
Marcum, James A
2012-10-01
Clinical reasoning is an important component of providing quality medical care. The aim of the present paper is to develop a model of clinical reasoning that integrates both the non-analytic and analytic processes of cognition, along with metacognition. The dual-process theory of cognition (system 1 non-analytic and system 2 analytic processes) and metacognition theory are used to develop an integrated model of clinical reasoning. In the proposed model, clinical reasoning begins with system 1 processes, in which the clinician assesses a patient's presenting symptoms, as well as other clinical evidence, to arrive at a differential diagnosis. Additional clinical evidence, if necessary, is acquired and analysed using system 2 processes to assess the differential diagnosis, until a clinical decision is made diagnosing the patient's illness and determining how best to proceed therapeutically. Importantly, the outcome of these processes feeds back, through metacognition's monitoring function, either to reinforce or to alter cognitive processes, which, in turn, synergistically enhances the clinician's ability to reason quickly and accurately in future consultations. The proposed integrated model has distinct advantages over other models proposed in the literature for explicating clinical reasoning. Moreover, it has important implications for addressing the paradoxical relationship between experience and expertise, as well as for designing a curriculum to teach clinical reasoning skills. © 2012 Blackwell Publishing Ltd.
Exhaled breath condensate – from an analytical point of view
Dodig, Slavica; Čepelak, Ivana
2013-01-01
Over the past three decades, the goal of many researchers has been the analysis of exhaled breath condensate (EBC) as a noninvasively obtained sample. Total quality in the laboratory diagnostic process for EBC analysis was investigated across the pre-analytical (formation, collection, storage of EBC), analytical (sensitivity of applied methods, standardization) and post-analytical (interpretation of results) phases. EBC analysis is still used as a research tool. Limitations in the pre-analytical, analytical, and post-analytical phases of EBC analysis are numerous: concentrations of EBC constituents are low, single-analyte methods lack sensitivity, multi-analyte methods have not been fully explored, and reference values are not established. When all pre-analytical, analytical and post-analytical requirements are met, EBC biomarkers and biomarker patterns can be selected, and EBC analysis can hopefully be used in clinical practice, both in diagnosis and in the longitudinal follow-up of patients, resulting in better disease outcomes. PMID:24266297
Theiner, Sarah; Grabarics, Márkó; Galvez, Luis; Varbanov, Hristo P; Sommerfeld, Nadine S; Galanski, Markus; Keppler, Bernhard K; Koellensperger, Gunda
2018-04-17
The potential advantage of platinum(IV) complexes as alternatives to classical platinum(II)-based drugs relies on their kinetic stability in the body before reaching the tumor site and on their activation by reduction inside cancer cells. In this study, an analytical workflow was developed to investigate the reductive biotransformation and kinetic inertness of platinum(IV) prodrugs comprising different ligand coordination spheres (and, correspondingly, different lipophilicity and redox behavior) in whole human blood. The distribution of platinum(IV) complexes between blood pellets and plasma was determined by inductively coupled plasma-mass spectrometry (ICP-MS) after microwave digestion. An analytical approach based on reversed-phase (RP)-ICP-MS was used to monitor the parent compound and the formation of metabolites using two different extraction procedures. The ligand coordination sphere of the platinum(IV) complexes had a significant impact on their accumulation in red blood cells and on their degree of kinetic inertness in whole human blood. The most lipophilic platinum(IV) compound, featuring equatorial chlorido ligands, showed pronounced penetration into blood cells and rapid reductive biotransformation. In contrast, the more hydrophilic platinum(IV) complexes with a carboplatin and oxaliplatin core exhibited kinetic inertness on a pharmacologically relevant time scale, with notable amounts of the compound accumulated in the plasma fraction.
Kumar, B Vinodh; Mohan, Thuthi
2018-01-01
Six Sigma is one of the most popular quality management tools employed for process improvement. Six Sigma methods are usually applied when the outcome of a process can be measured. This study was done to assess the performance of individual biochemical parameters on a sigma scale by calculating sigma metrics for individual parameters and to follow the Westgard guidelines on the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and the data required were extracted between July 2015 and June 2016 from a secondary care government hospital in Chennai. The data obtained for the study were the IQC coefficient of variation percentage and the External Quality Assurance Scheme (EQAS) bias percentage for 16 biochemical parameters. For level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein cholesterol) showed an ideal performance of ≥6 sigma, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma; for level 2 IQC, the same four analytes as level 1 showed a performance of ≥6 sigma, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma. For all analytes below 6 sigma, the quality goal index (QGI) was <0.8, indicating that the area requiring improvement was imprecision, except for cholesterol, whose QGI >1.2 indicated inaccuracy. This study shows that the sigma metric is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
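The sigma metric above is conventionally computed as (TEa − |bias%|)/CV%, and the quality goal index as |bias%|/(1.5 × CV%); the 0.8/1.2 QGI cut-offs mirror those in the abstract. A minimal sketch with illustrative numbers (the TEa value is an assumption, not taken from the study):
```python
# Sketch: sigma metric and quality goal index (QGI) for one analyte.
# Conventional formulas: sigma = (TEa - |bias|) / CV ; QGI = |bias| / (1.5 * CV).
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    return (tea_pct - abs(bias_pct)) / cv_pct

def quality_goal_index(bias_pct: float, cv_pct: float) -> float:
    return abs(bias_pct) / (1.5 * cv_pct)

# Illustrative values only; 10% TEa for cholesterol follows the commonly cited CLIA limit.
sigma = sigma_metric(tea_pct=10.0, bias_pct=4.2, cv_pct=2.1)
qgi = quality_goal_index(bias_pct=4.2, cv_pct=2.1)
problem = "imprecision" if qgi < 0.8 else ("inaccuracy" if qgi > 1.2 else "both")
print(f"sigma = {sigma:.1f}, QGI = {qgi:.2f} -> improve {problem}")
```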
Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason
2016-01-01
With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM3 ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs. PMID:27983713
Numerical and analytical simulation of the production process of ZrO2 hollow particles
NASA Astrophysics Data System (ADS)
Safaei, Hadi; Emami, Mohsen Davazdah
2017-12-01
In this paper, the production process of hollow particles from agglomerated particles is addressed analytically and numerically. The important parameters affecting this process, in particular the initial porosity level of the particles and the plasma gun type, are investigated. The analytical model adopts a combination of quasi-steady thermal equilibrium and mechanical balance. In the analytical model, the possibility of a solid core existing in agglomerated particles is examined. In this model, a range of particle diameters (50 μm ≤ D_{p0} ≤ 160 μm) and various initial porosities (0.2 ≤ p ≤ 0.7) are considered. The numerical model employs the VOF technique for two-phase compressible flows. The production process of hollow particles from agglomerated particles is simulated, considering an initial diameter of D_{p0} = 60 μm and initial porosities of p = 0.3, p = 0.5, and p = 0.7. Simulation results of the analytical model indicate that the solid core diameter is independent of the initial porosity, whereas the thickness of the particle shell strongly depends on the initial porosity. In both models, a hollow particle can hardly develop at small initial porosity values (p < 0.3), while the particle disintegrates at high initial porosity values (p > 0.6).
Importance of implementing an analytical quality control system in a core laboratory.
Marques-Garcia, F; Garcia-Codesal, M F; Caro-Narros, M R; Contreras-SanFeliciano, T
2015-01-01
The aim of the clinical laboratory is to provide useful information for the screening, diagnosis and monitoring of disease. The laboratory should ensure the quality of the extra-analytical and analytical processes against set criteria. To do this, it develops and implements a system of internal quality control designed to detect errors, and compares its data with those of other laboratories through external quality control. In this way it has a tool to detect whether the objectives set are being fulfilled and, in case of errors, to take corrective actions and ensure the reliability of the results. This article sets out to describe the design and implementation of an internal quality control protocol, assessed at six-month intervals to determine compliance with pre-determined specifications (Stockholm Consensus(1)). A total of 40 biochemical and 15 immunochemical methods were evaluated using three different control materials. Next, a standard operating procedure was planned to develop a system of internal quality control that included calculating the error of the analytical process, setting quality specifications, and verifying compliance. The quality control data were then statistically summarized as means, standard deviations, and coefficients of variation, as well as systematic, random, and total errors. The quality specifications were then fixed and the operational rules to apply in the analytical process were calculated. Finally, our data were compared with those of other laboratories through an external quality assurance program. The development of an analytical quality control system is a highly structured process. It should be designed to detect errors that compromise the stability of the analytical process. The laboratory should review its quality indicators and its systematic, random and total error at regular intervals to ensure that they meet pre-determined specifications and, if not, apply the appropriate corrective actions. Copyright © 2015 SECA. Published by Elsevier España. All rights reserved.
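The statistics listed above (mean, SD, CV, and systematic, random and total error) follow standard definitions; one common convention for total analytical error is TE = |bias| + 1.65 × CV. A brief sketch under that assumption, with illustrative control results:
```python
# Sketch: basic internal QC statistics for one control level.
# Assumes the common convention TE = |bias%| + 1.65 * CV% (conventions vary).
import statistics

measurements = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0]   # illustrative control results
target = 5.0                                     # assigned control value

mean = statistics.fmean(measurements)
sd = statistics.stdev(measurements)
cv_pct = 100 * sd / mean                         # random error (imprecision)
bias_pct = 100 * (mean - target) / target        # systematic error (inaccuracy)
total_error_pct = abs(bias_pct) + 1.65 * cv_pct  # total analytical error

print(f"CV = {cv_pct:.2f}%, bias = {bias_pct:.2f}%, TE = {total_error_pct:.2f}%")
```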
Relationship between Aircraft Noise Contour Area and Noise Levels at Certification Points
NASA Technical Reports Server (NTRS)
Powell, Clemans A.
2003-01-01
The use of sound exposure level contour area reduction has been proposed as an alternative or supplemental metric of progress and success for the NASA Quiet Aircraft Technology program, which currently uses the average of predicted noise reductions at three community locations. As the program has expanded to include reductions in airframe noise as well as reduction due to optimization of operating procedures for lower noise, there is concern that the three-point methodology may not represent a fair measure of benefit to airport communities. This paper addresses several topics related to this proposal: (1) an analytical basis for a relationship between certification noise levels and noise contour areas for departure operations is developed, (2) the relationship between predicted noise contour area and the noise levels measured or predicted at the certification measurement points is examined for a wide range of commercial and business aircraft, and (3) reductions in contour area for low-noise approach scenarios are predicted and equivalent reductions in source noise are determined.
Ishibashi, Midori
2015-01-01
Cost, speed, and quality are the three important factors recently indicated by the Ministry of Health, Labour and Welfare (MHLW) for the purpose of accelerating clinical studies. Against this background, the importance of laboratory tests is increasing, especially in the evaluation of clinical study participants' entry and safety, and drug efficacy. To assure the quality of laboratory tests, providing high-quality laboratory testing is mandatory. For adequate quality assurance in laboratory tests, quality control in the three fields of pre-analytical, analytical, and post-analytical processes is extremely important. There are, however, no detailed written requirements concerning specimen collection, handling, preparation, storage, and shipping. Most laboratory tests for clinical studies are performed onsite in a local laboratory; however, some laboratory tests are done in offsite central laboratories after specimen shipping. Individual and inter-individual variations are well-known factors affecting laboratory tests. Besides these factors, standardizing specimen collection, handling, preparation, storage, and shipping may improve and maintain the high quality of clinical studies in general. Furthermore, the analytical method, units, and reference intervals are also important factors. It is concluded that, to overcome the problems arising from pre-analytical processes, it is necessary to standardize specimen handling in a broad sense.
The effect of analytic and experiential modes of thought on moral judgment.
Kvaran, Trevor; Nichols, Shaun; Sanfey, Alan
2013-01-01
According to dual-process theories, moral judgments are the result of two competing processes: a fast, automatic, affect-driven process and a slow, deliberative, reason-based process. Accordingly, these models make clear and testable predictions about the influence of each system. Although a small number of studies have attempted to examine each process independently in the context of moral judgment, no study has yet tried to experimentally manipulate both processes within a single study. In this chapter, a well-established "mode-of-thought" priming technique was used to place participants in either an experiential/emotional or analytic mode while completing a task in which participants provide judgments about a series of moral dilemmas. We predicted that individuals primed analytically would make more utilitarian responses than control participants, while emotional priming would lead to less utilitarian responses. Support was found for both of these predictions. Implications of these findings for dual-process theories of moral judgment will be discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
Chakraborty, Kajal; Joseph, Deepu
2015-01-28
Crude Sardinella longiceps oil was refined in stages: degumming, neutralization, bleaching, and deodorization. The efficiency of these processes was evaluated on the basis of free fatty acid (FFA), peroxide (PV), p-anisidine (pAV), total oxidation (TOTOX), and thiobarbituric acid reactive species (TBARS) values, Lovibond CIE-L*a*b* color analyses, and ¹H NMR and GC-MS experiments. The utility of NMR-based proton signal characteristics as a new analytical tool for understanding the signature peaks and relative abundance of different fatty acids and for monitoring the refining of fish oil has been demonstrated. Phosphoric acid (1%) was found to be an effective degumming reagent, yielding oil with the lowest FFA, PV, pAV, TOTOX, and TBARS values and the highest color reduction. A significant reduction in the content of hydrocarbon functionalities, shown by the decrease in the proton integral in the characteristic ¹H NMR region, was demonstrated when using 1% H3PO4 during the degumming process. A combination (1.25:3.75%) of activated charcoal and Fuller's earth at 3% total concentration with a stirring time of 40 min was found to be effective in bleaching the sardine oil. This study demonstrated that unfavorable odor-causing components, particularly low molecular weight carbonyl compounds, could be successfully removed by the refining process. The alkane-dienals/alkanes, which cause unfavorable fishy odors, were successfully removed by distillation (100 °C) under vacuum with aqueous acetic acid solution (0.25 N) to obtain higher-quality refined sardine oil, a rich source of essential fatty acids with improved oxidative stability. The present study demonstrated that the four-stage refinement of sardine oil resulted in a significant improvement in quality characteristics and nutritional value, particularly n-3 PUFAs, yielding fish oil suitable for use in the pharmaceutical and functional food industries.
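Of the oxidation indices above, TOTOX is conventionally derived from the other two as TOTOX = 2·PV + pAV, weighting primary (peroxide) against secondary (p-anisidine) oxidation products. A small sketch of a stage-by-stage comparison on that basis (the values are illustrative placeholders, not the paper's data):
```python
# Sketch: TOTOX = 2*PV + pAV, the conventional total-oxidation index for oils.
# Stage values below are illustrative placeholders, not data from the study.
stages = {
    "crude":      {"pv": 8.0, "pav": 12.0},
    "degummed":   {"pv": 5.5, "pav": 9.0},
    "bleached":   {"pv": 2.0, "pav": 7.5},
    "deodorized": {"pv": 0.5, "pav": 4.0},
}

for name, v in stages.items():
    totox = 2 * v["pv"] + v["pav"]  # primary (PV) weighted double vs secondary (pAV)
    print(f"{name:>10}: TOTOX = {totox:.1f}")
```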
Accurate mass measurements and their appropriate use for reliable analyte identification.
Godfrey, A Ruth; Brenton, A Gareth
2012-09-01
Accurate mass instrumentation is becoming increasingly available to non-expert users. These data can be misused, particularly for analyte identification. Current best practice in assigning potential elemental formulae for reliable analyte identification is described, together with modern informatic approaches to analyte elucidation, including chemometric characterisation, data processing and searching using facilities such as the Chemical Abstracts Service (CAS) Registry and ChemSpider.
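One concrete element of such best practice is screening candidate elemental formulae by mass-accuracy tolerance, usually expressed in parts per million. A minimal sketch of that filter (the candidate list and the 5 ppm tolerance are illustrative assumptions):
```python
# Sketch: filter candidate formulae by ppm mass error against an observed m/z.
# Candidates and the 5 ppm tolerance are illustrative, not prescriptive.
def ppm_error(observed: float, theoretical: float) -> float:
    return 1e6 * (observed - theoretical) / theoretical

observed_mz = 195.0877                      # e.g., protonated caffeine [M+H]+
candidates = {
    "C8H11N4O2 [M+H]+": 195.0877,
    "hypothetical near-isobar": 195.0890,   # illustrative mass only
}

for formula, mz in candidates.items():
    err = ppm_error(observed_mz, mz)
    verdict = "keep" if abs(err) <= 5.0 else "reject"
    print(f"{formula}: {err:+.1f} ppm -> {verdict}")
```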
Bio-organic materials in the atmosphere and snow: measurement and characterization.
Ariya, P A; Kos, G; Mortazavi, R; Hudson, E D; Kanthasamy, V; Eltouny, N; Sun, J; Wilde, C
2014-01-01
Bio-organic chemicals are ubiquitous in the Earth's atmosphere and at air-snow interfaces, as well as in aerosols and in clouds. It has been known for centuries that airborne biological matter plays various roles in the transmission of disease in humans and in ecosystems. The implication of chemical compounds of biological origin in cloud condensation and ice nucleation processes has also been studied during the last few decades, with suggested roles in the reduction of visibility, the oxidative potential of the atmosphere and the transformation of compounds within it, the formation of haze, changes in snow-ice albedo, agricultural processes, and bio-hazards and bio-terrorism. In this review we critically examine existing observational data on bio-organic compounds in the atmosphere and in snow. We also review both conventional and cutting-edge analytical techniques and methods for the measurement and characterisation of bio-organic compounds, and specifically of microbial communities, in the atmosphere and snow. We also explore the link between biological compounds and nucleation processes. Given the increased interest in decreasing emissions of carbon-containing compounds, we also briefly review (in an Appendix) methods and techniques currently deployed for bio-organic remediation.
Errors in clinical laboratories or errors in laboratory medicine?
Plebani, Mario
2006-01-01
Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
Analytical Applications of NMR: Summer Symposium on Analytical Chemistry.
ERIC Educational Resources Information Center
Borman, Stuart A.
1982-01-01
Highlights a symposium on analytical applications of nuclear magnetic resonance spectroscopy (NMR), discussing pulse Fourier transformation technique, two-dimensional NMR, solid state NMR, and multinuclear NMR. Includes description of ORACLE, an NMR data processing system at Syracuse University using real-time color graphics, and algorithms for…
Features Students Really Expect from Learning Analytics
ERIC Educational Resources Information Center
Schumacher, Clara; Ifenthaler, Dirk
2016-01-01
In higher education settings more and more learning is facilitated through online learning environments. To support and understand students' learning processes better, learning analytics offers a promising approach. The purpose of this study was to investigate students' expectations toward features of learning analytics systems. In a first…
Text-based Analytics for Biosurveillance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles, Lauren E.; Smith, William P.; Rounds, Jeremiah
The ability to prevent, mitigate, or control a biological threat depends on how quickly the threat is identified and characterized. Ensuring the timely delivery of data and analytics is an essential aspect of providing adequate situational awareness in the face of a disease outbreak. This chapter outlines an analytic pipeline for supporting an advanced early warning system that can integrate multiple data sources and provide situational awareness of potential and occurring disease situations. The pipeline includes real-time automated data analysis founded on natural language processing (NLP), semantic concept matching, and machine learning techniques, to enrich content with metadata related to biosurveillance. Online news articles are presented as an example use case for the pipeline, but the processes can be generalized to any textual data. In this chapter, the mechanics of a streaming pipeline are briefly discussed as well as the major steps required to provide targeted situational awareness. The text-based analytic pipeline includes various processing steps as well as identifying article relevance to biosurveillance (e.g., relevance algorithm) and article feature extraction (who, what, where, why, how, and when).
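As a concrete stand-in for the relevance-algorithm step (illustrative only, not the authors' system), a bag-of-words text classifier can represent the "is this article biosurveillance-relevant?" decision. A minimal scikit-learn sketch with hypothetical training snippets:
```python
# Sketch: a stand-in for the article-relevance step of a biosurveillance pipeline.
# Illustrative TF-IDF + logistic regression baseline, not the authors' system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

train_texts = [
    "officials confirm avian influenza outbreak at poultry farm",
    "hospital reports cluster of unexplained respiratory illness",
    "city council debates new parking regulations downtown",
    "local team wins regional championship in overtime",
]
train_labels = [1, 1, 0, 0]   # 1 = biosurveillance-relevant, 0 = not

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression()),
])
model.fit(train_texts, train_labels)

score = model.predict_proba(["novel virus detected in wastewater samples"])[0, 1]
print(f"relevance probability: {score:.2f}")
```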
Wada, Kazushige; Nittono, Hiroshi
2004-06-01
The reasoning process in the Wason selection task was examined by measuring card inspection times in the letter-number and drinking-age problems. 24 students were asked to solve the problems presented on a computer screen. Only the card touched with a mouse pointer was visible, and the total exposure time of each card was measured. Participants were allowed to cancel their previous selections at any time. Although rethinking was encouraged, the cards once selected were rarely cancelled (10% of the total selections). Moreover, most of the cancelled cards were reselected (89% of the total cancellations). Consistent with previous findings, inspection times were longer for selected cards than for nonselected cards. These results suggest that card selections are determined largely by initial heuristic processes and rarely reversed by subsequent analytic processes. The present study gives further support for the heuristic-analytic dual process theory.
Interactive Management and Updating of Spatial Data Bases
NASA Technical Reports Server (NTRS)
French, P.; Taylor, M.
1982-01-01
The decision-making process, whether for power plant siting, load forecasting or energy resource planning, invariably involves a blend of analytical methods and judgement. Management decisions can be improved by implementing techniques which permit an increased comprehension of results from analytical models. Even where analytical procedures are not required, decisions can be aided by improving the methods used to examine spatially and temporally variant data. How the use of computer-aided planning (CAP) programs and the selection of a predominant data structure can improve the decision-making process is discussed.
Improving Initiation and Tracking of Research Projects at an Academic Health Center: A Case Study.
Schmidt, Susanne; Goros, Martin; Parsons, Helen M; Saygin, Can; Wan, Hung-Da; Shireman, Paula K; Gelfond, Jonathan A L
2017-09-01
Research service cores at academic health centers are important in driving translational advancements. Specifically, biostatistics and research design units provide services and training in data analytics, biostatistics, and study design. However, the increasing demand and complexity of assigning appropriate personnel to time-sensitive projects strains existing resources, potentially decreasing productivity and increasing costs. Improving processes for project initiation, assigning appropriate personnel, and tracking time-sensitive projects can eliminate bottlenecks and utilize resources more efficiently. In this case study, we describe our application of lean six sigma principles to our biostatistics unit to establish a systematic continual process improvement cycle for intake, allocation, and tracking of research design and data analysis projects. The define, measure, analyze, improve, and control methodology was used to guide the process improvement. Our goal was to assess and improve the efficiency and effectiveness of operations by objectively measuring outcomes, automating processes, and reducing bottlenecks. As a result, we developed a web-based dashboard application to capture, track, categorize, streamline, and automate project flow. Our workflow system resulted in improved transparency, efficiency, and workload allocation. Using the dashboard application, we reduced the average study intake time from 18 to 6 days, a 66.7% reduction over 12 months (January to December 2015).
Durning, Steven J; Dong, Ting; Artino, Anthony R; van der Vleuten, Cees; Holmboe, Eric; Schuwirth, Lambert
2015-08-01
An ongoing debate exists in the medical education literature regarding the potential benefits of pattern recognition (non-analytic reasoning), actively comparing and contrasting diagnostic options (analytic reasoning), or a combined approach. Studies have not, however, explicitly explored faculty's thought processes while tackling clinical problems through the lens of dual process theory to inform this debate. Further, these thought processes have not been studied in relation to the difficulty of the task or other potential mediating influences such as personal factors and fatigue, which could themselves be influenced by factors such as sleep deprivation. We therefore sought to determine which reasoning process(es) faculty used when answering clinically oriented multiple-choice questions (MCQs) and whether these processes differed based on the dual process theory characteristics of accuracy, reading time and answering time, as well as psychometrically determined item difficulty and sleep deprivation. We performed a think-aloud procedure to explore faculty's thought processes while taking these MCQs, coding the think-aloud data by reasoning process (analytic, non-analytic, guessing or a combination of processes) as well as word count, number of stated concepts, reading time, answering time, and accuracy. We also included questions about the amount of work in the recent past. We then conducted statistical analyses to examine the associations between these measures, such as correlations between the frequencies of reasoning processes and item accuracy and difficulty. We also tallied the frequencies of the different reasoning processes for correctly and incorrectly answered items. Regardless of whether the questions were classified as 'hard' or 'easy', non-analytic reasoning led to the correct answer more often than to an incorrect answer. Significant correlations were found between the self-reported recent number of hours worked and the think-aloud word count and number of concepts used in reasoning, but not item accuracy. When all MCQs were included, 19% of the variance in correctness could be explained by the frequency of expression of the three think-aloud processes (analytic, non-analytic, or combined). We found evidence to support the notion that the difficulty of a test item is not a systematic feature of the item itself but is always a result of the interaction between the item and the candidate. Use of analytic reasoning did not appear to improve accuracy. Our data suggest that individuals do not apply either System 1 or System 2 exclusively but instead fall along a continuum, with some individuals at one end of the spectrum.
Numerical model on the material circulation for coastal sediment in Ago Bay, Japan
NASA Astrophysics Data System (ADS)
Anggara Kasih, G. A.; Chiba, Satoshi; Yamagata, Youichi; Shimizu, Yasuhiro; Haraguchi, Koichi
2009-04-01
In this paper, we study the sediment in Ago Bay from the aspects of the biogeochemical cycle and mass transport by means of a numerical model. We developed the model by adopting the basic ideas of Berg et al. (Berg, P., Rysgaard, S., Thamdrup, B., 2003. Dynamic modeling of early diagenesis and nutrient cycling: A case study in Arctic marine sediment. Am. J. Sci. 303, 905-955), Fossing et al. [Fossing, H., Berg, P., Thamdrup, B., Rysgaard, S., Sorensen, H.M., Nielsen, K.A., 2004. Model set-up for an oxygen and nutrient flux for Aarhus Bay (Denmark). National Environmental Research Institute (NERI) Technical Report No. 483. Ministry of the Environment, Denmark, 65 pp.] and Sayama [Sayama, M., 2000. Analytical technique for the nitrogen circulation in the boundary layer of the coastal sediment. Isao Koike (Ed.), Japan Environmental Management Association for Industry, Tokyo, pp. 51-103 (in Japanese)]. In the model, the biogeochemical processes involve five primary reactions and sixteen secondary reactions. The primary reactions describe the degradation of organic matter, and the secondary reactions describe miscellaneous reactions such as the re-oxidation of reduced species formed as products of the primary reactions and the crystallization of oxidized particles. The transport processes include molecular diffusion, advection, bioturbation and bioirrigation. The model performance is verified by comparing model predictions to observed data, covering the vertical distribution of material concentrations and the material fluxes at the sediment-water interface. The comparison shows that the model can reproduce the observed vertical profiles and the observed material fluxes at the sediment-water interface. The material circulation results show that about 42% of dissolved organic matter (DOM) is mineralized by sulfate reduction, around 41% by oxygen respiration, and the remainder by denitrification and manganese and iron reduction. As a result, about 47% of the O2 taken up by the sediment is used directly through bacterial oxygen respiration and 34% through sulfate reduction. A sensitivity study on the impact of changes in the particulate organic matter flux shows that a 30% reduction in the depositional OM flux to the sediment suppresses oxygen consumption in the sediment from 7.3 to 5.1 mmol O2/m2/day.
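The transport side of such an early-diagenesis model reduces, in its simplest one-dimensional form, to ∂C/∂t = ∂/∂z(D ∂C/∂z) − kC, before advection, bioturbation and bioirrigation terms are added. A heavily simplified explicit finite-difference sketch of that core for pore-water O2 (coefficients are order-of-magnitude assumptions; the full Berg/Fossing formulation is not reproduced):
```python
# Sketch: 1-D diffusion + first-order consumption of pore-water O2, explicit scheme.
# Grossly simplified relative to the paper's model (no advection, bioturbation,
# bioirrigation, or secondary redox reactions); coefficients are illustrative.
import numpy as np

nz, dz = 100, 1e-3          # 10 cm of sediment, 1 mm cells
D = 1e-9                    # m^2/s, O2 diffusivity (order of magnitude)
k = 1e-5                    # 1/s, first-order O2 consumption rate (illustrative)
dt = 0.4 * dz**2 / D        # respects the explicit stability limit

c = np.zeros(nz)            # mol/m^3 O2 in pore water
c_top = 0.25                # fixed bottom-water O2 at the interface

for _ in range(200_000):
    c[0] = c_top                            # Dirichlet condition at interface
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dz**2
    lap[-1] = (c[-2] - c[-1]) / dz**2       # zero-flux condition at depth
    c = c + dt * (D * lap - k * c)
    c[0] = c_top

flux = D * (c[0] - c[1]) / dz               # O2 uptake at the sediment-water interface
print(f"interface O2 flux ≈ {flux:.2e} mol/m^2/s")
```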
Modeling Choice Under Uncertainty in Military Systems Analysis
1991-11-01
…operators rather than fuzzy operators are suggested for further research. Topics covered include: 4.1 Imprecisely Specified Multiple Attribute Utility Theory; 4.2 Fuzzy Decision Analysis; 4.3 Analytic Hierarchical Process (AHP), in which objectives, functions and…; 4.4 Subjective Transfer Function Approach.
Optimizing an Immersion ESL Curriculum Using Analytic Hierarchy Process
ERIC Educational Resources Information Center
Tang, Hui-Wen Vivian
2011-01-01
The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative…
Understanding Customer Product Choices: A Case Study Using the Analytical Hierarchy Process
Robert L. Smith; Robert J. Bush; Daniel L. Schmoldt
1996-01-01
The Analytical Hierarchy Process (AHP) was used to characterize the bridge material selection decisions of highway officials across the United States. Understanding product choices by utilizing the AHP allowed us to develop strategies for increasing the use of timber in bridge construction. State Department of Transportation engineers, private consulting engineers, and...
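To make the AHP mechanics in the preceding entries concrete: a pairwise-comparison matrix is reduced to a priority vector via its principal eigenvector, and a consistency ratio flags incoherent judgments. A minimal sketch with an illustrative 3-criterion matrix (the judgments are hypothetical, not taken from the bridge study):
```python
# Sketch: AHP priority weights and consistency ratio from a pairwise matrix.
# Example judgments (cost vs durability vs aesthetics) are illustrative only.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],    # cost compared against cost, durability, aesthetics
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, i].real)
weights /= weights.sum()                 # principal eigenvector -> priorities

n = A.shape[0]
ci = (eigvals[i].real - n) / (n - 1)     # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
cr = ci / ri                             # CR < 0.10 is conventionally acceptable

print(f"weights = {weights.round(3)}, CR = {cr:.3f}")
```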
Literature Review on Processing and Analytical Methods for ...
This report surveys the open literature to determine the current state of the science regarding the processing and analytical methods currently available for recovery of F. tularensis from water and soil matrices, and to determine what gaps remain in the collective knowledge concerning F. tularensis identification from environmental samples.
This paper describes a systematic method for comparing options for the long-term management of surplus elemental mercury in the U.S., using the Analytic Hierarchy Process (AHP) as embodied in commercially available Expert Choice software. A limited scope multi-criteria decision-a...
Finley, Anna J; Tang, David; Schmeichel, Brandon J
2015-01-01
Prior research has found that persons who favor more analytic modes of thought are less religious. We propose that individual differences in analytic thought are associated with reduced religious beliefs particularly when analytic thought is measured (hence, primed) first. The current study provides a direct replication of prior evidence that individual differences in analytic thinking are negatively related to religious beliefs when analytic thought is measured before religious beliefs. When religious belief is measured before analytic thinking, however, the negative relationship is reduced to non-significance, suggesting that the link between analytic thought and religious belief is more tenuous than previously reported. The current study suggests that whereas inducing analytic processing may reduce religious belief, more analytic thinkers are not necessarily less religious. The potential for measurement order to inflate the inverse correlation between analytic thinking and religious beliefs deserves additional consideration.
NASA Astrophysics Data System (ADS)
Tahri, Meryem; Maanan, Mohamed; Hakdaoui, Mustapha
2016-04-01
This paper presents a method to assess vulnerability to coastal risks such as erosion or marine submersion by applying the Fuzzy Analytic Hierarchy Process (FAHP) and spatial analysis techniques within a Geographic Information System (GIS). The coast of Mohammedia, Morocco, was chosen as the study site to implement and validate the proposed framework by applying a GIS-FAHP based methodology. The coastal risk vulnerability mapping draws on multi-parametric causative factors: sea level rise, significant wave height, tidal range, coastal erosion, elevation, geomorphology and distance to urban areas. The Fuzzy Analytic Hierarchy Process methodology enables the calculation of the corresponding criteria weights. The results show that the coastline of Mohammedia is characterized by moderate, high and very high levels of vulnerability to coastal risk. The most vulnerable areas are situated in the east, at the Monika and Sablette beaches. This technical approach relies on the efficiency of GIS tools combined with the Fuzzy Analytic Hierarchy Process to help decision makers find optimal strategies to minimize coastal risks.
Understanding Business Analytics Success and Impact: A Qualitative Study
ERIC Educational Resources Information Center
Parks, Rachida F.; Thambusamy, Ravi
2017-01-01
Business analytics is believed to be a huge boon for organizations since it helps offer timely insights over the competition, helps optimize business processes, and helps generate growth and innovation opportunities. As organizations embark on their business analytics initiatives, many strategic questions, such as how to operationalize business…
FIRST FLOOR PLAN OF REMOTE ANALYTICAL FACILITY (CPP627) SHOWING REMOTE ...
FIRST FLOOR PLAN OF REMOTE ANALYTICAL FACILITY (CPP-627) SHOWING REMOTE ANALYTICAL LABORATORY, DECONTAMINATION ROOM, AND MULTICURIE CELL ROOM. INL DRAWING NUMBER 200-0627-00-008-105065. ALTERNATE ID NUMBER 4272-14-102. - Idaho National Engineering Laboratory, Idaho Chemical Processing Plant, Fuel Reprocessing Complex, Scoville, Butte County, ID
Fernández-Maestre, Roberto; Wu, Ching; Hill, Herbert H.
2013-01-01
RATIONALE When polar molecules (modifiers) are introduced into the buffer gas of an ion mobility spectrometer, most ion mobilities decrease due to the formation of ion-modifier clusters. METHODS We used ethyl lactate, nitrobenzene, 2-butanol, and tetrahydrofuran-2-carbonitrile as buffer gas modifiers and electrospray ionization ion mobility spectrometry (IMS) coupled to quadrupole mass spectrometry. Ethyl lactate, nitrobenzene, and tetrahydrofuran-2-carbonitrile had not been tested as buffer gas modifiers and 2-butanol had not been used with basic amino acids. RESULTS The ion mobilities of several diamines (arginine, histidine, lysine, and atenolol) were not affected or only slightly reduced when these modifiers were introduced into the buffer gas (3.4% average reduction in an analyte's mobility for the three modifiers). Intramolecular bridges caused limited change in the ion mobilities of diamines when modifiers were added to the buffer gas; these bridges hindered the attachment of modifier molecules to the positive charge of ions and delocalized the charge, which deterred clustering. There was also a tendency towards large changes in ion mobility when the mass of the analyte decreased; ethanolamine, the smallest compound tested, had the largest reduction in ion mobility with the introduction of modifiers into the buffer gas (61%). These differences in mobilities, together with the lack of shift in bridge-forming ions, were used to separate ions that overlapped in IMS, such as isoleucine and lysine, and arginine and phenylalanine, and made possible the prediction of separation or not of overlapping ions. CONCLUSIONS The introduction of modifiers into the buffer gas in IMS can selectively alter the mobilities of analytes to aid in compound identification and/or enable the separation of overlapping analyte peaks. PMID:22956312
Spatial indeterminacy and power sector carbon emissions accounting
NASA Astrophysics Data System (ADS)
Jiusto, J. Scott
Carbon emission indicators are essential for understanding climate change processes, and for motivating and measuring the effectiveness of carbon reduction policy at multiple scales. Carbon indicators also play an increasingly important role in shaping cultural discourses and politics about nature-society relations and the roles of the state, markets and civil society in creating sustainable natural resource practices and just societies. The analytical and political significance of indicators is tied closely to their objective basis: how accurately they account for the places, people, and processes responsible for emissions. In the electric power sector, however, power-trading across geographic boundaries prevents a simple, purely objective spatial attribution of emissions. Using U.S. states as the unit of analysis, three alternative methods of accounting for carbon emissions from electricity use are assessed, each of which is conceptually sound and methodologically rigorous, yet produces radically different estimates of individual state emissions. Each method also implicitly embodies distinctly different incentive structures for states to enact carbon reduction policies. Because none of the three methods can be said to more accurately reflect "true" emissions levels, I argue the best method is that which most encourages states to reduce emissions. Energy and carbon policy processes are highly contested, however, and thus I examine competing interests and perspectives shaping state energy policy. I explore what it means, philosophically and politically, to predicate emissions estimates on both objectively verifiable past experience and subjectively debatable policy prescriptions for the future. Although developed here at the state scale, the issues engaged and the carbon accounting methodology proposed are directly relevant to carbon analysis and policy formation at scales ranging from the local to the international.
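To make the indeterminacy concrete: with inter-state power trade, emissions can be tallied where electricity is generated or where it is consumed, and the two ledgers diverge for any net importer or exporter. A toy two-state sketch (all quantities hypothetical):
```python
# Sketch: production- vs consumption-based attribution of power-sector CO2.
# Two hypothetical states; all quantities are illustrative.
gen_mwh = {"A": 100_000, "B": 40_000}   # in-state generation
use_mwh = {"A": 70_000,  "B": 70_000}   # in-state consumption (A exports to B)
co2_t   = {"A": 90_000,  "B": 10_000}   # emissions at in-state plants

production_based = co2_t                 # attribute to the generating state

# Consumption-based: importers inherit the exporter's average emission rate.
rate = {s: co2_t[s] / gen_mwh[s] for s in co2_t}
imports_b = use_mwh["B"] - gen_mwh["B"]  # MWh that B imports from A
consumption_based = {
    "A": rate["A"] * use_mwh["A"],
    "B": co2_t["B"] + rate["A"] * imports_b,
}

for s in ("A", "B"):
    print(f"state {s}: production={production_based[s]:,.0f} t, "
          f"consumption={consumption_based[s]:,.0f} t")
```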
Merging OLTP and OLAP - Back to the Future
NASA Astrophysics Data System (ADS)
Lehner, Wolfgang
When the terms "Data Warehousing" and "Online Analytical Processing" were coined in the 1990s by Kimball, Codd, and others, there was an obvious need to separate data and workload for operational, transactional-style processing from decision-making involving complex analytical queries over large and historical data sets. Large data warehouse infrastructures were set up to cope with the special requirements of analytical query answering, for multiple reasons. For example, analytical thinking relies heavily on predefined navigation paths to guide the user through the data set and to provide different views at different aggregation levels. Multi-dimensional queries exploiting hierarchically structured dimensions lead to complex star queries at a relational backend, which could hardly be handled by classical relational systems.
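The hierarchy-driven aggregation described here is what a star query expresses relationally; the same roll-up can be sketched outside SQL by joining a fact table to a dimension table and grouping along the dimension hierarchy. An illustrative sketch (tables and column names are hypothetical):
```python
# Sketch: an OLAP-style roll-up over a toy star schema (fact + one dimension).
# Table contents and names are hypothetical.
import pandas as pd

sales = pd.DataFrame({                       # fact table
    "store_id": [1, 1, 2, 2, 2],
    "amount":   [120.0, 80.0, 200.0, 50.0, 75.0],
})
stores = pd.DataFrame({                      # dimension with a region hierarchy
    "store_id": [1, 2],
    "city":     ["Dresden", "Leipzig"],
    "region":   ["Saxony", "Saxony"],
})

joined = sales.merge(stores, on="store_id")  # the star join
by_city = joined.groupby(["region", "city"])["amount"].sum()   # fine grain
by_region = joined.groupby("region")["amount"].sum()           # roll-up one level

print(by_city, by_region, sep="\n")
```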
Design and Analysis of a Preconcentrator for the ChemLab
DOE Office of Scientific and Technical Information (OSTI.GOV)
WONG,CHUNGNIN C.; FLEMMING,JEB H.; MANGINELL,RONALD P.
2000-07-17
Preconcentration is a critical analytical procedure when designing a microsystem for trace chemical detection, because it can purify a sample mixture and boost a small analyte concentration to a much higher level, allowing better analysis. This paper describes the development of a micro-fabricated planar preconcentrator for the µChemLab™ at Sandia. To guide the design, an analytical model to predict the analyte transport, adsorption, and desorption processes in the preconcentrator has been developed. Experiments have also been conducted to analyze the adsorption and desorption processes and to validate the model. This combined effort of modeling, simulation, and testing has led us to build a reliable, efficient preconcentrator with good performance.
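The form of the transport and adsorption model is not given in the abstract; as a labeled assumption, a generic first-order adsorption-desorption rate balance of the kind such models typically contain reads:

```latex
% Generic adsorption-desorption balance on the sorbent bed (an assumed
% illustrative form, not necessarily the Sandia model):
\frac{dq}{dt} = k_a\,c\,\bigl(q_{\max} - q\bigr) - k_d\,q
```

where q is the sorbed analyte loading, c the gas-phase concentration, q_max the sorbent capacity, and k_a, k_d the adsorption and desorption rate constants.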
Taking stock of decentralized disaster risk reduction in Indonesia
NASA Astrophysics Data System (ADS)
Grady, Anthony; Gersonius, Berry; Makarigakis, Alexandros
2016-09-01
The Sendai Framework, which outlines the global course on disaster risk reduction (DRR) until 2030, places strong importance on the role of local government in disaster risk reduction. An aim of decentralization is to increase the influence and authority of local government in decision making. Yet there is limited empirical evidence of the extent, character, and effects of decentralization in current disaster risk reduction implementation, and of the barriers that are most critical to it. This paper evaluates decentralization in relation to disaster risk reduction in Indonesia, chosen for its recent actions to decentralize governance of DRR coupled with a high level of disaster risk. An analytical framework was developed to evaluate the various dimensions of decentralized disaster risk reduction, which necessitated the use of a desk study, semi-structured interviews, and a gap analysis. Key barriers to implementation in Indonesia included capacity gaps at lower institutional levels, low compliance with legislation, disconnected policies, issues in communication and coordination, and inadequate resourcing. However, none of these barriers is unique to disaster risk reduction, and similar barriers have been observed for decentralization in other public sectors in other developing countries.
NASA Astrophysics Data System (ADS)
Yang, C. C.; Yang, S. Y.; Chen, H. H.; Weng, W. L.; Horng, H. E.; Chieh, J. J.; Hong, C. Y.; Yang, H. C.
2012-07-01
By specific bio-functionalization, magnetic nanoparticles are able to label target bio-molecules. This property can be applied to quantitatively detect molecules in vitro by measuring the magnetic signals of nanoparticles bound with target molecules. One such signal is the reduction in the mixed-frequency ac magnetic susceptibility of suspended magnetic nanoparticles due to molecule-particle association. Many experimental results show empirically that the molecular-concentration-dependent reduction in ac magnetic susceptibility follows a logistic function. In this study, it is demonstrated that the logistic behavior originates from the growth of particle sizes due to molecule-particle association. The analytic relationship between the growth of particle sizes and the reduction in ac magnetic susceptibility is developed.
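The abstract states that the reduction follows a logistic function of molecular concentration; a generic logistic of the sort commonly fitted is sketched below (the parameterization and symbols are assumptions, not the paper's):

```latex
% Generic logistic (sigmoidal) dependence of the mixed-frequency
% ac-susceptibility reduction on molecular concentration c
% (illustrative parameterization):
\frac{\Delta\chi_{\mathrm{ac}}}{\chi_{\mathrm{ac},0}}(c)
  = \frac{A}{1 + \left(c_{0}/c\right)^{\gamma}}
```

with A the saturation-level reduction, c0 the midpoint concentration, and γ a slope parameter.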
Merel, Sylvain; Anumol, Tarun; Park, Minkyu; Snyder, Shane A
2015-01-23
In response to water scarcity, strategies relying on multiple processes to turn wastewater effluent into potable water are increasingly being considered by many cities. In this context, the occurrence of contaminants, as well as their fate during treatment processes, is a major concern. Three analytical approaches were used to characterize the efficacy of UV and UV/H2O2 processes on a secondary wastewater effluent. The first analytical approach assessed bulk organic parameters or surrogates before and after treatment, while the second measured the removal of specific indicator compounds. Sixteen trace organic contaminants were selected due to their relatively high concentration and detection frequency over eight monitoring campaigns. While their removal rates range from approximately 10 to >90%, some of these compounds can be used to gauge process efficacy (or failure). The third analytical approach assessed the fate of unknown contaminants through high-resolution time-of-flight (TOF) mass spectrometry with advanced data processing and demonstrated the occurrence of several thousand organic compounds in the water. A heat map clearly distinguished compounds that were recalcitrant from those transformed by the UV processes applied. In addition, chemicals with a similar fate were grouped into clusters to identify new indicator compounds. In this manuscript, each approach is evaluated and its advantages and disadvantages compared. Copyright © 2014 Elsevier B.V. All rights reserved.
RECOMMENDATIONS FOR SAMPLING OF TANK 18 IN F TANK FARM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shine, G.
2009-12-14
Representative sampling is required for characterization of the residual floor material in Tank 18 prior to operational closure. Tank 18 is an 85-foot diameter, 34-foot high carbon steel tank with a nominal operating volume of 1,300,000 gallons. It is a Type IV tank and has been in service storing radioactive materials since 1959. Recent mechanical cleaning of the tank removed all mounds of material. Anticipating a low level of solids in the residual material, Huff and Thaxton [2009] developed a plan to sample the material during the final clean-up process, while it would still be resident in sufficient quantities to support analytical determinations in four quadrants of the tank. Execution of the plan produced fewer solids than expected to support analytical determinations in all four quadrants. Huff and Thaxton [2009] then restructured the plan to characterize the residual floor material separately in the North and the South regions: two 'hemispheres.' This document provides sampling recommendations to complete the characterization of the residual material on the tank bottom following the guidance in Huff and Thaxton [2009] to split the tank floor into a North and a South hemisphere. The number of samples is determined from a modification of the formula previously published in Edwards [2001] and the sample characterization data for previous sampling of Tank 18 described by Oji [2009]. The uncertainty is quantified by an upper 95% confidence limit (UCL95%) on each analyte's mean concentration in Tank 18. The procedure computes the uncertainty in analyte concentration as a function of the number of samples, and the final number of samples is reached when the reduction in uncertainty from an additional sample no longer has a practical impact on results. The characterization of the full suite of analytes in the North hemisphere is currently supported by a single Mantis rover sample obtained from a compact region near the center riser. A floor scrape sample was obtained from a compact region near the northeast riser and has been analyzed for a shortened list of key analytes. Since the unused portion of the floor scrape sample material is archived and available in sufficient quantity, additional analyses need to be performed to complete results for the full suite of constituents. The characterization of the full suite of analytes in the South hemisphere is currently supported by a single Mantis rover sample; no floor scrape samples have previously been taken from the South hemisphere. The criterion for determining the number of additional samples was the practical reduction in uncertainty when a new sample is added; this was achieved with five additional samples. In addition, two archived samples will be used if a contingency occurs, such as failing to demonstrate the comparability of the Mantis samples to the floor scrape samples. To complete sampling of the Tank 18 residual floor material, three additional samples should be taken from the North hemisphere and four additional samples from the South hemisphere. One sample from each hemisphere will be archived in case of need. Two of the three additional samples from the North hemisphere and three of the four additional samples from the South hemisphere will be analyzed.
Once the results are available, differences between the Mantis results and the three floor scrape sample results (the sample previously obtained near the NE riser plus the two additional samples that will be analyzed) will be evaluated. If there are no statistically significant analyte concentration differences between the Mantis and floor scrape samples, those results will be combined and UCL95%s will be calculated. If the analyte concentration differences between the Mantis and floor scrape samples are statistically significant, the UCL95%s will be calculated without the Mantis sample results. If further reduction in the upper confidence limits is needed and can be achieved by the addition of the archived samples, they will be analyzed and included in the statistical computations. Initially, the analyte concentrations in the residual material on the floor of Tank 18 will be determined separately in the North and the South hemispheres. However, if final sampling results show that differences between the North and South samples are consistent within sampling variation, then the final computations can be based on consolidating all sample results from the tank floor. Recommended locations for the additional samples may be subject to physical tank access and sampling constraints. The recommendations are discussed in Section 4 and are based on partitioning the Tank 18 floor into an inner and an outer ring and six 60° sectors, as depicted in Figure 1. The location of the border between the inner and outer rings is based on dividing the residual material into two approximately equal volumes.
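The stopping rule described above (add samples until an additional sample no longer gives a practically significant reduction in the UCL95%) can be sketched as follows; the t-based UCL formula and the 5% relative threshold are assumptions for illustration, not the report's exact procedure:

```python
from scipy import stats

def ucl95(mean: float, sd: float, n: int) -> float:
    """Upper 95% confidence limit on a mean from n samples (t-based)."""
    return mean + stats.t.ppf(0.95, df=n - 1) * sd / n ** 0.5

def samples_needed(mean: float, sd: float, n0: int = 2,
                   rel_tol: float = 0.05, n_max: int = 20) -> int:
    """Increase n until the next sample shrinks the UCL95 by less than
    rel_tol of its current value (a hypothetical 'practical impact' rule)."""
    n = n0
    while n < n_max:
        current, nxt = ucl95(mean, sd, n), ucl95(mean, sd, n + 1)
        if (current - nxt) < rel_tol * current:
            return n
        n += 1
    return n_max

# Hypothetical analyte: mean concentration 10 units, sd 4 units.
print(samples_needed(10.0, 4.0))
```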
Hg speciation by differential photochemical vapor generation at UV-B and UV-C wavelengths
USDA-ARS?s Scientific Manuscript database
Mercury speciation was accomplished by differential photochemical reduction at two UV wavelengths; the resulting Hg(0) vapor was quantified by atomic fluorescence spectrometry. After microwave digestion and centrifugation, analyte solutions were mixed with 20% (v/v) formic acid in a reactor coil, an...
Does Fear Reactivity during Exposure Predict Panic Symptom Reduction?
ERIC Educational Resources Information Center
Meuret, Alicia E.; Seidel, Anke; Rosenfield, Benjamin; Hofmann, Stefan G.; Rosenfield, David
2012-01-01
Objective: Fear reactivity during exposure is a commonly used indicator of learning and overall therapy outcome. The objective of this study was to assess the predictive value of fear reactivity during exposure using multimodal indicators and an advanced analytical design. We also investigated the degree to which treatment condition (cognitive…
An analytical method was developed for the determination of lactic acid, formic acid, acetic acid, propionic acid, and butyric acid in environmental microcosm samples using ion-exclusion chromatography. The chromatographic behavior of various eluents was studied to determine the ...
Analytical simulation of SPS system performance, volume 3, phase 3
NASA Technical Reports Server (NTRS)
Kantak, A. V.; Lindsey, W. C.
1980-01-01
The simulation model for the Solar Power Satellite space antenna and the associated system imperfections are described. Overall power transfer efficiency, the key performance issue, is discussed as a function of the system imperfections. Other system performance measures discussed include average power pattern, mean beam gain reduction, and pointing error.
A review of gear housing dynamics and acoustics literature
NASA Technical Reports Server (NTRS)
Singh, Rajendra; Lim, Teik Chin
1988-01-01
A review of the available literature on gear housing vibration and noise reduction is presented. Analytical and experimental methodologies used for bearing dynamics, housing vibration and noise, mounts and suspensions, and the overall gear and housing system are discussed. Typical design guidelines as outlined by various investigators are given.
Noise transmission by viscoelastic sandwich panels
NASA Technical Reports Server (NTRS)
Vaicaitis, R.
1977-01-01
An analytical study of low-frequency noise transmission into rectangular enclosures by viscoelastic sandwich panels is presented. Soft, compressible cores (with dilatational modes included) and hard, incompressible cores (with dilatational modes neglected) are considered as limiting cases of core stiffness. It is reported that these panels can effect significant noise reduction.
This paper employs analytical and numerical general equilibrium models to examine the significance of pre-existing factor taxes for the costs of pollution reduction under a wide range of environmental policy instruments. Pre-existing taxes imply significantly ...
Developmental changes in analytic and holistic processes in face perception
Joseph, Jane E.; DiBartolo, Michelle D.; Bhatt, Ramesh S.
2015-01-01
Although infants demonstrate sensitivity to some kinds of perceptual information in faces, many face capacities continue to develop throughout childhood. One debate is the degree to which children perceive faces analytically versus holistically and how these processes undergo developmental change. In the present study, school-aged children and adults performed a perceptual matching task with upright and inverted face and house pairs that varied in similarity of featural or 2nd order configural information. Holistic processing was operationalized as the degree of serial processing when discriminating faces and houses [i.e., increased reaction time (RT), as more features or spacing relations were shared between stimuli]. Analytical processing was operationalized as the degree of parallel processing (or no change in RT as a function of greater similarity of features or spatial relations). Adults showed the most evidence for holistic processing (most strongly for 2nd order faces) and holistic processing was weaker for inverted faces and houses. Younger children (6–8 years), in contrast, showed analytical processing across all experimental manipulations. Older children (9–11 years) showed an intermediate pattern with a trend toward holistic processing of 2nd order faces like adults, but parallel processing in other experimental conditions like younger children. These findings indicate that holistic face representations emerge around 10 years of age. In adults both 2nd order and featural information are incorporated into holistic representations, whereas older children only incorporate 2nd order information. Holistic processing was not evident in younger children. Hence, the development of holistic face representations relies on 2nd order processing initially then incorporates featural information by adulthood. PMID:26300838
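The operationalization used in the study (serial processing shows up as RT rising with the number of shared features; parallel processing as a flat RT profile) can be illustrated with a simple slope estimate; the data below are invented:

```python
import numpy as np

def rt_slope(shared_features: np.ndarray, rt_ms: np.ndarray) -> float:
    """Least-squares slope of reaction time vs. number of shared features."""
    return np.polyfit(shared_features, rt_ms, deg=1)[0]

shared = np.array([0, 1, 2, 3])
rt_holistic = np.array([650.0, 710.0, 780.0, 840.0])  # RT climbs: serial-like
rt_analytic = np.array([655.0, 660.0, 650.0, 658.0])  # RT flat: parallel-like

for label, rt in [("holistic", rt_holistic), ("analytic", rt_analytic)]:
    print(label, f"{rt_slope(shared, rt):.1f} ms per shared feature")
```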
Big-BOE: Fusing Spanish Official Gazette with Big Data Technology.
Basanta-Val, Pablo; Sánchez-Fernández, Luis
2018-06-01
The proliferation of new data sources stemming from the adoption of open-data schemes, in combination with increasing computing capacity, has given rise to a new type of analytics that processes Internet of Things data with low-cost engines, using parallel computing to speed up data processing. In this context, the article presents an initiative, called Big-BOE, designed to process the Spanish official government gazette (Boletín Oficial del Estado, BOE) with state-of-the-art processing engines, to reduce computation time and to offer additional speed-up for big data analysts. The goal of including a big data infrastructure is to be able to process different BOE documents in parallel with specific analytics, searching for several issues across documents. The application infrastructure's processing engine is described from an architectural and a performance perspective, showing how this type of infrastructure improves the performance of several kinds of simple analytics as machines cooperate.
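The abstract does not name the engine's API; as a generic sketch of the fan-out pattern it describes (per-document analytics run in parallel), assuming a toy term-counting analytic and stand-in documents:

```python
from concurrent.futures import ProcessPoolExecutor

def count_term(doc_text: str, term: str = "subvención") -> int:
    """A toy analytic: occurrences of a search term in one BOE document."""
    return doc_text.lower().count(term)

def analyze_in_parallel(documents: list[str], workers: int = 4) -> list[int]:
    """Fan the per-document analytic out over a pool of worker processes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(count_term, documents))

if __name__ == "__main__":
    docs = ["texto de ejemplo con subvención", "otro documento"]  # stand-ins
    print(analyze_in_parallel(docs))
```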
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1993-01-01
Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
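As a concrete miniature of what such analytical models predict, here is a keystroke-level-model-style estimator in the GOMS family (a sketch: the operator times are commonly cited textbook averages, and the task encoding is invented):

```python
# Keystroke-Level Model (KLM)-style time estimate, in the GOMS family.
# Operator times are commonly cited textbook averages (seconds).
OPERATOR_SECONDS = {
    "K": 0.28,  # press a key (average typist)
    "P": 1.10,  # point with a mouse
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(ops: str) -> float:
    """Total predicted execution time for a sequence of KLM operators."""
    return sum(OPERATOR_SECONDS[op] for op in ops)

# Hypothetical task: think, point at a field, home to keyboard, type 5 keys.
print(f"{klm_estimate('MPH' + 'K' * 5):.2f} s")  # 1.35+1.10+0.40+5*0.28 = 4.25 s
```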
Solution-Processed n-Type Graphene Doping for Cathode in Inverted Polymer Light-Emitting Diodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Sung-Joo; Han, Tae-Hee; Kim, Young-Hoon
n-Type doping with (4-(1,3-dimethyl-2,3-dihydro-1H-benzoimidazol-2-yl)phenyl)dimethylamine (N-DMBI) reduces the work function (WF) of graphene by ~0.45 eV without significant reduction of optical transmittance. Solution processing of N-DMBI on graphene provides an effective n-type doping effect and air stability at the same time. Whereas neutral N-DMBI acts as an electron acceptor, leaving the graphene p-doped, radical N-DMBI acts as an electron donor, leaving the graphene n-doped, as demonstrated by density functional theory. We also verify the suitability of N-DMBI-doped n-type graphene for use as a cathode in inverted polymer light-emitting diodes (PLEDs) by using various analytical methods. Inverted PLEDs using a graphene cathode doped with the N-DMBI radical showed dramatically higher device efficiency (~13.8 cd/A) than inverted PLEDs with pristine graphene (~2.74 cd/A). Finally, N-DMBI-doped graphene can provide a practical way to produce graphene cathodes with low WF in various organic optoelectronics.