EPA’s Environmental Sampling and Analytical Methods (ESAM) is a web-based tool that supports the entire environmental characterization process, from sample collection through analysis.
Automated drug identification system
NASA Technical Reports Server (NTRS)
Campen, C. F., Jr.
1974-01-01
System speeds up analysis of blood and urine and is capable of identifying 100 commonly abused drugs. System includes computer that controls entire analytical process by ordering various steps in specific sequences. Computer processes data output and has readout of identified drugs.
Trends in Process Analytical Technology: Present State in Bioprocessing.
Jenzsch, Marco; Bell, Christian; Buziol, Stefan; Kepert, Felix; Wegele, Harald; Hakemeyer, Christian
2017-08-04
Process analytical technology (PAT), the regulatory initiative for incorporating quality in pharmaceutical manufacturing, is an area of intense research and interest. If PAT is effectively applied to bioprocesses, it can increase process understanding and control, and mitigate the risk of substandard drug products to both manufacturer and patient. To optimize the benefits of PAT, the entire PAT framework must be considered and each element of PAT must be carefully selected, including sensor and analytical technology, data analysis techniques, control strategies and algorithms, and process optimization routines. This chapter discusses the current state of PAT in the biopharmaceutical industry, including several case studies demonstrating the degree of maturity of various PAT tools. Graphical Abstract Hierarchy of QbD components.
Vogeser, Michael; Spöhrer, Ute
2006-01-01
Liquid chromatography tandem-mass spectrometry (LC-MS/MS) is an efficient technology for routine determination of immunosuppressants in whole blood; however, time-consuming manual sample preparation remains a significant limitation of this technique. Using a commercially available robotic pipetting system (Tecan Freedom EVO), we developed an automated sample-preparation protocol for quantification of tacrolimus in whole blood by LC-MS/MS. Barcode reading, sample resuspension, transfer of whole blood aliquots into a deep-well plate, addition of internal standard solution, mixing, and protein precipitation by addition of an organic solvent are performed by the robotic system. After centrifugation of the plate, the deproteinized supernatants are submitted to on-line solid phase extraction, using column switching prior to LC-MS/MS analysis. The only manual actions within the entire process are decapping of the tubes, and transfer of the deep-well plate from the robotic system to a centrifuge and finally to the HPLC autosampler. Whole blood pools were used to assess the reproducibility of the entire analytical system for measuring tacrolimus concentrations. A total coefficient of variation of 1.7% was found for the entire automated analytical process (n=40; mean tacrolimus concentration, 5.3 microg/L). Close agreement between tacrolimus results obtained after manual and automated sample preparation was observed. The analytical system described here, comprising automated protein precipitation, on-line solid phase extraction and LC-MS/MS analysis, is convenient and precise, and minimizes hands-on time and the risk of mistakes in the quantification of whole blood immunosuppressant concentrations compared to conventional methods.
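As a minimal sketch of the reproducibility figure quoted above, the total coefficient of variation can be computed directly from replicate whole-blood pool results; the values in the list below are placeholders, not data from the study.

```python
# Hedged sketch: total CV of an automated assay from replicate pool measurements.
# `results` would hold the n=40 tacrolimus concentrations (microg/L); the values
# shown are illustrative placeholders.
import statistics

results = [5.2, 5.4, 5.3, 5.1, 5.5]

mean = statistics.mean(results)
sd = statistics.stdev(results)          # sample standard deviation
total_cv_percent = 100 * sd / mean      # total CV of the entire analytical process
print(f"mean = {mean:.2f} microg/L, total CV = {total_cv_percent:.1f} %")
```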
Lombardi, Giovanni; Sansoni, Veronica; Banfi, Giuseppe
2017-08-01
In the last few years, a growing number of molecules have been associated with an endocrine function of the skeletal muscle. Circulating myokine levels, in turn, have been associated with several pathophysiological conditions, including cardiovascular ones. However, data from different studies are often not completely comparable or even discordant. This is due, at least in part, to the whole set of situations related to the preparation of the patient prior to blood sampling, the blood sampling procedure, and sample processing and/or storage. This entire process constitutes the pre-analytical phase. The importance of the pre-analytical phase is often not considered; however, in routine diagnostics, about 70% of errors occur in this phase. Moreover, errors made during the pre-analytical phase are carried over into the analytical phase and affect the final output. In research, for example, when samples are collected over a long time and by different laboratories, a standardized procedure for sample collection and the correct procedure for sample storage are acknowledged. In this review, we discuss the pre-analytical variables potentially affecting the measurement of myokines with cardiovascular functions.
Brouckaert, Davinia; De Meyer, Laurens; Vanbillemont, Brecht; Van Bockstal, Pieter-Jan; Lammens, Joris; Mortier, Séverine; Corver, Jos; Vervaet, Chris; Nopens, Ingmar; De Beer, Thomas
2018-04-03
Near-infrared chemical imaging (NIR-CI) is an emerging tool for process monitoring because it combines the chemical selectivity of vibrational spectroscopy with spatial information. Whereas traditional near-infrared spectroscopy is an attractive technique for water content determination and solid-state investigation of lyophilized products, chemical imaging opens up possibilities for assessing the homogeneity of these critical quality attributes (CQAs) throughout the entire product. In this contribution, we aim to evaluate NIR-CI as a process analytical technology (PAT) tool for at-line inspection of continuously freeze-dried pharmaceutical unit doses based on spin freezing. The chemical images of freeze-dried mannitol samples were resolved via multivariate curve resolution, allowing us to visualize the distribution of mannitol solid forms throughout the entire cake. Second, a mannitol-sucrose formulation was lyophilized with variable drying times for inducing changes in water content. Analyzing the corresponding chemical images via principal component analysis, vial-to-vial variations as well as within-vial inhomogeneity in water content could be detected. Furthermore, a partial least-squares regression model was constructed for quantifying the water content in each pixel of the chemical images. It was hence concluded that NIR-CI is inherently a most promising PAT tool for continuously monitoring freeze-dried samples. Although some practicalities are still to be solved, this analytical technique could be applied in-line for CQA evaluation and for detecting the drying end point.
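The water-content quantification step described above can be illustrated with a small partial least-squares regression sketch; this is an analogous workflow under assumed calibration data, not the paper's model or spectra.

```python
# Hedged sketch: pixel-wise water-content prediction from NIR spectra with PLS.
# `spectra` and `water` are randomly generated stand-ins for calibration data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 150))        # calibration spectra (samples x wavelengths)
water = rng.uniform(0.5, 3.0, size=200)      # reference water content (% w/w)

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, water, cv=10)
rmsecv = np.sqrt(np.mean((pred.ravel() - water) ** 2))
print(f"RMSECV = {rmsecv:.2f} % w/w")

pls.fit(spectra, water)
# For a chemical image, each pixel spectrum would be predicted and the result
# reshaped back into a 2D water-content map:
# water_map = pls.predict(image_pixels).reshape(height, width)
```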
Understanding Personality Development: An Integrative State Process Model
ERIC Educational Resources Information Center
Geukes, Katharina; van Zalk, Maarten; Back, Mitja D.
2018-01-01
While personality is relatively stable over time, it is also subject to change across the entire lifespan. On a macro-analytical level, empirical research has identified patterns of normative and differential development that are affected by biological and environmental factors, specific life events, and social role investments. On a…
Interactive computer graphics system for structural sizing and analysis of aircraft structures
NASA Technical Reports Server (NTRS)
Bendavid, D.; Pipano, A.; Raibstein, A.; Somekh, E.
1975-01-01
A computerized system for preliminary sizing and analysis of aircraft wing and fuselage structures was described. The system is based upon repeated application of analytical program modules, which are interactively interfaced and sequence-controlled during the iterative design process with the aid of design-oriented graphics software modules. The entire process is initiated and controlled via low-cost interactive graphics terminals driven by a remote computer in a time-sharing mode.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nimbalkar, Sachin U.; Guo, Wei; Wenning, Thomas J.
Smart manufacturing and advanced data analytics can help the manufacturing sector unlock energy efficiency from the equipment level to the entire manufacturing facility and the whole supply chain. These technologies can make manufacturing industries more competitive, with intelligent communication systems, real-time energy savings, and increased energy productivity. Smart manufacturing can give all employees in an organization the actionable information they need, when they need it, so that each person can contribute to the optimal operation of the corporation through informed, data-driven decision making. This paper examines smart technologies and data analytics approaches for improving energy efficiency and reducing energy costs in process-supporting energy systems. It dives into energy-saving improvement opportunities through smart manufacturing technologies and sophisticated data collection and analysis. The energy systems covered in this paper include those with motors and drives, fans, pumps, air compressors, steam, and process heating.
Internet-based data warehousing
NASA Astrophysics Data System (ADS)
Boreisha, Yurii
2001-10-01
In this paper, we consider the process of data warehouse creation and population using the latest Internet and database access technologies. A logical three-tier model is applied. This approach allows development of an enterprise schema by analyzing the various processes in the organization and extracting the relevant entities and relationships from them. Integration with local schemas and population of the data warehouse are done through the corresponding user, business, and data services components. The hierarchy of these components is used to hide the entire complex online analytical processing functionality from the data warehouse users.
Normal families and value distribution in connection with composite functions
NASA Astrophysics Data System (ADS)
Clifford, E. F.
2005-12-01
We prove a value distribution result which has several interesting corollaries. Let , let and let f be a transcendental entire function with order less than 1/2. Then for every nonconstant entire function g, we have that (f∘g)^(k) − α has infinitely many zeros. This result also holds when k = 1, for every transcendental entire function g. We also prove the following result for normal families. Let , let f be a transcendental entire function with ρ(f) < 1/k, and let a_0, ..., a_(k−1), a be analytic functions in a domain Ω. Then the family of analytic functions g such that in Ω, is a normal family.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagoe, M.B.; Layman, T.B.
Traditional industrial approaches to biostratigraphy and paleoenvironmental analysis largely use only a small portion of the available microfossil assemblage, concentrating on various marker taxa ("tops" of index fossils and paleoenvironmental guide fossils). Sequence-stratigraphic approaches may place more emphasis on the entire assemblage, but efficient analytical strategies still need to be developed to extract maximum information from micropaleontological data. Microfossil assemblages are produced by three types of processes: (1) in-situ accumulation of taxa living at the sample site; (2) postmortem transport of specimens into and out of the sample site ("down-slope transport"); and (3) taphonomic/diagenetic processes such as dissolution, which can alter taxon proportions. Recognizing and evaluating the effects of these processes on the microfossil assemblage can lead to a better geological interpretation. We propose an analytical strategy to address these issues, consisting of (1) bulk faunal descriptors (faunal abundance, preservation, diversity, planktic microfossil abundance) combined with lithologic information (e.g., abundance of glauconite) to identify broad paleoenvironmental patterns; (2) biofacies definition based on cluster analysis and factor analysis of the entire microfossil data set to refine these patterns; (3) interpretation and modeling of biofacies trends using detrended reciprocal averaging; and (4) analysis of faunal mixing patterns using polytopic vector analysis. We apply this analytical strategy to foraminiferal data from the middle Eocene Yegua Formation of southeast Texas. Seven biofacies are recognized along a short, three-well, dip transect, representing paleoenvironments ranging from marginal marine delta plain to outer neritic muddy shelf.
How do gut feelings feature in tutorial dialogues on diagnostic reasoning in GP traineeship?
Stolper, C F; Van de Wiel, M W J; Hendriks, R H M; Van Royen, P; Van Bokhoven, M A; Van der Weijden, T; Dinant, G J
2015-05-01
Diagnostic reasoning is considered to be based on the interaction between analytical and non-analytical cognitive processes. Gut feelings, a specific form of non-analytical reasoning, play a substantial role in diagnostic reasoning by general practitioners (GPs) and may activate analytical reasoning. In GP traineeships in the Netherlands, trainees mostly see patients alone but regularly consult with their supervisors to discuss patients and problems, receive feedback, and improve their competencies. In the present study, we examined the discussions of supervisors and their trainees about diagnostic reasoning in these so-called tutorial dialogues and how gut feelings feature in these discussions. 17 tutorial dialogues focussing on diagnostic reasoning were video-recorded and transcribed and the protocols were analysed using a detailed bottom-up and iterative content analysis and coding procedure. The dialogues were segmented into quotes. Each quote received a content code and a participant code. The number of words per code was used as a unit of analysis to quantitatively compare the contributions to the dialogues made by supervisors and trainees, and the attention given to different topics. The dialogues were usually analytical reflections on a trainee's diagnostic reasoning. A hypothetico-deductive strategy was often used, by listing differential diagnoses and discussing what information guided the reasoning process and might confirm or exclude provisional hypotheses. Gut feelings were discussed in seven dialogues. They were used as a tool in diagnostic reasoning, inducing analytical reflection, sometimes on the entire diagnostic reasoning process. The emphasis in these tutorial dialogues was on analytical components of diagnostic reasoning. Discussing gut feelings in tutorial dialogues seems to be a good educational method to familiarize trainees with non-analytical reasoning. Supervisors need specialised knowledge about these aspects of diagnostic reasoning and how to deal with them in medical education.
Hilliard, Mark; Alley, William R; McManus, Ciara A; Yu, Ying Qing; Hallinan, Sinead; Gebler, John; Rudd, Pauline M
Glycosylation is an important attribute of biopharmaceutical products to monitor from development through production. However, glycosylation analysis has traditionally been a time-consuming process with long sample preparation protocols and manual interpretation of the data. To address the challenges associated with glycan analysis, we developed a streamlined analytical solution that covers the entire process from sample preparation to data analysis. In this communication, we describe the complete analytical solution that begins with a simplified and fast N-linked glycan sample preparation protocol that can be completed in less than 1 hr. The sample preparation includes labelling with RapiFluor-MS tag to improve both fluorescence (FLR) and mass spectral (MS) sensitivities. Following HILIC-UPLC/FLR/MS analyses, the data are processed and a library search based on glucose units has been included to expedite the task of structural assignment. We then applied this total analytical solution to characterize the glycosylation of the NIST Reference Material mAb 8761. For this glycoprotein, we confidently identified 35 N-linked glycans and all three major classes, high mannose, complex, and hybrid, were present. The majority of the glycans were neutral and fucosylated; glycans featuring N-glycolylneuraminic acid and those with two galactoses connected via an α1,3-linkage were also identified.
Quantifying the measurement uncertainty of results from environmental analytical methods.
Moser, J; Wegscheider, W; Sperka-Gottlieb, C
2001-07-01
The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
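As a hedged illustration of the closing point, the Eurachem/CITAC formalism combines uncertainty contributions in quadrature; when the method reproducibility standard deviation dominates, the estimate reduces essentially to that term. The symbols below are generic, not taken from the paper.

```latex
% Combined and expanded uncertainty of a result x, with s_R the reproducibility
% standard deviation and u_i any remaining individual contributions (sketch only):
u_c(x) \approx \sqrt{\, s_R^2 + \sum_i u_i^2 \,}, \qquad
U = k\, u_c(x), \quad k = 2 \ (\text{approx. } 95\% \text{ coverage}).
```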
Unexpected Analyte Oxidation during Desorption Electrospray Ionization - Mass Spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasilis, Sofie P; Kertesz, Vilmos; Van Berkel, Gary J
2008-01-01
During the analysis of surface-spotted analytes using desorption electrospray ionization mass spectrometry (DESI-MS), abundant ions are sometimes observed that appear to be the result of oxygen addition reactions. In this investigation, the effect of sample aging, the ambient lab environment, spray voltage, analyte surface concentration, and surface type on this oxidative modification of spotted analytes, exemplified by tamoxifen and reserpine, during analysis by desorption electrospray ionization mass spectrometry was studied. Simple exposure of the samples to air and to ambient lighting increased the extent of oxidation. Increased spray voltage also led to increased analyte oxidation, possibly as a result of oxidative species formed electrochemically at the emitter electrode or in the gas phase by discharge processes. These oxidative species are carried by the spray and impinge on and react with the sampled analyte during desorption/ionization. The relative abundance of oxidized species was more significant for analysis of deposited analyte having a relatively low surface concentration. Increasing the spray solvent flow rate and adding hydroquinone as a redox buffer to the spray solvent were found to decrease, but not entirely eliminate, analyte oxidation during analysis. The major parameters that both minimize and maximize analyte oxidation were identified, and DESI-MS operational recommendations to avoid these unwanted reactions are suggested.
Magagna, Federico; Guglielmetti, Alessandro; Liberto, Erica; Reichenbach, Stephen E; Allegrucci, Elena; Gobino, Guido; Bicchi, Carlo; Cordero, Chiara
2017-08-02
This study investigates chemical information of volatile fractions of high-quality cocoa (Theobroma cacao L. Malvaceae) from different origins (Mexico, Ecuador, Venezuela, Colombia, Java, Trinidad, and São Tomé) produced for fine chocolate. This study explores the evolution of the entire pattern of volatiles in relation to cocoa processing (raw, roasted, steamed, and ground beans). Advanced chemical fingerprinting (e.g., combined untargeted and targeted fingerprinting) with comprehensive two-dimensional gas chromatography coupled with mass spectrometry allows advanced pattern recognition for classification, discrimination, and sensory-quality characterization. The entire data set is analyzed for 595 reliable two-dimensional peak regions, including 130 known analytes and 13 potent odorants. Multivariate analysis with unsupervised exploration (principal component analysis) and simple supervised discrimination methods (Fisher ratios and linear regression trees) reveal informative patterns of similarities and differences and identify characteristic compounds related to sample origin and manufacturing step.
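The Fisher-ratio screening mentioned above ranks peak regions by how well they separate sample classes. The sketch below implements the textbook between-class over within-class variance ratio on made-up data; the array shapes only mirror the 595 peak regions and 7 origins and are not the study's data.

```python
# Hedged sketch: Fisher-ratio ranking of 2D peak regions across sample classes.
import numpy as np

def fisher_ratio(x, labels):
    """Between-class variance over within-class variance for one feature."""
    groups = [x[labels == c] for c in np.unique(labels)]
    grand_mean = x.mean()
    between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (len(groups) - 1)
    within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (len(x) - len(groups))
    return between / within

rng = np.random.default_rng(1)
X = rng.normal(size=(21, 595))            # placeholder: 21 samples x 595 peak regions
classes = np.repeat(np.arange(7), 3)      # placeholder: 7 origins, 3 samples each
ratios = np.array([fisher_ratio(X[:, j], classes) for j in range(X.shape[1])])
top_regions = np.argsort(ratios)[::-1][:10]   # most class-discriminating regions
print(top_regions)
```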
Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil
2015-01-01
The medical curriculum is the main tool representing the entire undergraduate medical education. Due to its complexity and multilayered structure it is of limited use to teachers in medical education for quality improvement purposes. In this study we evaluated three visualizations of curriculum data from a pilot course, using teachers from an undergraduate medical program and applying visual analytics methods. We found that visual analytics can be used to positively impacting analytical reasoning and decision making in medical education through the realization of variables capable to enhance human perception and cognition on complex curriculum data. The positive results derived from our evaluation of a medical curriculum and in a small scale, signify the need to expand this method to an entire medical curriculum. As our approach sustains low levels of complexity it opens a new promising direction in medical education informatics research.
Rao, Shalinee; Masilamani, Suresh; Sundaram, Sandhya; Duvuru, Prathiba; Swaminathan, Rajendiran
2016-01-01
Quality monitoring in a histopathology unit is categorized into three phases, pre-analytical, analytical and post-analytical, to cover the various steps of the entire test cycle. A review of the literature on quality evaluation studies pertaining to histopathology revealed that earlier reports mainly focused on analytical aspects, with limited studies assessing the pre-analytical phase. The pre-analytical phase encompasses several processing steps and handling of the specimen/sample by multiple individuals, thus allowing ample scope for errors. Due to its critical nature and the limited studies in the past assessing quality in the pre-analytical phase, it deserves more attention. This study was undertaken to analyse and assess the quality parameters of the pre-analytical phase in a histopathology laboratory. This was a retrospective study of pre-analytical parameters in the histopathology laboratory of a tertiary care centre on 18,626 tissue specimens received in 34 months. Registers and records were checked for efficiency and errors for pre-analytical quality variables: specimen identification, specimen in appropriate fixatives, lost specimens, daily internal quality control performance on staining, performance in an inter-laboratory quality assessment program {External quality assurance program (EQAS)}, and evaluation of internal non-conformities (NC) for other errors. The study revealed incorrect specimen labelling in 0.04%, 0.01% and 0.01% of specimens in 2007, 2008 and 2009 respectively. About 0.04%, 0.07% and 0.18% of specimens were not sent in fixatives in 2007, 2008 and 2009 respectively. There was no incidence of lost specimens. A total of 113 non-conformities were identified, of which 92.9% belonged to the pre-analytical phase. The predominant NC (any deviation from the normal standard which may generate an error and result in compromised quality standards) identified was wrong labelling of slides. Performance in EQAS for the pre-analytical phase was satisfactory in 6 of 9 cycles. The low incidence of errors in the pre-analytical phase implies that a satisfactory level of quality standards was being practised, with still scope for improvement.
Advances in spectroscopic methods for quantifying soil carbon
Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco; Hively, W. Dean
2012-01-01
The current gold standard for soil carbon (C) determination is elemental C analysis using dry combustion. However, this method requires expensive consumables, is limited by the number of samples that can be processed (~100/d), and is restricted to the determination of total carbon. With increased interest in soil C sequestration, faster methods of analysis are needed, and there is growing interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared spectral ranges. These spectral methods can decrease analytical requirements and speed sample processing, be applied to large landscape areas using remote sensing imagery, and be used to predict multiple analytes simultaneously. However, the methods require localized calibrations to establish the relationship between spectral data and reference analytical data, and also have additional, specific problems. For example, remote sensing is capable of scanning entire watersheds for soil carbon content but is limited to the surface layer of tilled soils and may require difficult and extensive field sampling to obtain proper localized calibration reference values. The objective of this chapter is to discuss the present state of spectroscopic methods for determination of soil carbon.
Strategies for dealing with missing data in clinical trials: from design to analysis.
Dziura, James D; Post, Lori A; Zhao, Qing; Fu, Zhixuan; Peduzzi, Peter
2013-09-01
Randomized clinical trials are the gold standard for evaluating interventions as randomized assignment equalizes known and unknown characteristics between intervention groups. However, when participants miss visits, the ability to conduct an intent-to-treat analysis and draw conclusions about a causal link is compromised. As guidance to those performing clinical trials, this review is a non-technical overview of the consequences of missing data and a prescription for its treatment beyond the typical analytic approaches to the entire research process. Examples of bias from incorrect analysis with missing data and discussion of the advantages/disadvantages of analytic methods are given. As no single analysis is definitive when missing data occurs, strategies for its prevention throughout the course of a trial are presented. We aim to convey an appreciation for how missing data influences results and an understanding of the need for careful consideration of missing data during the design, planning, conduct, and analytic stages.
Analytic calculation of 1-jettiness in DIS at O(α_s)
Kang, Daekyoung; Lee, Christopher; Stewart, Iain W.
2014-11-01
We present an analytic O(α_s) calculation of cross sections in deep inelastic scattering (DIS) dependent on an event shape, 1-jettiness, that probes final states with one jet plus initial state radiation. This is the first entirely analytic calculation for a DIS event shape cross section at this order. We present results for the differential and cumulative 1-jettiness cross sections, and express both in terms of structure functions dependent not only on the usual DIS variables x, Q^2 but also on the 1-jettiness τ. Combined with previous results for log resummation, predictions are obtained over the entire range of the 1-jettiness distribution.
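For readers unfamiliar with the observable, a commonly used definition of DIS 1-jettiness is reproduced below; the exact normalization and choice of reference momenta vary between papers, so this should be read as a reminder of the generic form rather than the paper's precise convention.

```latex
% Generic 1-jettiness definition (sketch; conventions differ between papers):
\tau_1 = \frac{2}{Q^2} \sum_{i \in X} \min\{\, q_B \cdot p_i,\; q_J \cdot p_i \,\},
% where the sum runs over final-state hadron momenta p_i, and q_B, q_J are
% beam and jet reference momenta.
```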
A Form 990 Schedule H conundrum: how much of your bad debt might be charity?
Bailey, Shari; Franklin, David; Hearle, Keith
2010-04-01
IRS Form 990 Schedule H requires hospitals to estimate the amount of bad debt expense attributable to patients eligible for charity under the hospital's charity care policy. Responses to Schedule H, Part III.A.3 open up the entire patient collection process to examination by the IRS, state officials, and the public. Using predictive analytics can help hospitals efficiently identify charity-eligible patients when answering Part III.A.3.
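The "predictive analytics" step mentioned above could, for example, score self-pay accounts for likely charity eligibility. The sketch below is one generic way to do that with a logistic model; the features, thresholds, and data are invented for illustration and are not Schedule H guidance or the article's method.

```python
# Hedged sketch: scoring accounts for likely charity eligibility with a
# logistic regression on illustrative features (income proxy, balance, visits).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                         # placeholder account features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) < 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
eligibility_score = model.predict_proba(X)[:, 1]      # probability of charity eligibility
flagged = int(np.sum(eligibility_score > 0.8))        # threshold is an arbitrary example
print(f"{flagged} accounts flagged for charity-care review")
```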
Effective discharge analysis of ecological processes in streams
Doyle, Martin W.; Stanley, Emily H.; Strayer, David L.; Jacobson, Robert B.; Schmidt, John C.
2005-01-01
Discharge is a master variable that controls many processes in stream ecosystems. However, there is uncertainty about which discharges are most important for driving particular ecological processes, and thus about how flow regime may influence entire stream ecosystems. Here the analytical method of effective discharge from fluvial geomorphology is used to analyze the interaction between frequency and magnitude of discharge events that drive organic matter transport, algal growth, nutrient retention, macroinvertebrate disturbance, and habitat availability. We quantify the ecological effective discharge using a synthesis of previously published studies and modeling from a range of study sites. An analytical expression is then developed for a particular case of ecological effective discharge and is used to explore how effective discharge varies within variable hydrologic regimes. Our results suggest that a range of discharges is important for different ecological processes in an individual stream. Discharges are not equally important; instead, effective discharge values exist that correspond to near modal flows and moderate floods for the variable sets examined. We suggest four types of ecological response to discharge variability: discharge as a transport mechanism, regulator of habitat, process modulator, and disturbance. Effective discharge analysis will perform well when there is a unique, essentially instantaneous relationship between discharge and an ecological process, and poorly when effects of discharge are delayed or confounded by legacy effects. Despite some limitations, the conceptual and analytical utility of effective discharge analysis allows exploration of general questions about how hydrologic variability influences various ecological processes in streams.
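The classic effective-discharge calculation finds the discharge class that maximizes the product of flow frequency and a process rating curve. The sketch below shows that numerically; the log-normal flow distribution and power-law rating are assumed forms for illustration, not the paper's fitted expressions.

```python
# Hedged sketch: effective discharge as the maximizer of frequency x magnitude.
import numpy as np
from scipy import stats

Q = np.linspace(0.1, 100, 1000)                     # discharge (m^3/s)
flow_pdf = stats.lognorm(s=1.0, scale=5.0).pdf(Q)   # assumed flow-frequency distribution
rating = 0.01 * Q ** 1.5                            # assumed process rating curve

effectiveness = flow_pdf * rating                   # contribution of each discharge class
Q_eff = Q[np.argmax(effectiveness)]
print(f"effective discharge ~ {Q_eff:.1f} m^3/s")
```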
Camporese, Alessandro
2004-06-01
The diagnosis of infectious diseases and the role of the microbiology laboratory are currently undergoing a process of change. The need for overall efficiency in providing results is now given the same importance as accuracy. This means that laboratories must be able to produce quality results in less time with the capacity to interpret the results clinically. To improve the clinical impact of microbiology results, the new challenge facing the microbiologist has become one of process management instead of pure analysis. A proper project management process designed to improve workflow, reduce analytical time, and provide the same high quality results without losing valuable time treating the patient, has become essential. Our objective was to study the impact of introducing automation and computerization into the microbiology laboratory, and the reorganization of the laboratory workflow, i.e. scheduling personnel to work shifts covering both the entire day and the entire week. In our laboratory, the introduction of automation and computerization, as well as the reorganization of personnel, thus the workflow itself, has resulted in an improvement in response time and greater efficiency in diagnostic procedures.
2010-08-01
students conducting the data capture and data entry, an analytical method known as the Task Load Index (NASA TLX Version 2.0) was used. This method was...published by the NASA Ames Research Center in December 2003. The entire report can be found at: http://humansystems.arc.nasa.gov/groups/TLX. The...completion of each task in the survey process, surveyors were required to complete a NASA TLX form to report their assessment of the workload for
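For context, the standard NASA TLX procedure combines six subscale ratings with pairwise-comparison weights into a single workload score. The sketch below uses made-up ratings and weights purely to show the arithmetic; it is not data from the report excerpted above.

```python
# Hedged sketch of the standard weighted NASA TLX score (illustrative inputs).
ratings = {  # 0-100 ratings on the six TLX subscales
    "mental": 70, "physical": 30, "temporal": 55,
    "performance": 40, "effort": 65, "frustration": 35,
}
weights = {  # pairwise-comparison tallies; the six weights sum to 15
    "mental": 4, "physical": 1, "temporal": 3,
    "performance": 2, "effort": 4, "frustration": 1,
}
overall = sum(ratings[k] * weights[k] for k in ratings) / 15
print(f"overall weighted workload = {overall:.1f}")
```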
Reconfigurable silicon thermo-optical device based on spectral tuning of ring resonators.
Fegadolli, William S; Almeida, Vilson R; Oliveira, José Edimar Barbosa
2011-06-20
A novel tunable and reconfigurable thermo-optical device is theoretically proposed and analyzed in this paper. The device is designed to be entirely compatible with the CMOS process and to work as a thermo-optical filter or modulator. Numerical results, obtained by means of analytical and finite-difference time-domain (FDTD) methods, show that a compact device enables broad-bandwidth operation of up to 830 GHz, which allows the device to work under a large temperature variation of up to 96 K.
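A common first-order estimate for thermal tuning of a ring resonance, which is not taken from this paper but helps relate the temperature range to the spectral shift, is:

```latex
% Hedged first-order thermo-optic tuning estimate (generic, not the paper's model):
\Delta\lambda \;\approx\; \frac{\lambda}{n_g}\,\frac{dn}{dT}\,\Delta T,
% with n_g the group index and dn/dT the thermo-optic coefficient of silicon.
```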
Perfect relativistic magnetohydrodynamics around black holes in horizon penetrating coordinates
NASA Astrophysics Data System (ADS)
Cherubini, Christian; Filippi, Simonetta; Loppini, Alessandro; Moradi, Rahim; Ruffini, Remo; Wang, Yu; Xue, She-Sheng
2018-03-01
Plasma accretion processes onto black holes represent a central problem of relativistic astrophysics. In this context, we revisit the classical Ruffini-Wilson work, which analytically modeled, via geodesic equations, the accretion of perfect magnetized plasma onto a rotating Kerr black hole. Introducing the horizon-penetrating coordinates found by Doran 25 years later, we revisit the entire approach, studying the Maxwell invariants, electric and magnetic fields, volumetric charge density, and total electromagnetic energy. We finally discuss the physical implications of this analysis.
Approximate Model of Zone Sedimentation
NASA Astrophysics Data System (ADS)
Dzianik, František
2011-12-01
The process of zone sedimentation is affected by many factors that cannot be expressed analytically. For this reason, zone settling is evaluated in practice experimentally or by application of an empirical mathematical description of the process. The paper presents the development of an approximate model of zone settling, i.e. a general function which should properly approximate the behaviour of the settling process over its entire range and under various conditions. Furthermore, the specification of the model parameters by regression analysis of settling test results is shown. The suitability of the model is assessed by graphical dependencies and by statistical coefficients of correlation. The approximate model could also be useful in simplifying the process design of continuous settling tanks and thickeners.
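The parameter-fitting step described above amounts to nonlinear regression of a settling function against interface-height data. The sketch below fits an assumed exponential settling form with least squares; both the functional form and the data points are illustrative, not the model proposed in the paper.

```python
# Hedged sketch: fitting an assumed settling function to zone-settling test data.
import numpy as np
from scipy.optimize import curve_fit

def settling_height(t, h_inf, h0, k):
    """Interface height vs. time: exponential approach to the compression height."""
    return h_inf + (h0 - h_inf) * np.exp(-k * t)

t_data = np.array([0, 5, 10, 20, 40, 60, 90], dtype=float)          # min (example)
h_data = np.array([1.00, 0.78, 0.62, 0.43, 0.28, 0.24, 0.22])        # m (example)

popt, _ = curve_fit(settling_height, t_data, h_data, p0=(0.2, 1.0, 0.05))
residuals = h_data - settling_height(t_data, *popt)
r_squared = 1 - np.sum(residuals**2) / np.sum((h_data - h_data.mean())**2)
print("fitted parameters:", popt, " R^2:", round(r_squared, 3))
```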
NASA Astrophysics Data System (ADS)
Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere
2006-02-01
Running a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases, acquiring content and images, composing the materials, meeting the sponsoring enterprise's brand standards, driving through production and fulfillment, and evaluating results is currently performed by experienced, highly trained staff. Presented is a developed solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user could easily run a successful campaign from their desktop. This paper presents the technologies, structure, and process flows used to bring this system together. Highlighted is how the complexity of running a targeted campaign is hidden from the user through technologies, all while providing the benefits of a professionally managed campaign.
Capacity Enablers and Barriers for Learning Analytics: Implications for Policy and Practice
ERIC Educational Resources Information Center
Wolf, Mary Ann; Jones, Rachel; Hall, Sara; Wise, Bob
2014-01-01
The field of learning analytics is being discussed in many circles as an emerging concept in education. In many districts and states, the core philosophy behind learning analytics is not entirely new; for more than a decade, discussions of data-driven decision making and the use of data to drive instruction have been common. Still, the U.S.…
Strategies for Dealing with Missing Data in Clinical Trials: From Design to Analysis
Dziura, James D.; Post, Lori A.; Zhao, Qing; Fu, Zhixuan; Peduzzi, Peter
2013-01-01
Randomized clinical trials are the gold standard for evaluating interventions as randomized assignment equalizes known and unknown characteristics between intervention groups. However, when participants miss visits, the ability to conduct an intent-to-treat analysis and draw conclusions about a causal link is compromised. As guidance to those performing clinical trials, this review is a non-technical overview of the consequences of missing data and a prescription for its treatment beyond the typical analytic approaches to the entire research process. Examples of bias from incorrect analysis with missing data and discussion of the advantages/disadvantages of analytic methods are given. As no single analysis is definitive when missing data occurs, strategies for its prevention throughout the course of a trial are presented. We aim to convey an appreciation for how missing data influences results and an understanding of the need for careful consideration of missing data during the design, planning, conduct, and analytic stages. PMID:24058309
Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses.
Montenegro-Burke, J Rafael; Phommavongsay, Thiery; Aisporna, Aries E; Huan, Tao; Rinehart, Duane; Forsberg, Erica; Poole, Farris L; Thorgersen, Michael P; Adams, Michael W W; Krantz, Gregory; Fields, Matthew W; Northen, Trent R; Robbins, Paul D; Niedernhofer, Laura J; Lairson, Luke; Benton, H Paul; Siuzdak, Gary
2016-10-04
Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. To meet the demands of the rapidly growing field of metabolomics and the heavy workload of data processing, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real-time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The XCMS Mobile and METLIN Mobile functionalities were developed and are demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism.
Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses
2016-01-01
Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. To meet the demands of the rapidly growing field of metabolomics and the heavy workload of data processing, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real-time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The XCMS Mobile and METLIN Mobile functionalities were developed and are demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism. PMID:27560777
Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses
Montenegro-Burke, J. Rafael; Phommavongsay, Thiery; Aisporna, Aries E.; ...
2016-08-25
Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. To meet the demands of the rapidly growing field of metabolomics and the heavy workload of data processing, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real-time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The XCMS Mobile and METLIN Mobile functionalities were developed and are demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism.
Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montenegro-Burke, J. Rafael; Phommavongsay, Thiery; Aisporna, Aries E.
Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. To meet the demands of the rapidly growing field of metabolomics and the heavy workload of data processing, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real-time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The XCMS Mobile and METLIN Mobile functionalities were developed and are demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism.
Bonte, W; Bonte, I
1989-01-01
In 1985 we reported on the usefulness of a simple home computer (here: a Commodore C64) for scientific work. This paper demonstrates that such an instrument can also be an appropriate tool for the entire accountancy of a medicolegal institute. Presented are self-designed programs which deal with the following matters: compilation of monthly performance reports, calculation of services for clinical care, typing of analytical results and brief interpretations, typing of liquidations, and clearing of proceeds from written expertises and autopsies against administration and staff.
Thermodynamics of Quantum Gases for the Entire Range of Temperature
ERIC Educational Resources Information Center
Biswas, Shyamal; Jana, Debnarayan
2012-01-01
We have analytically explored the thermodynamics of free Bose and Fermi gases for the entire range of temperature, and have extended the same for harmonically trapped cases. We have obtained approximate chemical potentials for the quantum gases in closed forms of temperature so that the thermodynamic properties of the quantum gases become…
Alon, Sigal
2015-07-01
This study demonstrates the analytical leverage gained from considering the entire college pipeline, including the application, admission, and graduation stages, in examining the economic position of various groups upon labor market entry. The findings, based on data from three elite universities in Israel, reveal that the process that shapes economic inequality between different ethnic and immigrant groups is not necessarily cumulative. Field of study stratification does not expand systematically from stage to stage, and the position of groups in the field of study hierarchy at each stage is not entirely explained by academic preparation. Differential selection and attrition processes, as well as ambition and aspirations, also shape the position of ethnic groups in the earnings hierarchy and generate a non-cumulative pattern. These findings suggest that a cross-sectional assessment of field of study inequality at the graduation stage can generate misleading conclusions about group-based economic inequality among workers with a bachelor's degree. Copyright © 2015 Elsevier Inc. All rights reserved.
Quality of Big Data in health care.
Sukumar, Sreenivas R; Natarajan, Ramachandran; Ferrell, Regina K
2015-01-01
The current trend in Big Data analytics, and in particular health information technology, is toward building sophisticated models, methods and tools for business, operational and clinical intelligence. However, the critical issue of the data quality required for these models is not getting the attention it deserves. The purpose of this paper is to highlight the issues of data quality in the context of Big Data health care analytics. The insights presented in this paper are the results of analytics work that was done in different organizations on a variety of health data sets. The data sets include Medicare and Medicaid claims, provider enrollment data sets from both public and private sources, and electronic health records from regional health centers accessed through partnerships with health care claims processing entities under health privacy protection guidelines. Assessment of data quality in health care has to consider: first, the entire lifecycle of health data; second, problems arising from errors and inaccuracies in the data itself; third, the source(s) and the pedigree of the data; and fourth, how the underlying purpose of data collection impacts the analytic processing and the knowledge expected to be derived. Automation in the form of data handling, storage, entry and processing technologies is to be viewed as a double-edged sword: at one level, automation can be a good solution, while at another level it can create a different set of data quality issues. Implementation of health care analytics with Big Data is enabled by a road map that addresses the organizational and technological aspects of data quality assurance. The value derived from the use of analytics should be the primary determinant of data quality. Based on this premise, health care enterprises embracing Big Data should have a road map for a systematic approach to data quality. Health care data quality problems can be so specific that organizations might have to build their own custom software or data quality rule engines. Today, data quality issues are diagnosed and addressed in a piece-meal fashion. The authors recommend a data lifecycle approach and provide a road map that is more appropriate to the dimensions of Big Data and fits different stages of the analytical workflow.
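To make the "custom data quality rule engine" idea concrete, the sketch below runs a handful of declarative checks over claims-like records. The field names, rules, and records are invented for illustration and are not taken from the data sets described above.

```python
# Hedged sketch: a minimal data quality rule pass over claims-like records.
from datetime import date

rules = [
    ("missing provider id", lambda r: not r.get("provider_id")),
    ("discharge before admission", lambda r: r["discharge"] < r["admission"]),
    ("negative claim amount", lambda r: r["amount"] < 0),
]

records = [  # illustrative records only
    {"provider_id": "P01", "admission": date(2015, 1, 3), "discharge": date(2015, 1, 8), "amount": 1200.0},
    {"provider_id": "",    "admission": date(2015, 2, 1), "discharge": date(2015, 1, 30), "amount": -50.0},
]

for i, rec in enumerate(records):
    failures = [name for name, check in rules if check(rec)]
    if failures:
        print(f"record {i} failed: {failures}")
```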
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Christos Nikolaos; Vovoli, Eftichia
An emotion recognition framework based on sound processing could improve services in human-computer interaction. Various quantitative speech features obtained from sound processing of acted speech were tested as to whether they are sufficient to discriminate between seven emotions. Multilayer perceptrons were trained to classify gender and emotions on the basis of a 24-input vector, which provides information about the prosody of the speaker over the entire sentence using statistics of sound features. Several experiments were performed and the results are presented analytically. Emotion recognition was successful when speakers and utterances were “known” to the classifier. However, severe misclassifications occurred in the utterance-independent framework. Nevertheless, the proposed feature vector achieved promising results for utterance-independent recognition of high- and low-arousal emotions.
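The classification step can be sketched with a small multilayer perceptron trained on 24 prosodic statistics per utterance; the feature matrix and labels below are random placeholders, not the corpus used in the study, and the network size is an assumption.

```python
# Hedged sketch: MLP classifying seven emotions from a 24-feature prosody vector.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 24))        # placeholder: 24 prosody statistics per sentence
y = rng.integers(0, 7, size=700)      # placeholder: 7 emotion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print(f"held-out accuracy: {mlp.score(X_te, y_te):.2f}")
```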
Do Premarital Education Programs Really Work? A Meta-Analytic Study
ERIC Educational Resources Information Center
Fawcett, Elizabeth B.; Hawkins, Alan J.; Blanchard, Victoria L.; Carroll, Jason S.
2010-01-01
Previous studies (J. S. Carroll & W. J. Doherty, 2003) have asserted that premarital education programs have a positive effect on program participants. Using meta-analytic methods of current best practices to look across the entire body of published and unpublished evaluation research on premarital education, we found a more complex pattern of…
Big data, big knowledge: big data for personalized healthcare.
Viceconti, Marco; Hunter, Peter; Hose, Rod
2015-07-01
The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.
Collective relaxation dynamics of small-world networks
NASA Astrophysics Data System (ADS)
Grabow, Carsten; Grosskinsky, Stefan; Kurths, Jürgen; Timme, Marc
2015-05-01
Complex networks exhibit a wide range of collective dynamic phenomena, including synchronization, diffusion, relaxation, and coordination processes. Their asymptotic dynamics is generically characterized by the local Jacobian, graph Laplacian, or a similar linear operator. The structure of networks with regular, small-world, and random connectivities are reasonably well understood, but their collective dynamical properties remain largely unknown. Here we present a two-stage mean-field theory to derive analytic expressions for network spectra. A single formula covers the spectrum from regular via small-world to strongly randomized topologies in Watts-Strogatz networks, explaining the simultaneous dependencies on network size N , average degree k , and topological randomness q . We present simplified analytic predictions for the second-largest and smallest eigenvalue, and numerical checks confirm our theoretical predictions for zero, small, and moderate topological randomness q , including the entire small-world regime. For large q of the order of one, we apply standard random matrix theory, thereby overarching the full range from regular to randomized network topologies. These results may contribute to our analytic and mechanistic understanding of collective relaxation phenomena of network dynamical systems.
Collective relaxation dynamics of small-world networks.
Grabow, Carsten; Grosskinsky, Stefan; Kurths, Jürgen; Timme, Marc
2015-05-01
Complex networks exhibit a wide range of collective dynamic phenomena, including synchronization, diffusion, relaxation, and coordination processes. Their asymptotic dynamics is generically characterized by the local Jacobian, graph Laplacian, or a similar linear operator. The structure of networks with regular, small-world, and random connectivities are reasonably well understood, but their collective dynamical properties remain largely unknown. Here we present a two-stage mean-field theory to derive analytic expressions for network spectra. A single formula covers the spectrum from regular via small-world to strongly randomized topologies in Watts-Strogatz networks, explaining the simultaneous dependencies on network size N, average degree k, and topological randomness q. We present simplified analytic predictions for the second-largest and smallest eigenvalue, and numerical checks confirm our theoretical predictions for zero, small, and moderate topological randomness q, including the entire small-world regime. For large q of the order of one, we apply standard random matrix theory, thereby overarching the full range from regular to randomized network topologies. These results may contribute to our analytic and mechanistic understanding of collective relaxation phenomena of network dynamical systems.
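A simple numerical check of the kind reported above can be run directly on Watts-Strogatz graphs: compute the Laplacian spectrum and watch the second-smallest and largest eigenvalues change with the rewiring probability q. The parameter values below are illustrative, not those of the paper.

```python
# Hedged sketch: Laplacian spectrum of Watts-Strogatz networks vs. rewiring q.
import networkx as nx
import numpy as np

N, k = 500, 10
for q in (0.0, 0.05, 0.3, 1.0):
    G = nx.watts_strogatz_graph(N, k, q, seed=1)
    L = nx.laplacian_matrix(G).toarray().astype(float)
    eig = np.sort(np.linalg.eigvalsh(L))
    # eig[1] (smallest nonzero) and eig[-1] govern the slowest/fastest relaxation modes
    print(f"q={q:4.2f}  lambda_2={eig[1]:.3f}  lambda_max={eig[-1]:.3f}")
```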
13C-based metabolic flux analysis: fundamentals and practice.
Yang, Tae Hoon
2013-01-01
Isotope-based metabolic flux analysis is one of the emerging technologies applied to system level metabolic phenotype characterization in metabolic engineering. Among the developed approaches, 13C-based metabolic flux analysis has been established as a standard tool and has been widely applied to quantitative pathway characterization of diverse biological systems. To implement 13C-based metabolic flux analysis in practice, comprehending the underlying mathematical and computational modeling fundamentals is of importance along with carefully conducted experiments and analytical measurements. Such knowledge is also crucial when designing 13C-labeling experiments and properly acquiring key data sets essential for in vivo flux analysis implementation. In this regard, the modeling fundamentals of 13C-labeling systems and analytical data processing are the main topics we will deal with in this chapter. Along with this, the relevant numerical optimization techniques are addressed to help implementation of the entire computational procedures aiming at 13C-based metabolic flux analysis in vivo.
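At its core, the numerical optimization mentioned above fits free fluxes so that simulated labeling patterns match measured mass-isotopomer data. The toy two-flux model and measurements below are assumptions made purely to show the least-squares structure, not a real metabolic network.

```python
# Hedged toy sketch of the flux-fitting step in 13C metabolic flux analysis.
import numpy as np
from scipy.optimize import least_squares

measured = np.array([0.62, 0.28, 0.10])   # illustrative mass-isotopomer fractions

def simulate_labeling(fluxes):
    """Toy forward model: labeling pattern as a mixture set by the flux split."""
    v1, v2 = fluxes
    split = v1 / (v1 + v2)
    return np.array([0.80 * split + 0.30 * (1 - split),
                     0.15 * split + 0.50 * (1 - split),
                     0.05 * split + 0.20 * (1 - split)])

def residuals(fluxes):
    return simulate_labeling(fluxes) - measured

fit = least_squares(residuals, x0=[1.0, 1.0], bounds=(1e-6, np.inf))
print("estimated fluxes (ratio is what matters here):", fit.x)
```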
Forecasting Significant Societal Events Using The Embers Streaming Predictive Analytics System
Katz, Graham; Summers, Kristen; Ackermann, Chris; Zavorin, Ilya; Lim, Zunsik; Muthiah, Sathappan; Butler, Patrick; Self, Nathan; Zhao, Liang; Lu, Chang-Tien; Khandpur, Rupinder Paul; Fayed, Youssef; Ramakrishnan, Naren
2014-01-01
Developed under the Intelligence Advanced Research Project Activity Open Source Indicators program, Early Model Based Event Recognition using Surrogates (EMBERS) is a large-scale big data analytics system for forecasting significant societal events, such as civil unrest events on the basis of continuous, automated analysis of large volumes of publicly available data. It has been operational since November 2012 and delivers approximately 50 predictions each day for countries of Latin America. EMBERS is built on a streaming, scalable, loosely coupled, shared-nothing architecture using ZeroMQ as its messaging backbone and JSON as its wire data format. It is deployed on Amazon Web Services using an entirely automated deployment process. We describe the architecture of the system, some of the design tradeoffs encountered during development, and specifics of the machine learning models underlying EMBERS. We also present a detailed prospective evaluation of EMBERS in forecasting significant societal events in the past 2 years. PMID:25553271
Forecasting Significant Societal Events Using The Embers Streaming Predictive Analytics System.
Doyle, Andy; Katz, Graham; Summers, Kristen; Ackermann, Chris; Zavorin, Ilya; Lim, Zunsik; Muthiah, Sathappan; Butler, Patrick; Self, Nathan; Zhao, Liang; Lu, Chang-Tien; Khandpur, Rupinder Paul; Fayed, Youssef; Ramakrishnan, Naren
2014-12-01
Developed under the Intelligence Advanced Research Project Activity Open Source Indicators program, Early Model Based Event Recognition using Surrogates (EMBERS) is a large-scale big data analytics system for forecasting significant societal events, such as civil unrest events on the basis of continuous, automated analysis of large volumes of publicly available data. It has been operational since November 2012 and delivers approximately 50 predictions each day for countries of Latin America. EMBERS is built on a streaming, scalable, loosely coupled, shared-nothing architecture using ZeroMQ as its messaging backbone and JSON as its wire data format. It is deployed on Amazon Web Services using an entirely automated deployment process. We describe the architecture of the system, some of the design tradeoffs encountered during development, and specifics of the machine learning models underlying EMBERS. We also present a detailed prospective evaluation of EMBERS in forecasting significant societal events in the past 2 years.
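To make the "ZeroMQ messaging backbone, JSON wire format" design concrete, here is an illustrative-only Python sketch of a loosely coupled publisher/subscriber pair; the endpoint, port, and message fields are hypothetical and are not taken from EMBERS.

```python
# Illustrative pub/sub pattern with JSON messages over ZeroMQ (pyzmq).
# Endpoint and message schema are hypothetical, not EMBERS internals.
import zmq

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")                      # hypothetical endpoint
pub.send_json({"type": "warning", "country": "AR", "probability": 0.72})

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "")      # subscribe to all topics
# msg = sub.recv_json()                       # blocks until a message arrives
```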
OpenHealth Platform for Interactive Contextualization of Population Health Open Data.
Almeida, Jonas S; Hajagos, Janos; Crnosija, Ivan; Kurc, Tahsin; Saltz, Mary; Saltz, Joel
The financial incentives for data science applications leading to improved health outcomes, such as DSRIP (bit.ly/dsrip), are well aligned with the broad adoption of Open Data by State and Federal agencies. This creates entirely novel opportunities for analytical applications that make exclusive use of the pervasive Web computing platform. The framework described here explores this new avenue to contextualize health data in a manner that relies exclusively on the native JavaScript interpreter and data-processing resources of the ubiquitous Web browser. The OpenHealth platform is open source and publicly hosted with version control at https://github.com/mathbiol/openHealth. The different data/analytics workflow architectures explored are accompanied by live applications ranging from DSRIP, such as Hospital Inpatient Prevention Quality Indicators at http://bit.ly/pqiSuffolk, to The Cancer Genome Atlas (TCGA), as illustrated by http://bit.ly/tcgascopeGBM.
A Versatile High-Vacuum Cryo-transfer System for Cryo-microscopy and Analytics
Tacke, Sebastian; Krzyzanek, Vladislav; Nüsse, Harald; Wepf, Roger Albert; Klingauf, Jürgen; Reichelt, Rudolf
2016-01-01
Cryogenic microscopy methods have gained increasing popularity, as they offer an unaltered view on the architecture of biological specimens. As a prerequisite, samples must be handled under cryogenic conditions below their recrystallization temperature, and contamination during sample transfer and handling must be prevented. We present a high-vacuum cryo-transfer system that streamlines the entire handling of frozen-hydrated samples from the vitrification process to low temperature imaging for scanning transmission electron microscopy and transmission electron microscopy. A template for cryo-electron microscopy and multimodal cryo-imaging approaches with numerous sample transfer steps is presented. PMID:26910419
Use of MSC/NASTRAN for the thermal analysis of the Space Shuttle Orbiter braking system
NASA Technical Reports Server (NTRS)
Shu, James; Mccann, David
1987-01-01
A description is given of the thermal modeling and analysis effort being conducted to investigate the transient temperature and thermal stress characteristics of the Space Shuttle Orbiter brake components and subsystems. Models are constructed of the brake stator as well as of the entire brake assembly to analyze the temperature distribution and thermal stress during the landing and braking process. These investigations are carried out on a UNIVAC computer system with MSC/NASTRAN Version 63. Analytical results and solution methods are presented and comparisons are made with SINDA results.
Triangular dislocation: an analytical, artefact-free solution
NASA Astrophysics Data System (ADS)
Nikkhoo, Mehdi; Walter, Thomas R.
2015-05-01
Displacements and stress-field changes associated with earthquakes, volcanoes, landslides and human activity are often simulated using numerical models in an attempt to understand the underlying processes and their governing physics. The application of elastic dislocation theory to these problems, however, may be biased because of numerical instabilities in the calculations. Here, we present a new method that is free of artefact singularities and numerical instabilities in analytical solutions for triangular dislocations (TDs) in both full-space and half-space. We apply the method to both the displacement and the stress fields. The entire 3-D Euclidean space R^3 is divided into two complementary subspaces, in the sense that in each one, a particular analytical formulation fulfils the requirements for the ideal, artefact-free solution for a TD. The primary advantage of the presented method is that the development of our solutions involves neither numerical approximations nor series expansion methods. As a result, the final outputs are independent of the scale of the input parameters, including the size and position of the dislocation as well as its corresponding slip vector components. Our solutions are therefore well suited for application at various scales in geoscience, physics and engineering. We validate the solutions through comparison to other well-known analytical methods and provide the MATLAB codes.
Channel characteristics and coordination in three-echelon dual-channel supply chain
NASA Astrophysics Data System (ADS)
Saha, Subrata
2016-02-01
We explore the impact of channel structure on the manufacturer, the distributor, the retailer and the entire supply chain by considering three different channel structures, both with and without coordination. These structures include a traditional retail channel and two manufacturer direct channels, with and without consistent pricing. By comparing the performance of the manufacturer, the distributor, the retailer and the entire supply chain in the three different supply chain structures, it is established analytically that, under some conditions, a dual channel can outperform a single retail channel; as a consequence, a coordination mechanism is developed that not only coordinates the dual channel but also outperforms the non-cooperative single retail channel. All the analytical results are further analysed through numerical examples.
Naidu, Venkata Ramana; Deshpande, Rucha S; Syed, Moinuddin R; Wakte, Pravin S
2018-07-01
A direct imaging system (Eyecon™) was used as a Process Analytical Technology (PAT) tool to monitor a fluid bed coating process. Eyecon™ generated real-time on-screen images and particle size and shape information for two identically manufactured laboratory-scale batches. Eyecon™ measures particle size increases with an accuracy of ±1 μm for particles in the size range of 50-3000 μm, and it captured data every 2 s during the entire process. The moving average of the D90 particle size values recorded by Eyecon™ was calculated every 30 min to estimate the radial coating thickness of the coated particles. After completion of the coating process, the radial coating thickness was found to be 11.3 and 9.11 μm, with standard deviations of ±0.68 and 1.8 μm for Batch 1 and Batch 2, respectively. The coating thickness was also correlated with percent weight build-up by gel permeation chromatography (GPC) and dissolution. GPC indicated weight build-ups of 10.6% and 9.27% for Batch 1 and Batch 2, respectively. In conclusion, a weight build-up of 10% can be correlated with a 10 ± 2 μm increase in the coating thickness of pellets, indicating the potential applicability of real-time imaging as an endpoint determination tool for the fluid bed coating process.
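The thickness estimate described above (a roughly 30-min moving average of D90 and half the diameter growth) reduces to a few lines of arithmetic. The sketch below is a hedged illustration with a hypothetical window size and input series, not the vendor's algorithm.

```python
# Hedged sketch of the arithmetic implied above: a ~30-min moving average of
# D90 readings (captured every 2 s) and the radial coating thickness taken as
# half the diameter growth. Window size and input data are hypothetical.
import numpy as np

def radial_coating_thickness(d90_series_um, window=900):
    """d90_series_um: sequence of D90 readings in micrometres, one per 2 s.
    window=900 readings corresponds to roughly a 30-min moving average."""
    kernel = np.ones(window) / window
    smooth = np.convolve(d90_series_um, kernel, mode="valid")
    return (smooth[-1] - smooth[0]) / 2.0   # diameter growth / 2 = radial thickness
```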
Dried Blood Spots - Preparing and Processing for Use in Immunoassays and in Molecular Techniques
Grüner, Nico; Stambouli, Oumaima; Ross, R. Stefan
2015-01-01
The idea of collecting blood on a paper card and subsequently using the dried blood spots (DBS) for diagnostic purposes originated a century ago. Since then, DBS testing has for decades remained predominantly focused on the diagnosis of infectious diseases, especially in resource-limited settings, or the systematic screening of newborns for inherited metabolic disorders, and only recently have a variety of new and innovative DBS applications begun to emerge. For many years, pre-analytical variables were given insufficient consideration in the field of DBS testing, and even today, with the exception of newborn screening, the entire pre-analytical phase, which comprises the preparation and processing of DBS for their final analysis, has not been standardized. Given this background, a comprehensive step-by-step protocol, which covers all the essential phases, is proposed, i.e., collection of blood; preparation of blood spots; drying of blood spots; storage and transportation of DBS; elution of DBS; and finally analyses of DBS eluates. The effectiveness of this protocol was first evaluated with 1,762 coupled serum/DBS pairs for detecting markers of hepatitis B virus, hepatitis C virus, and human immunodeficiency virus infections on an automated analytical platform. In a second step, the protocol was utilized during a pilot study, which was conducted on active drug users in the German cities of Berlin and Essen. PMID:25867233
NASA Astrophysics Data System (ADS)
Parshin, D. A.
2017-09-01
We study the processes of additive formation of spherically shaped rigid bodies due to the uniform accretion of additional matter to their surface in an arbitrary centrally symmetric force field. A special case of such a field can be the gravitational or electrostatic force field. We consider the elastic deformation of the formed body. The body is assumed to be isotropic with elastic moduli arbitrarily varying along the radial coordinate. We assume that arbitrary initial circular stresses can arise in the additional material added to the body in the process of its formation. In the framework of linear mechanics of growing bodies, the mathematical model of the processes under study is constructed in the quasistatic approximation. The boundary value problems describing the development of stress-strain state of the object under study before the beginning of the process and during the entire process of its formation are posed. The closed analytic solutions of the posed problems are constructed by quadratures for some general types of material inhomogeneity. Important typical characteristics of the mechanical behavior of spherical bodies additively formed in the central force field are revealed. These characteristics substantially distinguish such bodies from the already completely composed bodies similar in dimensions and properties which are placed in the force field and are described by problems of mechanics of deformable solids in the classical statement disregarding the mechanical aspects of additive processes.
NASA Astrophysics Data System (ADS)
Sheu, R.; Marcotte, A.; Khare, P.; Ditto, J.; Charan, S.; Gentner, D. R.
2017-12-01
Intermediate-volatility and semi-volatile organic compounds (I/SVOCs) are major precursors to secondary organic aerosol, and contribute to tropospheric ozone formation. Their wide volatility range, chemical complexity, behavior in analytical systems, and trace concentrations present numerous hurdles to characterization. We present an integrated sampling-to-analysis system for the collection and offline analysis of trace gas-phase organic compounds with the goal of preserving and recovering analytes throughout sample collection, transport, storage, and thermal desorption for accurate analysis. Custom multi-bed adsorbent tubes are used to collect samples for offline analysis by advanced analytical detectors. The analytical instrumentation comprises an automated thermal desorption system that introduces analytes from the adsorbent tubes into a gas chromatograph, which is coupled with an electron ionization mass spectrometer (GC-EIMS) and other detectors. In order to optimize the collection and recovery for a wide range of analyte volatility and functionalization, we evaluated a variety of commercially-available materials, including Res-Sil beads, quartz wool, glass beads, Tenax TA, and silica gel. Key properties for optimization include inertness, versatile chemical capture, minimal affinity for water, and minimal artifacts or degradation byproducts; these properties were assessed with a diverse mix of traditionally-measured and functionalized analytes. Along with a focus on material selection, we provide recommendations spanning the entire sampling-and-analysis process to improve the accuracy of future comprehensive I/SVOC measurements, including oxygenated and other functionalized I/SVOCs. We demonstrate the performance of our system by providing results on speciated VOCs-SVOCs from indoor, outdoor, and chamber studies that establish the utility of our protocols and pave the way for precise laboratory characterization via a mix of detection methods.
Archetypes as action patterns.
Hogenson, George B
2009-06-01
The discovery of mirror neurons by researchers at the University of Parma promises to radically alter our understanding of fundamental cognitive and affective states. This paper explores the relationship of mirror neurons to Jung's theory of archetypes and proposes that archetypes may be viewed as elementary action patterns. The paper begins with a review of a proposed interpretation of the fainting spells of S. Freud in his relationship with Jung as an example of an action pattern that also defines an archetypal image. The challenge that mirror neurons present to traditional views in analytical psychology and psychoanalysis, however, is that they operate without recourse to a cognitive processing element. This is a position that is gaining increasing acceptance in other fields as well. The paper therefore reviews the most recent claims made by the Boston Process of Change Study Group as well as conclusions drawn from dynamic systems views of development and theoretical robotics to underline the conclusion that unconscious agency is not a requirement for coherent action. It concludes with the suggestion that this entire body of research may lead to the conclusion that the dynamic unconscious is an unnecessary hypothesis in psychoanalysis and analytical psychology.
Evolutionary dynamics of public goods games with diverse contributions in finite populations
NASA Astrophysics Data System (ADS)
Wang, Jing; Wu, Bin; Chen, Xiaojie; Wang, Long
2010-05-01
The public goods game is a powerful metaphor for exploring the maintenance of social cooperative behavior in a group of interacting selfish players. Here we study the emergence of cooperation in public goods games with diverse contributions in finite populations. The theory of stochastic processes is innovatively adopted to investigate the evolutionary dynamics of public goods games involving a diversity of contributions. In the limit of rare mutations, the general stationary distribution of this stochastic process can be analytically approximated by means of diffusion theory. Moreover, we demonstrate that increasing the diversity of contributions greatly reduces the probability of finding the population in a homogeneous state full of defectors. This increase also raises the expectation of the total contribution in the entire population and thus promotes social cooperation. Furthermore, by investigating the evolutionary dynamics of optional public goods games with diverse contributions, we find that nonparticipation can assist players who contribute more in resisting invasion and taking over individuals who contribute less. In addition, numerical simulations are performed to confirm our analytical results. Our results may provide insight into the effect of diverse contributions on cooperative behaviors in the real world.
A new tool for the evaluation of the analytical procedure: Green Analytical Procedure Index.
Płotka-Wasylka, J
2018-05-01
A new means for assessing analytical protocols with respect to green analytical chemistry attributes has been developed. The new tool, called GAPI (Green Analytical Procedure Index), evaluates the green character of an entire analytical methodology, from sample collection to final determination, and was created using such tools as the National Environmental Methods Index (NEMI) or the Analytical Eco-Scale to provide not only general but also qualitative information. In GAPI, a specific symbol with five pentagrams is used to evaluate and quantify the environmental impact involved in each step of an analytical methodology, coloured from green through yellow to red to depict low, medium, or high impact, respectively. The proposed tool was used to evaluate analytical procedures applied in the determination of biogenic amines in wine samples, and polycyclic aromatic hydrocarbon determination by EPA methods. The GAPI tool not only provides an immediately perceptible perspective to the user/reader but also offers exhaustive information on the evaluated procedures. Copyright © 2018 Elsevier B.V. All rights reserved.
Monte Carlo shock-like solutions to the Boltzmann equation with collective scattering
NASA Technical Reports Server (NTRS)
Ellison, D. C.; Eichler, D.
1984-01-01
The results of Monte Carlo simulations of steady state shocks generated by a collision operator that isotropizes the particles by means of elastic scattering in some locally defined frame of reference are presented. The simulations include both the back reaction of accelerated particles on the inflowing plasma and the free escape of high-energy particles from finite shocks. Energetic particles are found to be naturally extracted out of the background plasma by the shock process with an efficiency in good quantitative agreement with an earlier analytic approximation (Eichler, 1983 and 1984) and observations (Gosling et al., 1981) of the entire particle spectrum at a quasi-parallel interplanetary shock. The analytic approximation, which allows a self-consistent determination of the effective adiabatic index of the shocked gas, is used to calculate the overall acceleration efficiency and particle spectrum for cases where ultrarelativistic energies are obtained. It is found that shocks of the strength necessary to produce galactic cosmic rays put approximately 15 percent of the shock energy into relativistic particles.
Performance evaluation of the croissant production line with reparable machines
NASA Astrophysics Data System (ADS)
Tsarouhas, Panagiotis H.
2015-03-01
In this study, analytical probability models were developed for an automated, bufferless serial production system consisting of n machines in series with a common transfer mechanism and control system. Both the time to failure and the time to repair a failure are assumed to follow exponential distributions. Applying those models, the effect of system parameters on system performance was studied for an actual croissant production line. The production line consists of six workstations with different numbers of reparable machines in series. Mathematical models of the croissant production line were developed using Markov processes. The strength of this study lies in the classification of the whole system into states representing failures of different machines. Failure and repair data from the actual production environment were used to estimate reliability and maintainability for each machine, each workstation, and the entire line on the basis of the analytical models. The analysis provides useful insight into the system's behaviour, helps to find inherent design faults, and suggests optimal modifications to upgrade the system and improve its performance.
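As a rough illustration of the kind of quantity such Markov models yield, the steady-state availability of a bufferless series line with exponential failure and repair times is commonly approximated by the product of the individual machine availabilities. The sketch below is a hedged illustration; the rates are placeholders, not the paper's data.

```python
# Hedged sketch: product-form approximation of series-line availability
# with exponential failure (lambda) and repair (mu) rates per machine.
def line_availability(rates):
    """rates: iterable of (failure_rate, repair_rate) pairs, one per machine."""
    A = 1.0
    for lam, mu in rates:
        A *= mu / (lam + mu)          # steady-state availability of one machine
    return A

# Illustrative rates only (per hour).
print(line_availability([(0.01, 0.5), (0.02, 0.4), (0.005, 0.6)]))
```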
Miyagawa, Akihisa; Harada, Makoto; Okada, Tetsuo
2018-02-06
We present a novel analytical principle in which an analyte, according to its concentration, induces a change in the density of a microparticle, which is measured as a vertical coordinate in a coupled acoustic-gravitational (CAG) field. The density change is caused by the binding of gold nanoparticles (AuNPs) to a polystyrene (PS) microparticle through avidin-biotin association. The density of a 10-μm PS particle increases by 2% when 500 100-nm AuNPs are bound to it. The CAG field can detect this density change as a 5-10 μm shift of the levitation coordinate of the PS particle. This approach, which allows us to detect 700 AuNPs bound to a PS particle, is utilized to detect biotin in solution. Biotin is detectable at the picomolar level. The reaction kinetics plays a significant role in the entire process. The kinetic aspects are also quantitatively discussed based on the levitation behavior of the PS particles in the CAG field.
Scholarly Information Extraction Is Going to Make a Quantum Leap with PubMed Central (PMC).
Matthies, Franz; Hahn, Udo
2017-01-01
With the increasing availability of complete full texts (journal articles), rather than their surrogates (titles, abstracts), as resources for text analytics, entirely new opportunities arise for information extraction and text mining from scholarly publications. Yet, we gathered evidence that a range of problems is encountered in full-text processing when biomedical text analytics simply reuse existing NLP pipelines that were developed on the basis of abstracts (rather than full texts). We conducted experiments with four different relation extraction engines, all of which were top performers in previous BioNLP Event Extraction Challenges. We found that abstract-trained engines lose up to 6.6 F-score percentage points when run on full-text data. Hence, the reuse of existing abstract-based NLP software in a full-text scenario is considered harmful because of heavy performance losses. Given the current lack of annotated full-text resources to train on, our study quantifies the price paid for this shortcut.
Fey, David L.; Granitto, Matthew; Giles, Stuart A.; Smith, Steven M.; Eppinger, Robert G.; Kelley, Karen D.
2009-01-01
In the summer of 2007, the U.S. Geological Survey (USGS) began an exploration geochemical research study over the Pebble porphyry copper-gold-molybdenum deposit. This report presents the analytical data collected in 2008. The Pebble deposit is world class in size, and is almost entirely concealed by tundra, glacial deposits, and post-Cretaceous volcanic rocks. The Pebble deposit was chosen for this study because it is concealed by surficial cover rocks, is relatively undisturbed (except for exploration company drill holes), is a large mineral system, and is fairly well-constrained at depth by the drill hole geology and geochemistry. The goals of this study are to 1) determine whether the concealed deposit can be detected with surface samples, 2) better understand the processes of metal migration from the deposit to the surface, and 3) test and develop methods for assessing mineral resources in similar concealed terrains. The analytical data are presented as an integrated Microsoft Access 2003 database and as separate Excel files.
NASA Technical Reports Server (NTRS)
Straton, Jack C.
1989-01-01
The class of integrals containing the product of N 1s hydrogenic orbitals and M Coulomb or Yukawa potentials with m plane waves is investigated analytically. The results obtained by Straton (1989) are extended and generalized. It is shown that the dimensionality of the entire class can be reduced from 3m to M+N-1.
Seshadri, Preethi; Manoli, Kyriaki; Schneiderhan-Marra, Nicole; Anthes, Uwe; Wierzchowiec, Piotr; Bonrad, Klaus; Di Franco, Cinzia; Torsi, Luisa
2018-05-01
Herein, a label-free immunosensor based on an electrolyte-gated organic field-effect transistor (EGOFET) was developed for the detection of procalcitonin (PCT), a sepsis marker. Antibodies specific to PCT were immobilized on the poly-3-hexylthiophene (P3HT) organic semiconductor surface through direct physical adsorption, followed by a post-treatment with bovine serum albumin (BSA), which served as the blocking agent to prevent non-specific adsorption. The antibodies together with BSA (forming the whole biorecognition layer) served to selectively capture the procalcitonin target analyte. The entire immunosensor fabrication process was fast, requiring 45 min overall to be completed before analyte sensing. The EGOFET immunosensor showed excellent electrical properties, comparable to those of the bare P3HT-based EGOFET, confirming reliable biosensing with the bio-functional EGOFET immunosensor. The detection limit of the immunosensor was as low as 2.2 pM and within a range of clinical relevance. The relative standard deviation of the individual calibration data points, measured on immunosensors fabricated on different chips (reproducibility error), was below 7%. The developed immunosensor showed high selectivity to the PCT analyte, which was evident from control experiments. This report of PCT detection is the first of its kind among electronic sensors based on EGOFETs. The developed sensor is versatile and compatible with low-cost fabrication techniques. Copyright © 2017 Elsevier B.V. All rights reserved.
An explicit closed-form analytical solution for European options under the CGMY model
NASA Astrophysics Data System (ADS)
Chen, Wenting; Du, Meiyu; Xu, Xiang
2017-01-01
In this paper, we consider the analytical pricing of European path-independent options under the CGMY model, which is a particular type of pure-jump Lévy process that agrees well with many observed properties of real market data by allowing the diffusions and jumps to have both finite and infinite activity and variation. It is shown that, under this model, the option price is governed by a fractional partial differential equation (FPDE) with both left-side and right-side spatial-fractional derivatives. In comparison to derivatives of integer order, fractional derivatives at a point involve not only properties of the function at that particular point but also information about the function in a certain subset of the entire domain of definition. This "globalness" of the fractional derivatives adds an additional degree of difficulty when either analytical methods or numerical solutions are attempted. Albeit difficult, we have still managed to derive an explicit closed-form analytical solution for European options under the CGMY model. Based on our solution, the asymptotic behaviors of the option price and the put-call parity under the CGMY model are further discussed. Practically, a reliable numerical evaluation technique for the current formula is proposed. With the numerical results, some analyses of the impacts of the four key parameters of the CGMY model on European option prices are also provided.
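For reference, the four parameters of the model enter through the standard CGMY Lévy density (Carr-Geman-Madan-Yor), on which the FPDE formulation above is built:

\[
\nu(x) \;=\;
\begin{cases}
\dfrac{C\,e^{-G|x|}}{|x|^{1+Y}}, & x<0,\\[1.5ex]
\dfrac{C\,e^{-Mx}}{x^{1+Y}}, & x>0,
\end{cases}
\qquad C>0,\; G,M\ge 0,\; Y<2 .
\]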
NASA Technical Reports Server (NTRS)
Bhadra, Dipasis; Morser, Frederick R.
2006-01-01
In this paper, the authors review the FAA's current program investments and lay out a preliminary analytical framework for undertaking projects that may address some of the noted deficiencies. Drawing upon well-developed theories from corporate finance, an analytical framework is offered that can be used for choosing the FAA's investments while taking into account risk, expected returns, and inherent dependencies across NAS programs. The framework can be expanded to take in multiple assets and realistic parameter values in drawing an efficient risk-return frontier for the entire FAA investment program.
Stationary light pulse in solids with long-lived spin coherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Xiaojun; Wang Haihua; Wang Lei
We present a detailed analysis of stationary light pulses (SLPs) for the case of inhomogeneous broadening in both optical and spin transitions, which is normally found in solid materials with long-lived spin coherence. By solving the Langevin equations of motion for the density matrix elements under the integral over the entire range of the inhomogeneous broadenings, the necessary conditions for creating an SLP in a solid are obtained. Then the decay and diffusion processes that the SLP undergoes are analyzed. The characteristics of such processes are studied based on the analytic solution of the SLP with a slowly varying envelope. The dependence of the SLP lifetime on the inhomogeneous broadenings of the spin and optical transitions, which can be regarded as the laser linewidth in the repump scheme, is also discussed.
Electron irradiation induced phase separation in a sodium borosilicate glass
NASA Astrophysics Data System (ADS)
Sun, K.; Wang, L. M.; Ewing, R. C.; Weber, W. J.
2004-06-01
Electron irradiation induced phase separation in a sodium borosilicate glass was studied in situ by analytical electron microscopy. Distinctly separate phases rich in boron and in silicon formed at electron doses higher than 4.0 × 10^11 Gy during irradiation. The separated phases remain amorphous even at a much higher dose (2.1 × 10^12 Gy). This indicates that most silicon atoms remain tetrahedrally coordinated in the glass during the entire irradiation period, except for some possible reduction to amorphous silicon. The particulate B-rich phase that formed at high dose was identified as amorphous boron that may contain some oxygen. Both ballistic and ionization processes may contribute to the phase separation.
Reproducible analyses of microbial food for advanced life support systems
NASA Technical Reports Server (NTRS)
Petersen, Gene R.
1988-01-01
The use of yeasts in controlled ecological life support systems (CELSS) for microbial food regeneration in space required the accurate and reproducible analysis of intracellular carbohydrate and protein levels. The reproducible analysis of glycogen was a key element in estimating the overall content of edibles in candidate yeast strains. Typical analytical methods for estimating glycogen in Saccharomyces were not found to be entirely applicable to other candidate strains. Rigorous cell lysis coupled with acid/base fractionation followed by specific enzymatic glycogen analyses was required to obtain accurate results in two strains of Candida. A profile of edible fractions of these strains was then determined. The suitability of yeasts as food sources in CELSS food production processes is discussed.
NASA Astrophysics Data System (ADS)
Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min
2017-09-01
The Halbach-type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source that has attracted intense interest in many practical applications. Here, using the complex variable integration method based on the Biot-Savart law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in the HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.
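The field construction referred to above starts from the standard Biot-Savart law, integrated over the magnet's equivalent (bound) current distribution:

\[
\mathbf{B}(\mathbf{r}) \;=\; \frac{\mu_0}{4\pi}\int
\frac{\mathbf{J}(\mathbf{r}')\times(\mathbf{r}-\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|^{3}}\,\mathrm{d}^{3}r' .
\]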
Generalized hydrodynamic transport in lattice-gas automata
NASA Technical Reports Server (NTRS)
Luo, Li-Shi; Chen, Hudong; Chen, Shiyi; Doolen, Gary D.; Lee, Yee-Chun
1991-01-01
The generalized hydrodynamics of two-dimensional lattice-gas automata is solved analytically in the linearized Boltzmann approximation. The dependence of the transport coefficients (kinematic viscosity, bulk viscosity, and sound speed) upon wave number k is obtained analytically. Anisotropy of these coefficients due to the lattice symmetry is studied for the entire range of wave number, k. Boundary effects due to a finite mean free path (Knudsen layer) are analyzed, and accurate comparisons are made with lattice-gas simulations.
Analyzing large scale genomic data on the cloud with Sparkhit
Huang, Liren; Krüger, Jan
2018-01-01
Motivation: The increasing amount of next-generation sequencing data poses a fundamental challenge on large scale genomic analytics. Existing tools use different distributed computational platforms to scale out bioinformatics workloads. However, the scalability of these tools is not efficient. Moreover, they have heavy run time overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results: Sparkhit integrates a variety of analytical methods. It is implemented in the Spark extended MapReduce model. It runs 92-157 times faster than MetaSpark on metagenomic fragment recruitment and 18-32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, which includes the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation: Sparkhit is freely available at: https://rhinempi.github.io/sparkhit/. Contact: asczyrba@cebitec.uni-bielefeld.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29253074
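As a sketch of the Spark MapReduce style referred to in the abstract, the following PySpark fragment counts k-mers in a file of reads; the path, k value, and pipeline are illustrative only and are not Sparkhit's actual code.

```python
# Illustrative Spark MapReduce sketch (not Sparkhit code): k-mer counting.
from pyspark import SparkContext

sc = SparkContext(appName="kmer-count-sketch")
K = 21

reads = sc.textFile("s3://bucket/reads.txt")   # hypothetical path, one read per line
kmers = (reads
         .flatMap(lambda r: [r[i:i + K] for i in range(len(r) - K + 1)])
         .map(lambda kmer: (kmer, 1))
         .reduceByKey(lambda a, b: a + b))

print(kmers.take(5))
sc.stop()
```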
Implementation of standardization in clinical practice: not always an easy task.
Panteghini, Mauro
2012-02-29
As soon as a new reference measurement system is adopted, clinical validation of correctly calibrated commercial methods should take place. Tracing back the calibration of routine assays to a reference system can actually modify the relation of analyte results to existing reference intervals and decision limits and this may invalidate some of the clinical decision-making criteria currently used. To maintain the accumulated clinical experience, the quantitative relationship to the previous calibration system should be established and, if necessary, the clinical decision-making criteria should be adjusted accordingly. The implementation of standardization should take place in a concerted action of laboratorians, manufacturers, external quality assessment scheme organizers and clinicians. Dedicated meetings with manufacturers should be organized to discuss the process of assay recalibration and studies should be performed to obtain convincing evidence that the standardization works, improving result comparability. Another important issue relates to the surveillance of the performance of standardized assays through the organization of appropriate analytical internal and external quality controls. Last but not least, uncertainty of measurement that fits for this purpose must be defined across the entire traceability chain, starting with the available reference materials, extending through the manufacturers and their processes for assignment of calibrator values and ultimately to the final result reported to clinicians by laboratories.
Large-Scale Image Analytics Using Deep Learning
NASA Astrophysics Data System (ADS)
Ganguly, S.; Nemani, R. R.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.
2014-12-01
High resolution land cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) land cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g. continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agriculture Imagery Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. These data come as image tiles (a total of a quarter million image scenes, each with ~60 million pixels) and have a total size of ~100 terabytes for a single acquisition. Features extracted from the entire dataset would amount to ~8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. In order to perform image analytics in such a granular system, it is mandatory to devise an intelligent archiving and query system for image retrieval, file structuring, metadata processing and filtering of all available image scenes. Using the Open NASA Earth Exchange (NEX) initiative, which is a partnership with Amazon Web Services (AWS), we have developed an end-to-end architecture for designing the database and the deep belief network (following the DistBelief computing model) to solve the grand challenge of scaling this process across the quarter million NAIP tiles that cover the entire Continental United States. The AWS core components that we use to solve this problem are DynamoDB along with S3 for database query and storage, the ElastiCache shared memory architecture for image segmentation, Elastic MapReduce (EMR) for image feature extraction, and memory-optimized Elastic Compute Cloud (EC2) instances for the learning algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westerhout, R.W.J.; Balk, R.H.P.; Meijer, R.
1997-08-01
A screen heater with a gas sweep was developed and applied to study the pyrolysis kinetics of low density polyethene (LDPE) and polypropene (PP) at temperatures ranging from 450 to 530 °C. The aim of this study was to examine the applicability of screen heaters to measure these kinetics. On-line measurement of the rate of volatiles formation using a hydrocarbon analyzer was applied to enable the determination of the conversion rate over the entire conversion range on the basis of a single experiment. Another important feature of the screen heater used in this study is the possibility to measure pyrolysis kinetics under nearly isothermal conditions. The kinetic constants for LDPE and PP pyrolysis were determined, using a first-order model to describe the conversion rate in the 70-90% conversion range and the random chain dissociation model for the entire conversion range. In addition to the experimental work, two single-particle models have been developed which both incorporate a mass and a (coupled) enthalpy balance, and which were used to assess the influence of internal and external heat transfer processes on the pyrolysis process. The first model assumes a variable density and constant volume during the pyrolysis process, whereas the second model assumes a constant density and a variable volume. An important feature of these models is that they can accommodate kinetic models for which no analytical representation of the pyrolysis kinetics is available.
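The first-order model mentioned for the 70-90% conversion range amounts to dX/dt = k(1 - X) with an Arrhenius rate constant. The sketch below uses placeholder parameters, not the measured constants from the study.

```python
# Hedged sketch of an isothermal first-order pyrolysis conversion model:
# dX/dt = k (1 - X), k = k0 * exp(-Ea / (R T)). Parameter values are placeholders.
import numpy as np

R = 8.314          # gas constant, J mol^-1 K^-1

def conversion(t, T, k0=1.0e13, Ea=2.2e5):
    """Conversion X(t) at constant temperature T (K) for time t (s)."""
    k = k0 * np.exp(-Ea / (R * T))
    return 1.0 - np.exp(-k * t)

t = np.linspace(0.0, 600.0, 7)                  # seconds
print(conversion(t, T=500.0 + 273.15))          # 500 C, near the studied range
```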
The Ophidia framework: toward cloud-based data analytics for climate change
NASA Astrophysics Data System (ADS)
Fiore, Sandro; D'Anca, Alessandro; Elia, Donatello; Mancini, Marco; Mariello, Andrea; Mirto, Maria; Palazzo, Cosimo; Aloisio, Giovanni
2015-04-01
The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in the climate change domain. It provides parallel (server-side) data analysis, an internal storage model and a hierarchical data organization to manage large amount of multidimensional scientific data. The Ophidia analytics platform provides several MPI-based parallel operators to manipulate large datasets (data cubes) and array-based primitives to perform data analysis on large arrays of scientific data. The most relevant data analytics use cases implemented in national and international projects target fire danger prevention (OFIDIA), interactions between climate change and biodiversity (EUBrazilCC), climate indicators and remote data analysis (CLIP-C), sea situational awareness (TESSA), large scale data analytics on CMIP5 data in NetCDF format, Climate and Forecast (CF) convention compliant (ExArch). Two use cases regarding the EU FP7 EUBrazil Cloud Connect and the INTERREG OFIDIA projects will be presented during the talk. In the former case (EUBrazilCC) the Ophidia framework is being extended to integrate scalable VM-based solutions for the management of large volumes of scientific data (both climate and satellite data) in a cloud-based environment to study how climate change affects biodiversity. In the latter one (OFIDIA) the data analytics framework is being exploited to provide operational support regarding processing chains devoted to fire danger prevention. To tackle the project challenges, data analytics workflows consisting of about 130 operators perform, among the others, parallel data analysis, metadata management, virtual file system tasks, maps generation, rolling of datasets, import/export of datasets in NetCDF format. Finally, the entire Ophidia software stack has been deployed at CMCC on 24-nodes (16-cores/node) of the Athena HPC cluster. Moreover, a cloud-based release tested with OpenNebula is also available and running in the private cloud infrastructure of the CMCC Supercomputing Centre.
Is Word-Problem Solving a Form of Text Comprehension?
Fuchs, Lynn S.; Fuchs, Douglas; Compton, Donald L.; Hamlett, Carol L.; Wang, Amber Y.
2015-01-01
This study’s hypotheses were that (a) word-problem (WP) solving is a form of text comprehension that involves language comprehension processes, working memory, and reasoning, but (b) WP solving differs from other forms of text comprehension by requiring WP-specific language comprehension as well as general language comprehension. At the start of the 2nd grade, children (n = 206; on average, 7 years, 6 months) were assessed on general language comprehension, working memory, nonlinguistic reasoning, processing speed (a control variable), and foundational skill (arithmetic for WPs; word reading for text comprehension). In spring, they were assessed on WP-specific language comprehension, WPs, and text comprehension. Path analytic mediation analysis indicated that effects of general language comprehension on text comprehension were entirely direct, whereas effects of general language comprehension on WPs were partially mediated by WP-specific language. By contrast, effects of working memory and reasoning operated in parallel ways for both outcomes. PMID:25866461
Metadata-driven Clinical Data Loading into i2b2 for Clinical and Translational Science Institutes.
Post, Andrew R; Pai, Akshatha K; Willard, Richard; May, Bradley J; West, Andrew C; Agravat, Sanjay; Granite, Stephen J; Winslow, Raimond L; Stephens, David S
2016-01-01
Clinical and Translational Science Award (CTSA) recipients have a need to create research data marts from their clinical data warehouses, through research data networks and the use of i2b2 and SHRINE technologies. These data marts may have different data requirements and representations, thus necessitating separate extract, transform and load (ETL) processes for populating each mart. Maintaining duplicative procedural logic for each ETL process is onerous. We have created an entirely metadata-driven ETL process that can be customized for different data marts through separate configurations, each stored in an extension of i2b2's ontology database schema. We extended our previously reported and open source Eureka! Clinical Analytics software with this capability. The same software has created i2b2 data marts for several projects, the largest being the nascent Accrual for Clinical Trials (ACT) network, for which it has loaded over 147 million facts about 1.2 million patients.
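A minimal sketch of the metadata-driven idea (not the Eureka! Clinical Analytics implementation): a single generic transform is parameterized by per-mart configuration, so supporting a new data mart means adding configuration records rather than new procedural code. All table and column names below are hypothetical.

```python
# Hedged sketch: one generic transform driven entirely by configuration.
MART_CONFIG = {                      # hypothetical configuration records
    "act_network": {
        "source_table": "encounters",
        "fact_columns": {"patient_num": "patient_id",
                         "concept_cd": "icd10_code",
                         "start_date": "admit_date"},
    },
}

def extract_facts(rows, mart):
    """Map source rows to i2b2-style observation facts using metadata only."""
    mapping = MART_CONFIG[mart]["fact_columns"]
    return [{fact_col: row[src_col] for fact_col, src_col in mapping.items()}
            for row in rows]

rows = [{"patient_id": 1, "icd10_code": "I10", "admit_date": "2016-01-01"}]
print(extract_facts(rows, "act_network"))
```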
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Baaklini, George Y.; Zagidulin, Dmitri; Rauser, Richard W.
2000-01-01
Capabilities and expertise related to the development of links between nondestructive evaluation (NDE) and finite element analysis (FEA) at Glenn Research Center (GRC) are demonstrated. Current tools to analyze data produced by computed tomography (CT) scans are exercised to help assess the damage state in high temperature structural composite materials. A utility translator was written to convert Velocity (an image processing software) STL data files to a suitable CAD-FEA file type. Finite element analyses are carried out with MARC, a commercial nonlinear finite element code, and the analytical results are discussed. Modeling was established by building an MSC/Patran (a pre- and post-processing finite element package) generated model and comparing it to a model generated by Velocity in conjunction with MSC/Patran Graphics. Modeling issues and results are discussed in this paper. The entire process that outlines the tie between the data extracted via NDE and the finite element modeling and analysis is fully described.
NASA Astrophysics Data System (ADS)
Chen, Yen-Luan; Chang, Chin-Chih; Sheu, Dwan-Fang
2016-04-01
This paper proposes generalised random and age replacement policies for a multi-state system composed of multi-state elements. The degradation of a multi-state element is assumed to follow a non-homogeneous continuous-time Markov process, which is a continuous-time, discrete-state process. A recursive approach is presented to efficiently compute the time-dependent state probability distribution of the multi-state element. The state and performance distribution of the entire multi-state system is evaluated via a combination of the stochastic process and the Lz-transform method. A concept of customer-centred reliability measure is developed based on the system performance and the customer demand. We develop random and age replacement policies for an aging multi-state system subject to imperfect maintenance in a failure (or unacceptable) state. For each policy, the optimum replacement schedule which minimises the mean cost rate is derived analytically and discussed numerically.
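For orientation, the mean cost rate that a classical age replacement policy minimises (which the policies above generalise to the multi-state, imperfect-maintenance setting) has the standard form

\[
C(T) \;=\; \frac{c_f\,F(T) + c_p\,\bar F(T)}{\displaystyle\int_0^{T}\bar F(t)\,\mathrm{d}t},
\qquad \bar F = 1-F,
\]

where F is the failure-time distribution, c_f the cost of replacement at failure, and c_p < c_f the cost of preventive replacement at age T; the optimum schedule T* minimises C(T).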
Exact and approximate solutions for transient squeezing flow
NASA Astrophysics Data System (ADS)
Lang, Ji; Santhanam, Sridhar; Wu, Qianhong
2017-10-01
In this paper, we report two novel theoretical approaches to examining a fast-developing flow in a thin fluid gap, which is widely observed in industrial applications and biological systems. The problem is characterized by a very small Reynolds number and Strouhal number, making the fluid convective acceleration negligible while its local acceleration is not. We have developed an exact solution for this problem which shows that the flow starts with an inviscid limit, when the viscous effect has no time to appear, and is followed by a subsequent developing flow, in which the viscous effect continues to penetrate into the entire fluid gap. An approximate solution is also developed using a boundary layer integral method. This solution precisely captures the general behavior of the transient fluid flow process and agrees very well with the exact solution. We also performed numerical simulation using Ansys CFX. Excellent agreement between the analytical and the numerical solutions is obtained, indicating the validity of the analytical approaches. The study presented herein fills a gap in the literature and will have a broad impact on industrial and biomedical applications.
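For comparison, the late-time, fully viscous regime that such a developing flow approaches is the classical quasi-steady squeeze film. Assuming, purely for illustration, a circular disk of radius R approaching a plane across a gap h(t) filled with fluid of viscosity \(\mu\) (a geometry not necessarily identical to the paper's), lubrication theory gives the Stefan force

\[
F \;=\; \frac{3\pi\mu R^{4}}{2\,h^{3}}\left(-\frac{\mathrm{d}h}{\mathrm{d}t}\right),
\]

which the transient solutions described above correct at early times, when the viscous layer has not yet filled the gap.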
Liquid-absorption preconcentrator sampling instrument
Zaromb, Solomon
1990-01-01
A system for detecting trace concentrations of an analyte in air includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container in which is disposed a wettable material extending substantially the entire length of the container. One end of the wettable material is continuously wetted with an analyte-sorbing liquid, which flows to the other end of the container. Sample air is flowed through the container in contact with the wetted material for trapping and preconcentrating the traces of analyte in the sorbing liquid, which is then collected at the other end of the container and discharged to the detector. The wetted material may be a wick comprising a bundle of fibers, one end of which is immersed in a reservoir of the analyte-sorbing liquid, or may be a liner disposed on the inner surface of the container, with the sorbing liquid being centrifugally dispersed onto the liner at one end thereof. The container is preferably vertically oriented so that gravity effects the liquid flow.
Liquid-absorption preconcentrator sampling instrument
Zaromb, S.
1990-12-11
A system is described for detecting trace concentrations of an analyte in air and includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container in which is disposed a wettable material extending substantially the entire length of the container. One end of the wettable material is continuously wetted with an analyte-sorbing liquid, which flows to the other end of the container. Sample air is flowed through the container in contact with the wetted material for trapping and preconcentrating the traces of analyte in the sorbing liquid, which is then collected at the other end of the container and discharged to the detector. The wetted material may be a wick comprising a bundle of fibers, one end of which is immersed in a reservoir of the analyte-sorbing liquid, or may be a liner disposed on the inner surface of the container, with the sorbing liquid being centrifugally dispersed onto the liner at one end thereof. The container is preferably vertically oriented so that gravity effects the liquid flow. 4 figs.
Sentiment analysis in twitter data using data analytic techniques for predictive modelling
NASA Astrophysics Data System (ADS)
Razia Sulthana, A.; Jaithunbi, A. K.; Sai Ramesh, L.
2018-04-01
Sentiment analysis refers to the natural language processing task of determining whether a piece of text contains subjective information and what kind of subjective information it expresses. The subjective information represents the attitude behind the text: positive, negative or neutral. Automatically understanding the opinions behind user-generated content is of great interest. We analyse a large volume of tweets as big data, classifying the polarity of words, sentences or entire documents. We use linear regression to model the relationship between a scalar dependent variable Y and one or more explanatory (independent) variables denoted X. We conduct a series of experiments to test the performance of the system.
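A hedged sketch of the modelling step described above: bag-of-words (TF-IDF) features for tweets as X and a linear regression to a polarity score as Y. The tiny corpus and labels are invented for illustration and are not the study's data.

```python
# Illustrative-only sketch: TF-IDF features + linear regression to polarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression

tweets = ["great service, very happy", "terrible delay, very angry", "it was okay"]
polarity = [1.0, -1.0, 0.0]           # hypothetical positive/negative/neutral scores

vec = TfidfVectorizer()
X = vec.fit_transform(tweets)
model = LinearRegression().fit(X, polarity)

print(model.predict(vec.transform(["happy with the service"])))
```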
Explicit equilibria in a kinetic model of gambling
NASA Astrophysics Data System (ADS)
Bassetti, F.; Toscani, G.
2010-06-01
We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of the wealths of two agents is put up for gambling and randomly shared between the agents. For this equation, the analytical form of the steady states is found for various realizations of the random fraction of the sum which is shared between the agents. Among others, the exponential distribution appears as the steady state in the case of a uniformly distributed random fraction, while a Gamma distribution appears for a random fraction which is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy-tailed distribution.
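The pure gambling rule is straightforward to simulate, and doing so reproduces the exponential steady state quoted for a uniformly distributed fraction. The sketch below is a minimal Monte Carlo illustration, not the authors' derivation; population size and sweep count are arbitrary.

```python
# Monte Carlo sketch of the pure gambling rule: two agents pool their wealth
# and split it by a uniformly distributed random fraction.
import numpy as np

rng = np.random.default_rng(0)
w = np.ones(100_000)                      # initial wealth, mean fixed at 1

for _ in range(200):                      # sweeps of random pairwise games
    idx = rng.permutation(w.size)
    i, j = idx[::2], idx[1::2]
    pot = w[i] + w[j]
    frac = rng.random(i.size)             # uniformly distributed shared fraction
    w[i], w[j] = frac * pot, (1.0 - frac) * pot

print(w.mean(), np.percentile(w, [50, 90, 99]))   # exponential-like distribution
```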
A methodology to select a wire insulation for use in habitable spacecraft.
Paulos, T; Apostolakis, G
1998-08-01
This paper investigates electrical overheating events aboard a habitable spacecraft. The wire insulation involved in these failures plays a major role in the entire event scenario from threat development to detection and damage assessment. Ideally, if models of wire overheating events in microgravity existed, the various wire insulations under consideration could be quantitatively compared. However, these models do not exist. In this paper, a methodology is developed that can be used to select a wire insulation that is best suited for use in a habitable spacecraft. The results of this study show that, based upon the Analytic Hierarchy Process and simplifying assumptions, the criteria selected, and data used in the analysis, Tefzel is better than Teflon for use in a habitable spacecraft.
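For readers unfamiliar with the Analytic Hierarchy Process step used in the ranking, criterion weights are conventionally obtained from the principal eigenvector of a reciprocal pairwise-comparison matrix. The 3x3 matrix below is a made-up example, not the paper's judgements.

```python
# Hedged AHP sketch: priority weights from a pairwise-comparison matrix.
import numpy as np

A = np.array([[1.0,     3.0, 5.0],
              [1.0/3.0, 1.0, 2.0],
              [1.0/5.0, 0.5, 1.0]])        # reciprocal pairwise comparisons (illustrative)

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                               # normalized priority weights
CI = (vals.real[k] - len(A)) / (len(A) - 1)   # consistency index
print(w, CI)
```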
Das, Moupriya
2014-12-01
The states of an overdamped Brownian particle confined in a two-dimensional bilobal enclosure are considered to correspond to two binary values: 0 (left lobe) and 1 (right lobe). An ensemble of such particles represents bits of entropic information. An external bias is applied to the particles, equally distributed in the two lobes, to drive them to a particular lobe, erasing one kind of bit of information. It has been shown that the average work done for the entropic memory erasure process approaches the Landauer bound for a very slow erasure cycle. Furthermore, the detailed Jarzynski equality holds to a very good extent for the erasure protocol, so that the Landauer bound may be calculated, irrespective of the time period of the erasure cycle, in terms of the effective free-energy change for the process. The detailed Jarzynski equality applied to two subprocesses, namely the transition from entropic memory state 0 to state 1 and the transition from entropic memory state 1 to state 1, connects the work done on the system to the probability of occupying the two states under a time-reversed process. In the entire treatment, the work appears as a boundary effect of the physical confinement of the system, which has no conventional potential energy barrier. Finally, an analytical derivation of the detailed and classical Jarzynski equality for Brownian motion in a confined space with varying width has been proposed. Our analytical scheme supports the numerical simulations presented in this paper.
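For reference, the two standard relations invoked above are the Landauer bound for erasing one bit and the Jarzynski equality linking work to free-energy change:

\[
\langle W \rangle \;\ge\; k_{B} T \ln 2,
\qquad
\bigl\langle e^{-\beta W} \bigr\rangle \;=\; e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_{B} T}.
\]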
A Theoretical and Experimental Study for a Developing Flow in a Thin Fluid Gap
NASA Astrophysics Data System (ADS)
Wu, Qianhong; Lang, Ji; Jen, Kei-Peng; Nathan, Rungun; Vucbmss Team
2016-11-01
In this paper, we report a novel theoretical and experimental approach to examining a fast developing flow in a thin fluid gap. Although the phenomenon is widely observed in industrial applications and biological systems, there is a lack of analytical approaches that capture the instantaneous fluid response to a sudden impact. An experimental setup was developed that contains a piston instrumented with a laser displacement sensor and a pressure transducer. A sudden impact was imposed on the piston, creating a fast compaction of the thin fluid gap underneath. The motion of the piston was captured by the laser displacement sensor, and the fluid pressure build-up and relaxation were recorded by the pressure transducer. For this dynamic process, a novel analytical approach was developed. It starts with the inviscid limit, when the viscous fluid effect has no time to appear. This short process is followed by a developing flow, in which the inviscid core flow region decreases and the viscous wall region increases until the entire fluid gap is filled with viscous fluid flow. A boundary layer integral method is used during this process. Lastly, the flow is completely viscous dominated, featured by a typical squeeze flow in a thin gap. Excellent agreement between the theory and the experiment was achieved. The study presented herein, filling a gap in the literature, will have broad impact in industrial and biomedical applications. This research was supported by the National Science Foundation under Award #1511096.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, J; Wang, J; Peng, J
Purpose: To implement an entire-workflow quality assurance (QA) process in the radiotherapy department and to reduce radiotherapy error rates through entire-workflow management in a developing country. Methods: The entire-workflow QA process starts at patient registration and ends with the last treatment fraction, covering all steps of the radiotherapy process. The chart-check error rate is used to evaluate the entire-workflow QA process. Two to three qualified senior medical physicists checked the documents before the first treatment fraction of every patient. Random checks of the treatment history during treatment were also performed. Treatment data from a total of around 6000 patients before and after implementing the entire-workflow QA process were compared from May 2014 to December 2015. Results: A systematic checklist was established. It mainly includes patient registration, treatment plan QA, information export to the OIS (Oncology Information System), treatment QA documents, and QA of the treatment history. The error rate derived from the chart check decreased from 1.7% to 0.9% after introducing the entire-workflow QA process. All errors found before the first treatment fraction were corrected as soon as the oncologist re-confirmed them, and reinforced staff training followed to prevent those errors. Conclusion: The entire-workflow QA process improved the safety and quality of radiotherapy in our department, and we consider that our QA experience can be applicable to heavily loaded radiotherapy departments in developing countries.
Analytic solution of magnetic induction distribution of ideal hollow spherical field sources
NASA Astrophysics Data System (ADS)
Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min
2017-12-01
Halbach-type hollow spherical permanent magnet arrays (HSPMA) are volume-compact, energy-efficient field sources capable of producing multi-tesla fields in the cavity of the array, and have attracted intense interest in many practical applications. Here, we present analytical solutions of the magnetic induction of the ideal HSPMA in the entire space: outside the array, within the cavity, and in the interior of the magnet. We obtain the solutions using the concept of magnetic charge to solve Poisson's and Laplace's equations for the HSPMA. Using these analytical field expressions inside the material, a scalar demagnetization function is defined to approximately indicate the regions of magnetization reversal, partial demagnetization, and inverse magnetic saturation. The analytical field solution provides deeper insight into the nature of the HSPMA and offers guidance in designing optimized arrays.
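For orientation, the textbook result for an ideal Halbach sphere (a shell of remanence B_r with the ideal Halbach magnetization pattern, inner radius R_i and outer radius R_o) is often quoted as a uniform cavity field

\[
B_{\mathrm{cavity}} \;=\; \tfrac{4}{3}\, B_r \,\ln\!\left(\frac{R_o}{R_i}\right),
\]

which illustrates why multi-tesla cavity fields are attainable with realistic R_o/R_i ratios; the full spatial field distributions derived in the paper reduce to this value inside the cavity.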
ERIC Educational Resources Information Center
Masuda, Takahiko; Nisbett, Richard E.
2006-01-01
Research on perception and cognition suggests that whereas East Asians view the world holistically, attending to the entire field and relations among objects, Westerners view the world analytically, focusing on the attributes of salient objects. These propositions were examined in the change-blindness paradigm. Research in that paradigm finds…
ANALYTICAL METHODS FOR FUEL OXYGENATES
MTBE (and potentially any other oxygenate) may be present at any petroleum UST site, whether the release is new or old, virtually anywhere in the United States. Consequently, it is prudent to analyze samples for the entire suite of oxygenates as identified in this protocol (i.e....
Reck, Kasper; Thomsen, Erik V; Hansen, Ole
2011-01-31
The scalar wave equation, or Helmholtz equation, describes within a certain approximation the electromagnetic field distribution in a given system. In this paper we show how to solve the Helmholtz equation in complex geometries using conformal mapping and the homotopy perturbation method. The solution of the mapped Helmholtz equation is found by solving an infinite series of Poisson equations using two dimensional Fourier series. The solution is entirely based on analytical expressions and is not mesh dependent. The analytical results are compared to a numerical (finite element method) solution.
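The central computational step described above, solving a Poisson equation by two-dimensional Fourier series, can be illustrated by a minimal sketch (assuming a periodic square domain and NumPy; this illustrates only the Fourier-series solve, not the conformal mapping or the homotopy perturbation series):

```python
import numpy as np

def solve_poisson_fourier(f, L=1.0):
    """Solve  -laplacian(u) = f  on a periodic L x L domain by Fourier series.

    f : 2-D array of source samples on a uniform grid (zero-mean assumed,
        since the periodic problem is only defined up to a constant).
    """
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                  # avoid division by zero (mean mode)

    f_hat = np.fft.fft2(f)
    u_hat = f_hat / k2
    u_hat[0, 0] = 0.0                               # fix the arbitrary constant
    return np.real(np.fft.ifft2(u_hat))

# quick self-check with a manufactured solution u = sin(2*pi*x) * cos(4*pi*y)
n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)
f = (4 * np.pi**2 + 16 * np.pi**2) * u_exact        # -laplacian of u_exact
print(np.max(np.abs(solve_poisson_fourier(f) - u_exact)))  # close to machine precision
```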
Analytical study of the liquid phase transient behavior of a high temperature heat pipe. M.S. Thesis
NASA Technical Reports Server (NTRS)
Roche, Gregory Lawrence
1988-01-01
The transient operation of the liquid phase of a high temperature heat pipe is studied. The study was conducted in support of advanced heat pipe applications that require reliable heat transport at high temperatures over significant distances under a broad spectrum of operating conditions. The heat pipe configuration studied consists of a sealed cylindrical enclosure containing a capillary wick structure and sodium working fluid. The wick is an annular flow channel configuration formed between the enclosure interior wall and a concentric cylindrical tube of fine pore screen. The study approach is analytical, through solution of the governing equations. The energy equation is solved over the pipe wall and liquid region using the finite-difference Peaceman-Rachford alternating direction implicit numerical method. The continuity and momentum equations are solved over the liquid region by the integral method. The energy equation and liquid dynamics equation are tightly coupled due to the phase change process at the liquid-vapor interface. A kinetic theory model is used to define the phase change process in terms of the temperature jump between the liquid-vapor surface and the bulk vapor. Extensive auxiliary relations, including sodium properties as functions of temperature, are used to close the analytical system. The solution procedure is implemented in a FORTRAN algorithm with some optimization features to take advantage of the IBM System/370 Model 3090 vectorization facility. The code was intended for coupling to a vapor phase algorithm so that the entire heat pipe problem could be solved. As a test of code capabilities, the vapor phase was approximated in a simple manner.
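The Peaceman-Rachford alternating-direction implicit (ADI) scheme mentioned above can be sketched for a generic two-dimensional diffusion problem (a minimal Python illustration with constant diffusivity and zero Dirichlet boundaries; the thesis couples this to liquid dynamics and a phase-change interface condition, which are not reproduced here):

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_step(u, alpha, dt, dx, dy):
    """One Peaceman-Rachford ADI step for u_t = alpha*(u_xx + u_yy),
    zero Dirichlet boundaries; u includes the boundary nodes."""
    nx, ny = u.shape
    rx = alpha * dt / (2 * dx**2)
    ry = alpha * dt / (2 * dy**2)

    def banded(r, n):
        # tridiagonal (I - r*D2) for n interior nodes, in solve_banded storage
        ab = np.zeros((3, n))
        ab[0, 1:] = -r          # superdiagonal
        ab[1, :] = 1 + 2 * r    # main diagonal
        ab[2, :-1] = -r         # subdiagonal
        return ab

    ab_x = banded(rx, nx - 2)
    ab_y = banded(ry, ny - 2)

    # Half step 1: implicit in x, explicit in y.
    u_half = u.copy()
    for j in range(1, ny - 1):
        rhs = u[1:-1, j] + ry * (u[1:-1, j + 1] - 2 * u[1:-1, j] + u[1:-1, j - 1])
        u_half[1:-1, j] = solve_banded((1, 1), ab_x, rhs)

    # Half step 2: implicit in y, explicit in x.
    u_new = u_half.copy()
    for i in range(1, nx - 1):
        rhs = u_half[i, 1:-1] + rx * (u_half[i + 1, 1:-1] - 2 * u_half[i, 1:-1] + u_half[i - 1, 1:-1])
        u_new[i, 1:-1] = solve_banded((1, 1), ab_y, rhs)

    return u_new
```

Each half step is implicit in only one coordinate direction, so the scheme reduces to a sequence of tridiagonal solves, which is what makes ADI attractive for problems of this kind.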
de Paula, Joelma Abadia Marciano; Brito, Lucas Ferreira; Caetano, Karen Lorena Ferreira Neves; de Morais Rodrigues, Mariana Cristina; Borges, Leonardo Luiz; da Conceição, Edemilson Cardoso
2016-01-01
Azadirachta indica A. Juss., also known as neem, is a Meliaceae family tree from India. It is globally known for the insecticidal properties of its limonoid tetranortriterpenoid derivatives, such as azadirachtin. This work aimed to optimize the azadirachtin ultrasound-assisted extraction (UAE) and validate the HPLC-PDA analytical method for the measurement of this marker in neem dried fruit extracts. Box-Behnken design and response surface methodology (RSM) were used to investigate the effect of process variables on the UAE. Three independent variables, including ethanol concentration (%, w/w), temperature (°C), and material-to-solvent ratio (g mL⁻¹), were studied. The azadirachtin content (µg mL⁻¹), i.e., the dependent variable, was quantified by the HPLC-PDA analytical method. Isocratic reversed-phase chromatography was performed using acetonitrile/water (40:60), a flow of 1.0 mL min⁻¹, detection at 214 nm, and a C18 column (250 × 4.6 mm, 5 µm). The primary validation parameters were determined according to ICH guidelines and Brazilian legislation. The results demonstrated that the optimal UAE condition was obtained with an ethanol concentration range of 75-80% (w/w), a temperature of 30°C, and a material-to-solvent ratio of 0.55 g mL⁻¹. The HPLC-PDA analytical method proved to be simple, selective, linear, precise, accurate and robust. The experimental values of azadirachtin content under optimal UAE conditions were in good agreement with the RSM-predicted values and were superior to the azadirachtin content of the percolated extract. Such findings suggest that UAE is a more efficient extractive process in addition to being simple, fast, and inexpensive. Copyright © 2015 Elsevier B.V. All rights reserved.
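The Box-Behnken / response-surface step described above amounts to fitting a second-order polynomial in the coded factors and locating its optimum. A minimal sketch (with made-up illustrative data, not the study's measurements) might look like:

```python
import numpy as np

# Coded factor levels (-1, 0, +1) for ethanol, temperature, ratio: a
# hypothetical Box-Behnken-style design, not the published one.
X = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
], dtype=float)
y = np.array([12.1, 14.0, 12.8, 15.2, 11.5, 13.9, 13.0, 15.5,
              12.3, 13.1, 13.6, 14.4, 15.8, 15.6, 15.9])  # illustrative responses

def quad_terms(x):
    a, b, c = x
    return [1, a, b, c, a*b, a*c, b*c, a*a, b*b, c*c]

A = np.array([quad_terms(row) for row in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit of the quadratic model

# Predict the response on a coarse grid and report the best coded setting.
grid = np.linspace(-1, 1, 21)
best = max(((a, b, c) for a in grid for b in grid for c in grid),
           key=lambda x: np.dot(quad_terms(x), coef))
print("coefficients:", np.round(coef, 3))
print("best coded setting (ethanol, temp, ratio):", best)
```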
A conceptual network model of the air transportation system. The basic level 1 model.
DOT National Transportation Integrated Search
1971-04-01
A basic conceptual model of the entire Air Transportation System is being developed to serve as an analytical tool for studying the interactions among the system elements. The model is being designed to function in an interactive computer graphics en...
Fulga, Netta
2013-06-01
Quality management and accreditation in the analytical laboratory setting are developing rapidly and becoming the standard worldwide. Quality management refers to all the activities used by organizations to ensure product or service consistency. Accreditation is a formal recognition by an authoritative regulatory body that a laboratory is competent to perform examinations and report results. The Motherisk Drug Testing Laboratory is licensed to operate at the Hospital for Sick Children in Toronto, Ontario. The laboratory performs toxicology tests of hair and meconium samples for research and clinical purposes. Most of the samples are involved in chain-of-custody cases. Establishing a quality management system and achieving accreditation have been mandatory by legislation for all Ontario clinical laboratories since 2003. The Ontario Laboratory Accreditation program is based on International Organization for Standardization 15189-Medical laboratories-Particular requirements for quality and competence, an international standard that has been adopted as a national standard in Canada. The implementation of a quality management system involves management commitment, planning and staff education, documentation of the system, validation of processes, and assessment against the requirements. The maintenance of a quality management system requires control and monitoring of the entire laboratory path of workflow. The process of transformation of a research/clinical laboratory into an accredited laboratory, and the benefits of maintaining an effective quality management system, are presented in this article.
Chemical composition of Hanford Tank SY-102
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birnbaum, E.; Agnew, S.; Jarvinen, G.
1993-12-01
The US Department of Energy established the Tank Waste Remediation System (TWRS) to safely manage and dispose of the radioactive waste, both current and future, stored in double-shell and single-shell tanks at the Hanford sites. One major program element in TWRS is pretreatment which was established to process the waste prior to disposal using the Hanford Waste Vitrification Plant. In support of this program, Los Alamos National Laboratory has developed a conceptual process flow sheet which will remediate the entire contents of a selected double-shelled underground waste tank, including supernatant and sludge, into forms that allow storage and final disposal in a safe, cost-effective and environmentally sound manner. The specific tank selected for remediation is 241-SY-102 located in the 200 West Area. As part of the flow sheet development effort, the composition of the tank was defined and documented. This database was built by examining the history of liquid waste transfers to the tank and by performing careful analysis of all of the analytical data that have been gathered during the tank's lifetime. In order to more completely understand the variances in analytical results, material and charge balances were done to help define the chemistry of the various components in the tank. This methodology of defining the tank composition and the final results are documented in this report.
Hunter, Adam; Dayalan, Saravanan; De Souza, David; Power, Brad; Lorrimar, Rodney; Szabo, Tamas; Nguyen, Thu; O'Callaghan, Sean; Hack, Jeremy; Pyke, James; Nahid, Amsha; Barrero, Roberto; Roessner, Ute; Likic, Vladimir; Tull, Dedreia; Bacic, Antony; McConville, Malcolm; Bellgard, Matthew
2017-01-01
An increasing number of research laboratories and core analytical facilities around the world are developing high throughput metabolomic analytical and data processing pipelines that are capable of handling hundreds to thousands of individual samples per year, often over multiple projects, collaborations and sample types. At present, there are no Laboratory Information Management Systems (LIMS) that are specifically tailored for metabolomics laboratories that are capable of tracking samples and associated metadata from the beginning to the end of an experiment, including data processing and archiving, and which are also suitable for use in large institutional core facilities or multi-laboratory consortia as well as single laboratory environments. Here we present MASTR-MS, a downloadable and installable LIMS solution that can be deployed either within a single laboratory or used to link workflows across a multisite network. It comprises a Node Management System that can be used to link and manage projects across one or multiple collaborating laboratories; a User Management System which defines different user groups and privileges of users; a Quote Management System where client quotes are managed; a Project Management System in which metadata is stored and all aspects of project management, including experimental setup, sample tracking and instrument analysis, are defined; and a Data Management System that allows the automatic capture and storage of raw and processed data from the analytical instruments to the LIMS. MASTR-MS is a comprehensive LIMS solution specifically designed for metabolomics. It captures the entire lifecycle of a sample starting from project and experiment design to sample analysis, data capture and storage. It acts as an electronic notebook, facilitating project management within a single laboratory or a multi-node collaborative environment. This software is being developed in close consultation with members of the metabolomics research community. It is freely available under the GNU GPL v3 licence and can be accessed from https://muccg.github.io/mastr-ms/.
NASA Astrophysics Data System (ADS)
Vujacic, Dusko; Barovic, Goran; Mijanovic, Dragica; Spalevic, Velibor; Curovic, Milic; Tanaskovic, Vjekoslav; Djurovic, Nevenka
2016-04-01
The objective of this research was to study soil erosion processes in one of the Northern Montenegrin watersheds, the Krivacki Potok Watershed of the Polimlje River Basin, using modeling techniques: the River Basins computer-graphic model, based on the analytical Erosion Potential Method (EPM) of Gavrilovic, for calculation of runoff and soil loss. Our findings indicate a low potential soil erosion risk, with an annual sediment yield of 554 m³ yr⁻¹ and an area-specific sediment yield of 180 m³ km⁻² yr⁻¹. The calculation outcomes were validated for the entire 57 river basins of Polimlje through measurements of lake sediment deposition at the Potpec hydropower plant dam. According to our analysis, the Krivacki Potok drainage basin has a relatively low sediment discharge; according to erosion type, it exhibits mixed erosion. The Z coefficient was calculated at 0.297, which indicates that the river basin belongs to the 4th destruction category (of five). The calculated peak discharge from the river basin was 73 m³ s⁻¹ for a 100-year recurrence interval, and there is a possibility for large flood waves to appear in the studied river basin. Using adequate computer-graphic and analytical modeling tools, we improved the knowledge of the soil erosion processes of the river basins of this part of Montenegro. The computer-graphic River Basins model of Spalevic, which is based on the EPM analytical method of Gavrilovic, is highly recommended for soil erosion modelling in other river basins of Southeastern Europe, because of its reliable detection and appropriate classification of the areas affected by soil loss, while taking into consideration interactions between the various environmental elements such as physical-geographical features, climate, and geological and pedological characteristics, including the analysis of land use, all calculated at the catchment scale.
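For context, the annual sediment production in Gavrilovic's Erosion Potential Method is commonly written as (the standard published form of the EPM, quoted for orientation rather than taken from the abstract):

\[
W_{yr} \;=\; T \cdot H_{yr} \cdot \pi \cdot \sqrt{Z^{3}} \cdot F,
\qquad
T \;=\; \sqrt{\frac{t_{0}}{10} + 0.1},
\]

where H_yr is the mean annual precipitation (mm), t_0 the mean annual temperature (°C), F the catchment area (km²), and Z the erosion coefficient (here 0.297); the sediment yield actually delivered to the outlet is obtained by multiplying W_yr by a retention (sediment delivery) coefficient.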
NM-Scale Anatomy of an Entire Stardust Carrot Track
NASA Technical Reports Server (NTRS)
Nakamura-Messenger, K.; Keller, L. P.; Clemett, S. J.; Messenger, S.
2009-01-01
Comet Wild-2 samples collected by NASA's Stardust mission are extremely complex, heterogeneous, and have experienced wide ranges of alteration during the capture process. There are two major types of track morphologies, "carrot" and "bulbous," that reflect different structural/compositional properties of the impactors. Carrot-type tracks are typically produced by compact or single mineral grains which survive essentially intact as a single large terminal particle. Bulbous tracks are likely produced by fine-grained or organic-rich impactors [1]. Owing to their challenging nature and the especially high value of Stardust samples, we have invested considerable effort in developing both sample preparation and analytical techniques tailored for Stardust sample analyses. Our report focuses on our systematic disassembly and coordinated analysis of Stardust carrot track #112 from the mm to the nm scale.
Unfolding the laws of star formation: the density distribution of molecular clouds.
Kainulainen, Jouni; Federrath, Christoph; Henning, Thomas
2014-04-11
The formation of stars shapes the structure and evolution of entire galaxies. The rate and efficiency of this process are affected substantially by the density structure of the individual molecular clouds in which stars form. The most fundamental measure of this structure is the probability density function of volume densities (ρ-PDF), which determines the star formation rates predicted with analytical models. This function has remained unconstrained by observations. We have developed an approach to quantify ρ-PDFs and establish their relation to star formation. The ρ-PDFs instigate a density threshold of star formation and allow us to quantify the star formation efficiency above it. The ρ-PDFs provide new constraints for star formation theories and correctly predict several key properties of the star-forming interstellar medium.
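In analytical star-formation models, the volume-density PDF referred to above is usually taken to be lognormal in the logarithmic density contrast s = ln(ρ/ρ₀), with a possible power-law tail at high densities once self-gravity takes over; the lognormal part is

\[
p(s)\,ds \;=\; \frac{1}{\sqrt{2\pi\sigma_s^{2}}}\,
\exp\!\left[-\frac{(s-s_0)^{2}}{2\sigma_s^{2}}\right] ds,
\qquad s_0 = -\tfrac{1}{2}\sigma_s^{2},
\]

where the width σ_s grows with the turbulent Mach number. This is the standard form assumed in the analytical models mentioned in the abstract, not a result of the paper itself.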
A Java-Enabled Interactive Graphical Gas Turbine Propulsion System Simulator
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1997-01-01
This paper describes a gas turbine simulation system which utilizes the newly developed Java language environment software system. The system provides an interactive graphical environment which allows the quick and efficient construction and analysis of arbitrary gas turbine propulsion systems. The simulation system couples a graphical user interface, developed using the Java Abstract Window Toolkit, and a transient, space-averaged, aero-thermodynamic gas turbine analysis method, both entirely coded in the Java language. The combined package provides analytical, graphical and data management tools which allow the user to construct and control engine simulations by manipulating graphical objects on the computer display screen. Distributed simulations, including parallel processing and distributed database access across the Internet and World-Wide Web (WWW), are made possible through services provided by the Java environment.
NASA Astrophysics Data System (ADS)
Xing, Zhang-Fan; Greenberg, J. M.
1992-11-01
Results of an investigation of the analyticity of the complex extinction efficiency Q̃_ext in different parameter domains are presented. In the size parameter domain, x = ωa/c, numerical Hilbert transforms are used to study the analyticity properties of Q̃_ext for homogeneous spheres. Q̃_ext is found to be analytic in the entire lower complex x̃-plane when the refractive index, m, is fixed as a real constant (pure scattering) or infinity (perfect conductor); poles, however, appear in the left side of the lower complex x̃-plane as m becomes complex. The computation of the mean extinction produced by an extended size distribution of particles may be conveniently and accurately approximated using only a few values of the complex extinction evaluated in the complex plane.
Nanometer-scale anatomy of entire Stardust tracks
NASA Astrophysics Data System (ADS)
Nakamura-Messenger, Keiko; Keller, Lindsay P.; Clemett, Simon J.; Messenger, Scott; Ito, Motoo
2011-07-01
We have developed new sample preparation and analytical techniques tailored for the analysis of entire Wild 2 aerogel tracks, both "carrot" and "bulbous" types. We have successfully ultramicrotomed an entire track along its axis while preserving its original shape. This innovation allowed us to examine the distribution of fragments along the entire track, from the entrance hole all the way to the terminal particle. The crystalline silicates we measured have Mg-rich compositions and O isotopic compositions in the range of meteoritic materials, implying that they originated in the inner solar system. The terminal particle of the carrot track is a ¹⁶O-rich forsteritic grain that may have formed in an environment similar to that of Ca-, Al-rich inclusions and amoeboid olivine aggregates in primitive carbonaceous chondrites. The track also contains submicron-sized diamond grains likely formed in the solar system. Complex aromatic hydrocarbons are distributed along the aerogel tracks and in the terminal particles. These organics are likely cometary but affected by shock heating.
Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian
2016-10-27
To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic for practical applications due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and its simplified model are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated with different sensor information credibility functions. Next, the models are extended and algorithms yielding analytic results are developed. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and the analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. It can also be concluded that the optimal sensor spacing is independent of end restrictions but dependent on the values of model parameters that represent the physical conditions of sensors and roads.
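As a rough illustration of the kind of trade-off such a maximum-benefit formulation captures (the credibility function, parameter values, and objective below are invented for illustration and are not taken from the paper), one can compare coverage benefit against sensor cost for different sensor counts on a single segment:

```python
import numpy as np

def segment_benefit(n_sensors, length_km, d0_km, unit_cost):
    """Net benefit of n equally spaced sensors on one segment, assuming an
    exponentially decaying information-credibility function c(d) = exp(-d/d0)
    around each sensor (an illustrative assumption, not the paper's model)."""
    half_span = length_km / (2.0 * n_sensors)        # each sensor covers +/- half_span
    # integral of exp(-d/d0) over both sides of every sensor
    coverage = 2.0 * n_sensors * d0_km * (1.0 - np.exp(-half_span / d0_km))
    return coverage - unit_cost * n_sensors

length, d0, cost = 20.0, 1.5, 0.8                    # km, km, benefit-equivalent cost
best_n = max(range(1, 31), key=lambda n: segment_benefit(n, length, d0, cost))
print("optimal sensor count:", best_n,
      "-> spacing:", round(length / best_n, 2), "km")
```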
Hooshmand, Elaheh; Tourani, Sogand; Ravaghi, Hamid; Vafaee Najar, Ali; Meraji, Marziye; Ebrahimipour, Hossein
2015-04-08
The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professionals toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients' non-medical needs, complaints and patients' participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients' non-medical needs, patients' participation in the treatment process and research and development. The fundamental requirements of CG implementation included having an effective policy at the national level, avoiding perfectionism, using the expertise and potential of the entire country, and coordinating this model with other models of quality improvement such as accreditation and patient safety. © 2015 by Kerman University of Medical Sciences.
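The Analytical Hierarchy Process step described above derives item weights from pairwise comparison judgments, typically via the principal eigenvector of the comparison matrix together with a consistency check. A minimal sketch with a made-up 3x3 comparison matrix (not the study's data) is:

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty 1-9 scale) for three criteria,
# e.g. training, performance evaluation, risk management; illustrative only.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                     # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                    # normalized priority weights

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)                 # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]             # Saaty's random index
cr = ci / ri                                    # consistency ratio (< 0.1 is acceptable)

print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```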
Chemoviscosity modeling for thermosetting resins - I
NASA Technical Reports Server (NTRS)
Hou, T. H.
1984-01-01
A new analytical model for chemoviscosity variation during cure of thermosetting resins was developed. This model is derived by modifying the widely used WLF (Williams-Landel-Ferry) theory in polymer rheology. The major assumptions are that the rate of reaction is diffusion controlled and is inversely proportional to the viscosity of the medium over the entire cure cycle. The resultant first-order nonlinear differential equation is solved numerically, and the model predictions compare favorably with experimental data for EPON 828/Agent U obtained on a Rheometrics System 4 Rheometer. The model describes chemoviscosity over a range of six orders of magnitude under isothermal curing conditions. The extremely non-linear chemoviscosity profile for a dynamic heating cure cycle is predicted as well. The model is also shown to predict changes of glass transition temperature for the thermosetting resin during cure. The physical significance of this prediction is unclear at the present time, however, and further research is required. From the chemoviscosity simulation point of view, the technique of establishing an analytical model as described here is easily applied to any thermosetting resin. The model thus obtained can be used in real-time process control for fabricating composite materials.
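For reference, the unmodified WLF relation that the model builds on expresses the viscosity shift relative to the glass transition temperature T_g as (standard form, quoted for orientation; the paper's modification couples T_g to the degree of cure):

\[
\log_{10}\!\frac{\eta(T)}{\eta(T_g)} \;=\; \frac{-C_1\,(T - T_g)}{C_2 + (T - T_g)},
\]

with C_1 and C_2 material constants, often near the "universal" values C_1 ≈ 17.4 and C_2 ≈ 51.6 K.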
The "hospital central laboratory": automation, integration and clinical usefulness.
Zaninotto, Martina; Plebani, Mario
2010-07-01
Recent technological developments in laboratory medicine have led to a major challenge: maintaining a close connection between the search for efficiency through automation and consolidation and the assurance of effectiveness. The adoption of systems that automate most of the manual tasks characterizing routine activities has significantly improved the quality of laboratory performance; total laboratory automation is the paradigm of the idea that "human-less" robotic laboratories may allow for better operation and ensure fewer human errors. Furthermore, even if ongoing technological developments have considerably improved the productivity of clinical laboratories and reduced the turnaround time of the entire process, the value of qualified personnel remains a significant issue. Recent evidence confirms that automation allows clinical laboratories to improve analytical performance only if trained staff operate in accordance with well-defined standard operating procedures, thus assuring continuous monitoring of analytical quality. In addition, laboratory automation may improve the appropriateness of test requests through the use of algorithms and reflex testing. This should allow the adoption of clinical and biochemical guidelines. In conclusion, in laboratory medicine, technology represents a tool for improving clinical effectiveness and patient outcomes, but it has to be managed by qualified laboratory professionals.
NASA Astrophysics Data System (ADS)
Bogdanov, Valery L.; Boyce-Jacino, Michael
1999-05-01
Confined arrays of biochemical probes deposited on a solid support surface (analytical microarray or 'chip') provide an opportunity to analyze multiple reactions simultaneously. Microarrays are increasingly used in genetics, medicine and environmental scanning as research and analytical instruments. The power of microarray technology comes from its parallelism, which grows with array miniaturization, minimization of reagent volume per reaction site and reaction multiplexing. An optical detector of microarray signals should combine high sensitivity with spatial and spectral resolution. Additionally, low cost and a high processing rate are needed to transfer microarray technology into biomedical practice. We designed an imager that provides confocal and complete-spectrum detection of an entire fluorescently labeled microarray in parallel. The imager uses a microlens array, a non-slit spectral decomposer, and a highly sensitive detector (cooled CCD). Two imaging channels provide simultaneous detection of the localization, integrated and spectral intensities for each reaction site in the microarray. A dimensional matching between the microarray and the imager's optics eliminates all moving parts in the instrumentation, enabling highly informative, fast and low-cost microarray detection. We report the theory of confocal hyperspectral imaging with a microlens array and experimental data for implementation of the developed imager to detect a fluorescently labeled microarray with a density of approximately 10³ sites per cm².
Medical and Healthcare Curriculum Exploratory Analysis.
Komenda, Martin; Karolyi, Matěj; Pokorná, Andrea; Vaitsis, Christos
2017-01-01
In recent years, medical and healthcare higher education institutions have compiled their curricula in different ways in order to cover all necessary topics and sections that students will need to go through to succeed in their future clinical practice. A medical and healthcare curriculum consists of many descriptive parameters, which define statements of what, when, and how students will learn in the course of their studies. For the purpose of understanding a complicated medical and healthcare curriculum structure, we have developed a web-oriented platform for curriculum management covering in detail formal metadata specifications in accordance with the approved pedagogical background, namely the outcome-based approach. Our platform provides a rich database that can be used for innovative, detailed educational data analysis. In this contribution we present how we used a proven process model as a way of increasing accuracy in solving individual analytical tasks with the available data. Moreover, we introduce an innovative approach to exploring a dataset in accordance with the selected methodology. The results achieved for the selected analytical issues are presented here in clear visual interpretations in an attempt to visually describe the entire medical and healthcare curriculum.
Huff, Mark J; Bodner, Glen E; Fawcett, Jonathan M
2015-04-01
We review and meta-analyze how distinctive encoding alters encoding and retrieval processes and, thus, affects correct and false recognition in the Deese-Roediger-McDermott (DRM) paradigm. Reductions in false recognition following distinctive encoding (e.g., generation), relative to a nondistinctive read-only control condition, reflected both impoverished relational encoding and use of a retrieval-based distinctiveness heuristic. Additional analyses evaluated the costs and benefits of distinctive encoding in within-subjects designs relative to between-group designs. Correct recognition was design independent, but in a within design, distinctive encoding was less effective at reducing false recognition for distinctively encoded lists but more effective for nondistinctively encoded lists. Thus, distinctive encoding is not entirely "cost free" in a within design. In addition to delineating the conditions that modulate the effects of distinctive encoding on recognition accuracy, we discuss the utility of using signal detection indices of memory information and memory monitoring at test to separate encoding and retrieval processes.
NASA Astrophysics Data System (ADS)
Czuba, Jonathan A.; Foufoula-Georgiou, Efi; Gran, Karen B.; Belmont, Patrick; Wilcock, Peter R.
2017-05-01
Understanding how sediment moves along source to sink pathways through watersheds—from hillslopes to channels and in and out of floodplains—is a fundamental problem in geomorphology. We contribute to advancing this understanding by modeling the transport and in-channel storage dynamics of bed material sediment on a river network over a 600 year time period. Specifically, we present spatiotemporal changes in bed sediment thickness along an entire river network to elucidate how river networks organize and process sediment supply. We apply our model to sand transport in the agricultural Greater Blue Earth River Basin in Minnesota. By casting the arrival of sediment to links of the network as a Poisson process, we derive analytically (under supply-limited conditions) the time-averaged probability distribution function of bed sediment thickness for each link of the river network for any spatial distribution of inputs. Under transport-limited conditions, the analytical assumptions of the Poisson arrival process are violated (due to in-channel storage dynamics) where we find large fluctuations and periodicity in the time series of bed sediment thickness. The time series of bed sediment thickness is the result of dynamics on a network in propagating, altering, and amalgamating sediment inputs in sometimes unexpected ways. One key insight gleaned from the model is that there can be a small fraction of reaches with relatively low-transport capacity within a nonequilibrium river network acting as "bottlenecks" that control sediment to downstream reaches, whereby fluctuations in bed elevation can dissociate from signals in sediment supply.
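The network storage result quoted above rests on treating parcel arrivals to a link as a Poisson process, in which case (under supply-limited conditions) the number of parcels resident in a link at any instant, and hence the bed sediment thickness, follows a Poisson distribution with mean equal to arrival rate times residence time. A small simulation (illustrative parameter values, not the Greater Blue Earth calibration) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 4.0        # parcel arrivals per year to the link (assumed)
residence = 2.5   # years each parcel spends in the link (assumed constant)
T = 10_000.0      # simulated record length, years

arrivals = np.cumsum(rng.exponential(1.0 / rate, size=int(rate * T * 1.2)))
arrivals = arrivals[arrivals < T]
departures = arrivals + residence

# Number of parcels in storage sampled at regular times (proxy for bed thickness).
times = np.arange(residence, T, 0.1)
in_storage = np.searchsorted(arrivals, times) - np.searchsorted(departures, times)

print("simulated mean:", in_storage.mean(), " theory:", rate * residence)
print("simulated var :", in_storage.var(),  " theory:", rate * residence)
```

Both the mean and the variance approach rate × residence, as expected for a Poisson-distributed storage state; transport-limited in-channel storage dynamics break exactly this assumption, which is why the paper finds large fluctuations in that regime.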
Theory and Circuit Model for Lossy Coaxial Transmission Line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genoni, T. C.; Anderson, C. N.; Clark, R. E.
2017-04-01
The theory of signal propagation in lossy coaxial transmission lines is revisited and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison to numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described which accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference time-domain simulations which resolve the skin depths of the conducting walls.
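For orientation, the quantities being approximated are the standard distributed-parameter relations for a lossy line with per-unit-length parameters R, L, G, C; the paper's contribution is accurate closed forms for the frequency dependence introduced by the skin effect, which these generic expressions do not show:

\[
\gamma(\omega) = \alpha + j\beta = \sqrt{(R + j\omega L)(G + j\omega C)},
\qquad
Z_0(\omega) = \sqrt{\frac{R + j\omega L}{G + j\omega C}},
\]

with the familiar low-loss limit α ≈ R/(2Z_0) + G Z_0/2.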
Development of an integrated BEM for hot fluid-structure interaction
NASA Technical Reports Server (NTRS)
Banerjee, P. K.; Dargush, G. F.
1989-01-01
The Boundary Element Method (BEM) is chosen as a basic analysis tool principally because the definition of quantities like fluxes, temperature, displacements, and velocities is very precise in a boundary-based discretization scheme. One fundamental difficulty is, of course, that the entire analysis requires a very considerable amount of analytical work which is not present in other numerical methods. During the last 18 months all of this analytical work was completed and a two-dimensional, general purpose code was written. Some of the early results are described. It is anticipated that within the next two to three months almost all two-dimensional idealizations will be examined. It should be noted that the analytical work for the three-dimensional case has also been done and numerical implementation will begin next year.
Epilepsy analytic system with cloud computing.
Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei
2013-01-01
Biomedical data analytic systems have played an important role in clinical diagnosis for several decades. Today, analyzing these big data to provide decision support for physicians is an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. Several modern analytic functions, namely the wavelet transform, a genetic algorithm (GA), and a support vector machine (SVM), are cascaded in the system. To demonstrate the effectiveness of the system, it has been verified on two kinds of electroencephalography (EEG) data, short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training time is accelerated by a factor of about 4.66, and the prediction time also meets real-time requirements.
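A cascade of this kind, wavelet feature extraction followed by SVM classification, can be sketched as follows (a generic illustration assuming the PyWavelets and scikit-learn packages and synthetic signals; it is not the paper's pipeline and omits the GA-based selection stage):

```python
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def wavelet_energy_features(signal, wavelet="db4", level=4):
    """Relative energy of each wavelet decomposition band (simple EEG features)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    return energies / energies.sum()

# Synthetic two-class "EEG" segments: class 1 has extra low-frequency power.
def make_segment(label, n=512):
    t = np.arange(n)
    base = rng.normal(0.0, 1.0, n)
    return base + (2.0 * np.sin(2 * np.pi * t / 64.0) if label else 0.0)

labels = rng.integers(0, 2, size=200)
X = np.array([wavelet_energy_features(make_segment(y)) for y in labels])

clf = SVC(kernel="rbf", C=1.0)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```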
Distributed adaptive diagnosis of sensor faults using structural response data
NASA Astrophysics Data System (ADS)
Dragos, Kosmas; Smarsly, Kay
2016-10-01
The reliability and consistency of wireless structural health monitoring (SHM) systems can be compromised by sensor faults, leading to miscalibrations, corrupted data, or even data loss. Several research approaches towards fault diagnosis, referred to as ‘analytical redundancy’, have been proposed that analyze the correlations between different sensor outputs. In wireless SHM, most analytical redundancy approaches require centralized data storage on a server for data analysis, while other approaches exploit the on-board computing capabilities of wireless sensor nodes, analyzing the raw sensor data directly on board. However, using raw sensor data poses an operational constraint due to the limited power resources of wireless sensor nodes. In this paper, a new distributed autonomous approach towards sensor fault diagnosis based on processed structural response data is presented. The inherent correlations among Fourier amplitudes of acceleration response data, at peaks corresponding to the eigenfrequencies of the structure, are used for diagnosis of abnormal sensor outputs at a given structural condition. Representing an entirely data-driven analytical redundancy approach that does not require any a priori knowledge of the monitored structure or of the SHM system, artificial neural networks (ANN) are embedded into the sensor nodes enabling cooperative fault diagnosis in a fully decentralized manner. The distributed analytical redundancy approach is implemented into a wireless SHM system and validated in laboratory experiments, demonstrating the ability of wireless sensor nodes to self-diagnose sensor faults accurately and efficiently with minimal data traffic. Besides enabling distributed autonomous fault diagnosis, the embedded ANNs are able to adapt to the actual condition of the structure, thus ensuring accurate and efficient fault diagnosis even in case of structural changes.
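The analytical-redundancy idea described above, exploiting correlations among the Fourier peak amplitudes of neighbouring sensors to flag an abnormal output, can be illustrated with a small regression sketch (synthetic amplitudes and scikit-learn's MLP, used purely for illustration; the paper embeds the trained networks on the wireless sensor nodes themselves):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic "Fourier peak amplitudes" for 4 sensors over many monitoring windows:
# all sensors respond to the same modal excitation, so their peaks are correlated.
excitation = rng.uniform(0.5, 1.5, size=(400, 1))
gains = np.array([[1.0, 0.8, 1.2, 0.6]])
amps = excitation * gains + rng.normal(0.0, 0.02, size=(400, 4))

# Train a small ANN to predict sensor 0's peak amplitude from sensors 1-3.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(amps[:, 1:], amps[:, 0])
threshold = 4.0 * np.std(amps[:, 0] - model.predict(amps[:, 1:]))  # residual bound

# A new window where sensor 0 is faulty (gain drift): its residual exceeds the bound.
new = np.array([[0.5, 0.8, 1.2, 0.6]])      # sensor 0 reads half its expected value
residual = abs(new[0, 0] - model.predict(new[:, 1:])[0])
print("fault detected:", residual > threshold)
```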
ERIC Educational Resources Information Center
Steel, Piers
2007-01-01
Procrastination is a prevalent and pernicious form of self-regulatory failure that is not entirely understood. Hence, the relevant conceptual, theoretical, and empirical work is reviewed, drawing upon correlational, experimental, and qualitative findings. A meta-analysis of procrastination's possible causes and effects, based on 691 correlations,…
To date, studies on the environmental behaviour of aggregated aqueous fullerene nanomaterials have used the entire size distribution of fullerene aggregates and do not distinguish between different aggregate size classes. This is a direct result of the lack of analytical methods ...
New optical and radio frequency angular tropospheric refraction models for deep space applications
NASA Technical Reports Server (NTRS)
Berman, A. L.; Rockwell, S. T.
1976-01-01
The development of angular tropospheric refraction models for optical and radio frequency usage is presented. The models are compact analytic functions, finite over the entire domain of elevation angle, and accurate over large ranges of pressure, temperature, and relative humidity. Additionally, FORTRAN subroutines for each of the models are included.
Modelling Coastal Cliff Recession Based on the GIM-DDD Method
NASA Astrophysics Data System (ADS)
Gong, Bin; Wang, Shanyong; Sloan, Scott William; Sheng, Daichao; Tang, Chun'an
2018-04-01
The unpredictable and instantaneous collapse behaviour of coastal rocky cliffs may cause damage that extends significantly beyond the area of failure. Gravitational movements that occur during coastal cliff recession involve two major stages: the small deformation stage and the large displacement stage. In this paper, a method of simulating the entire progressive failure process of coastal rocky cliffs is developed based on the gravity increase method (GIM), the rock failure process analysis method and the discontinuous deformation analysis method, and it is referred to as the GIM-DDD method. The small deformation stage, which includes crack initiation, propagation and coalescence processes, and the large displacement stage, which includes block translation and rotation processes during the rocky cliff collapse, are modelled using the GIM-DDD method. In addition, acoustic emissions, stress field variations, crack propagation and failure mode characteristics are further analysed to provide insights that can be used to predict, prevent and minimize potential economic losses and casualties. The calculation and analytical results are consistent with previous studies, which indicate that the developed method provides an effective and reliable approach for performing rocky cliff stability evaluations and coastal cliff recession analyses and has considerable potential for improving the safety and protection of seaside cliff areas.
Mistakes in a stat laboratory: types and frequency.
Plebani, M; Carraro, P
1997-08-01
Application of Total Quality Management concepts to laboratory testing requires that the total process, including preanalytical and postanalytical phases, be managed so as to reduce or, ideally, eliminate all defects within the process itself. Indeed a "mistake" can be defined as any defect during the entire testing process, from ordering tests to reporting results. We evaluated the frequency and types of mistakes found in the "stat" section of the Department of Laboratory Medicine of the University-Hospital of Padova by monitoring four different departments (internal medicine, nephrology, surgery, and intensive care unit) for 3 months. Among a total of 40490 analyses, we identified 189 laboratory mistakes, a relative frequency of 0.47%. The distribution of mistakes was: preanalytical 68.2%, analytical 13.3%, and postanalytical 18.5%. Most of the laboratory mistakes (74%) did not affect patients' outcome. However, in 37 patients (19%), laboratory mistakes were associated with further inappropriate investigations, thus resulting in an unjustifiable increase in costs. Moreover, in 12 patients (6.4%) laboratory mistakes were associated with inappropriate care or inappropriate modification of therapy. The promotion of quality control and continuous improvement of the total testing process, including pre- and postanalytical phases, seems to be a prerequisite for an effective laboratory service.
NASA Astrophysics Data System (ADS)
Urban, Matthias; Möller, Robert; Fritzsche, Wolfgang
2003-02-01
DNA analytics is a growing field based on the increasing knowledge about the genome with special implications for the understanding of molecular bases for diseases. Driven by the need for cost-effective and high-throughput methods for molecular detection, DNA chips are an interesting alternative to more traditional analytical methods in this field. The standard readout principle for DNA chips is fluorescence based. Fluorescence is highly sensitive and broadly established, but shows limitations regarding quantification (due to signal and/or dye instability) and the need for sophisticated (and therefore high-cost) equipment. This article introduces a readout system for an alternative detection scheme based on electrical detection of nanoparticle-labeled DNA. If labeled DNA is present in the analyte solution, it will bind on complementary capture DNA immobilized in a microelectrode gap. A subsequent metal enhancement step leads to a deposition of conductive material on the nanoparticles, and finally an electrical contact between the electrodes. This detection scheme offers the potential for a simple (low-cost as well as robust) and highly miniaturizable method, which could be well-suited for point-of-care applications in the context of lab-on-a-chip technologies. The demonstrated apparatus allows a parallel readout of an entire array of microstructured measurement sites. The readout is combined with data-processing by an embedded personal computer, resulting in an autonomous instrument that measures and presents the results. The design and realization of such a system is described, and first measurements are presented.
Page, Trevor; Dubina, Henry; Fillipi, Gabriele; Guidat, Roland; Patnaik, Saroj; Poechlauer, Peter; Shering, Phil; Guinn, Martin; Mcdonnell, Peter; Johnston, Craig
2015-03-01
This white paper focuses on equipment, and analytical manufacturers' perspectives, regarding the challenges of continuous pharmaceutical manufacturing across five prompt questions. In addition to valued input from several vendors, commentary was provided from experienced pharmaceutical representatives, who have installed various continuous platforms. Additionally, a small medium enterprise (SME) perspective was obtained through interviews. A range of technical challenges is outlined, including: the presence of particles, equipment scalability, fouling (and cleaning), technology derisking, specific analytical challenges, and the general requirement of improved technical training. Equipment and analytical companies can make a significant contribution to help the introduction of continuous technology. A key point is that many of these challenges exist in batch processing and are not specific to continuous processing. Backward compatibility of software is not a continuous issue per se. In many cases, there is available learning from other industries. Business models and opportunities through outsourced development partners are also highlighted. Agile smaller companies and academic groups have a key role to play in developing skills, working collaboratively in partnerships, and focusing on solving relevant industry challenges. The precompetitive space differs for vendor companies compared with large pharmaceuticals. Currently, there is no strong consensus around a dominant continuous design, partly because of business dynamics and commercial interests. A more structured common approach to process design and hardware and software standardization would be beneficial, with initial practical steps in modeling. Conclusions include a digestible systems approach, accessible and published business cases, and increased user, academic, and supplier collaboration. This mirrors US FDA direction. The concept of silos in pharmaceutical companies is a common theme throughout the white papers. In the equipment domain, this is equally prevalent among a broad range of companies, mainly focusing on discrete areas. As an example, the flow chemistry and secondary drug product communities are almost entirely disconnected. Control and Process Analytical Technologies (PAT) companies are active in both domains. The equipment actors are a very diverse group with a few major Original Equipment Manufacturers (OEM) players and a variety of SME, project providers, integrators, upstream downstream providers, and specialist PAT. In some cases, partnerships or alliances are formed to increase critical mass. This white paper has focused on small molecules; equipment associated with biopharmaceuticals is covered in a separate white paper. More specifics on equipment detail are provided in final dosage form and drug substance white papers. The equipment and analytical development from laboratory to pilot to production is important, with a variety of sensors and complexity reducing with scale. The importance of robust processing rather than overcomplex control strategy mitigation is important. A search of nonacademic literature highlights, with a few notable exceptions, a relative paucity of material. Much focuses on the economics and benefits of continuous, rather than specifics of equipment issues. The disruptive nature of continuous manufacturing represents either an opportunity or a threat for many companies, so the incentive to change equipment varies. 
Also, for many companies, the pharmaceutical sector is not actually the dominant sector in terms of sales. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Berry, C.J.; Adams, M.G.
2011-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colo., has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colo., U.S.A. In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring groundwater at part of this site. In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program was recently extended through the end of 2010 and is now completed. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock groundwater, and stream-bed sediment. Streams at the site are dry most of the year, so samples of stream-bed sediment deposited after rain were used to indicate surface-water runoff effects. This report summarizes analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed for 2010. In general, the objective of each component of the study was to determine whether concentrations of nine trace elements ("priority analytes") (1) were higher than regulatory limits, (2) were increasing with time, or (3) were significantly higher in biosolids-applied areas than in a similar farmed area where biosolids were not applied (background). Previous analytical results indicate that the elemental composition of biosolids from the Denver plant was consistent during 1999-2009, and this consistency continues with the samples for 2010. Total concentrations of regulated trace elements remain consistently lower than the regulatory limits for the entire monitoring period. Concentrations of none of the priority analytes appear to have increased during the 12 years of this study.
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Berry, C.J.; Adams, M.G.
2010-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver, a large wastewater treatment plant in Denver, Colo., has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colo., U.S.A. In cooperation with the Metro District in 1993, the U.S. Geological Survey began monitoring groundwater at part of this site. In 1999, the Survey began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program has recently been extended through the end of 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock groundwater, and stream-bed sediment. Streams at the site are dry most of the year, so samples of stream-bed sediment deposited after rain were used to indicate surface-water effects. This report presents analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed for 2009. In general, the objective of each component of the study was to determine whether concentrations of nine trace elements ('priority analytes') (1) were higher than regulatory limits, (2) were increasing with time, or (3) were significantly higher in biosolids-applied areas than in a similar farmed area where biosolids were not applied. Previous analytical results indicate that the elemental composition of biosolids from the Denver plant was consistent during 1999-2008, and this consistency continues with the samples for 2009. Total concentrations of regulated trace elements remain consistently lower than the regulatory limits for the entire monitoring period. Concentrations of none of the priority analytes appear to have increased during the 11 years of this study.
Koskinen, M T; Holopainen, J; Pyörälä, S; Bredbacka, P; Pitkälä, A; Barkema, H W; Bexiga, R; Roberson, J; Sølverød, L; Piccinini, R; Kelton, D; Lehmusto, H; Niskala, S; Salmikivi, L
2009-03-01
Intramammary infection (IMI), also known as mastitis, is the most frequently occurring and economically the most important infectious disease in dairy cattle. This study provides a validation of the analytical specificity and sensitivity of a real-time PCR-based assay that identifies 11 major pathogen species or species groups responsible for IMI, and a gene coding for staphylococcal beta-lactamase production (penicillin resistance). Altogether, 643 culture isolates originating from clinical bovine mastitis, human, and companion animal samples were analyzed using the assay. The isolates represented 83 different species, groups, or families, and originated from 6 countries in Europe and North America. The analytical specificity and sensitivity of the assay were 100% in bacterial and beta-lactamase identification across all isolates originating from bovine mastitis (n = 454). When considering the entire culture collection (including the isolates originating from human and companion animal samples), 4 Streptococcus pyogenes, 1 Streptococcus salivarius, and 1 Streptococcus sanguis strain of human origin were identified as Streptococcus uberis, and 3 Shigella spp. strains were identified as Escherichia coli, decreasing specificity to 99% in Strep. uberis and to 99.5% in E. coli. These false-positive results were confirmed by sequencing of the 16S rRNA gene. Specificity and sensitivity remained at 100% for all other bacterial targets across the entire culture collection. In conclusion, the real-time PCR assay shows excellent analytical accuracy and holds much promise for use in routine bovine IMI testing programs. This study provides the basis for evaluating the assay's diagnostic performance against the conventional bacterial culture method in clinical field trials using mastitis milk samples.
Fall Velocities of Hydrometeors in the Atmosphere: Refinements to a Continuous Analytical Power Law.
NASA Astrophysics Data System (ADS)
Khvorostyanov, Vitaly I.; Curry, Judith A.
2005-12-01
This paper extends the previous research of the authors on the unified representation of fall velocities for both liquid and crystalline particles as a power law over the entire size range of hydrometeors observed in the atmosphere. The power-law coefficients are determined as continuous analytical functions of the Best or Reynolds number or of the particle size. Here, analytical expressions are formulated for the turbulent corrections to the Reynolds number and to the power-law coefficients that describe the continuous transition from the laminar to the turbulent flow around a falling particle. A simple analytical expression is found for the correction of fall velocities for temperature and pressure. These expressions and the resulting fall velocities are compared with observations and other calculations for a range of ice crystal habits and sizes. This approach provides a continuous analytical power-law description of the terminal velocities of liquid and crystalline hydrometeors with sufficiently high accuracy and can be directly used in bin-resolving models or incorporated into parameterizations for cloud- and large-scale models and remote sensing techniques.
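To illustrate the kind of continuous power-law fall-speed scheme described above, the sketch below maps the Best number to a Reynolds number with an Abraham-type smooth interpolation between the viscous and inertial drag regimes and returns a terminal velocity. The constants and the example drop are assumptions for illustration, not values taken from the paper, and the turbulent and temperature-pressure corrections discussed in the abstract are omitted.

```python
import numpy as np

# Illustrative sketch of a continuous power-law fall-speed scheme:
# the Best number X is mapped to a Reynolds number by a smooth interpolation
# between the viscous and inertial drag regimes, giving v_t = nu*Re/D.
# Constants and the example particle are assumptions, not values from the paper.

def best_number(D, m, rho_air, nu, area):
    """Best (Davies) number X = 2*m*g*D^2 / (rho_air * nu^2 * A)."""
    g = 9.81
    return 2.0 * m * g * D**2 / (rho_air * nu**2 * area)

def reynolds_from_best(X, delta0=9.06, C0=0.292):
    """Smooth Re(X) relation of the Abraham boundary-layer type (assumed constants)."""
    return (delta0**2 / 4.0) * (np.sqrt(1.0 + 4.0 * np.sqrt(X) / (delta0**2 * np.sqrt(C0))) - 1.0)**2

def terminal_velocity(D, m, rho_air=1.2, nu=1.5e-5):
    area = np.pi * D**2 / 4.0                    # projected area of a sphere
    X = best_number(D, m, rho_air, nu, area)
    return nu * reynolds_from_best(X) / D

# Example: a 1 mm water drop near the surface (illustrative numbers only)
D = 1.0e-3                                       # diameter, m
m = 1000.0 * np.pi / 6.0 * D**3                  # mass of a spherical drop, kg
print(f"terminal velocity ~ {terminal_velocity(D, m):.1f} m/s")
```

For this drop the sketch gives roughly 4 m/s, the expected order of magnitude; refinements of the kind described in the paper (turbulent corrections, temperature-pressure adjustment) would enter as additional corrections to Re and to the resulting fall speed.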
"Decknamen or pseudochemical language"? Eirenaeus Philalethes and Carl Jung.
Newman, W R
1996-01-01
It is impossible to investigate the historiography of alchemy without encountering the ideas of the "father of analytical psychology", Carl Jung. Jung argued that alchemy, viewed as a diachronic, trans-cultural entity, was concerned more with psychological states occurring in the mind of the practitioner than with real chemical processes. In the course of elucidating this idea, Jung draws on a number of alchemical authors from the early modern period. One of these is Eirenaeus Philalethes, the pen name of George Starkey (1628-1665), a native of Bermuda who was educated at Harvard College, and who later immigrated to London. A careful analysis of Starkey's work shows, however, that Jung was entirely wrong in his assessment of this important representative of seventeenth-century alchemy. This finding casts serious doubt on the Jungian interpretation of alchemy as a whole.
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work, existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor-intensive task, as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of the system performance as a function of the foil parameters. The new method, while being computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
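A minimal sketch of the systematic-scan idea follows: evaluate a dose-uniformity figure of merit over a grid of foil thicknesses and keep the best pair. The function run_mc_profile below is a hypothetical stand-in for a real Monte Carlo run; its "physics" is a toy surrogate, and the thickness ranges are assumptions for illustration only.

```python
import itertools
import numpy as np

# Sketch of the systematic scan: evaluate a dose-uniformity figure of merit over a
# grid of (primary, secondary) foil thicknesses and keep the best pair.
# run_mc_profile() is a toy surrogate standing in for a real Monte Carlo transport
# run; its "physics" is purely illustrative.

X = np.linspace(-0.5, 0.5, 201)                  # transverse coordinate, m (illustrative)

def run_mc_profile(t_primary_mm, t_secondary_mm):
    """Toy stand-in for a Monte Carlo dose profile at the exit window."""
    sigma = 0.12 + 0.35 * t_primary_mm           # thicker primary foil -> broader spread
    dip = 1.0 - 0.8 * (1.0 - t_secondary_mm) * np.exp(-(X / 0.15)**2)
    return np.exp(-(X / sigma)**2) * dip

def non_uniformity(profile, half_field=0.25):
    """Relative peak-to-valley variation over the central useful field."""
    p = profile[np.abs(X) <= half_field]
    return (p.max() - p.min()) / p.max()

grid_primary = np.linspace(0.02, 0.20, 10)       # mm, assumed scan range
grid_secondary = np.linspace(0.2, 1.0, 9)        # mm, assumed scan range

best = min(itertools.product(grid_primary, grid_secondary),
           key=lambda pair: non_uniformity(run_mc_profile(*pair)))
print("best (primary, secondary) foil thicknesses [mm]:", tuple(round(t, 2) for t in best))
```

In a real design each grid point would launch an independent Monte Carlo job, which is what makes the approach computationally heavy but also embarrassingly parallel.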
Statistically Qualified Neuro-Analytic System and Method for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
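The two-stage structure described above can be sketched for a simple scalar process: an analytic model is augmented with a small neural network fitted to the residuals (deterministic stage), and the residual spread of the trained hybrid is then used to score new measurements with a Gaussian likelihood (stochastic stage). This is an illustrative reconstruction, not the patented algorithm; the random-feature network, the toy process and all numbers are assumptions.

```python
import numpy as np

# Minimal sketch of the two-stage idea (assumed structure, not the patented method):
# 1) deterministic stage: an analytic model is augmented with a small neural network
#    that absorbs what the analytic model misses;
# 2) stochastic stage: the residual spread of the trained hybrid model is quantified
#    so new measurements can be scored by a likelihood for change detection.

rng = np.random.default_rng(0)

def process(u):        return 2.0 * u + 0.4 * np.sin(3 * u)   # "true" process (unknown)
def analytic_model(u): return 2.0 * u                         # known physics only

u_train = rng.uniform(-2, 2, 400)
y_train = process(u_train) + rng.normal(0, 0.05, u_train.size)

# Neural augmentation: random-feature hidden layer, linear readout fit to residuals
W, b = rng.normal(size=30), rng.normal(size=30)
hidden = lambda u: np.tanh(np.outer(u, W) + b)
residual = y_train - analytic_model(u_train)
beta, *_ = np.linalg.lstsq(hidden(u_train), residual, rcond=None)

def hybrid_model(u):
    return analytic_model(u) + hidden(u) @ beta

# Stochastic qualification: residual standard deviation -> Gaussian likelihood
sigma = np.std(y_train - hybrid_model(u_train))

def log_likelihood(u, y):
    r = y - hybrid_model(u)
    return -0.5 * np.sum((r / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

u_new = np.array([1.0])
print(log_likelihood(u_new, process(u_new)),           # nominal measurement
      log_likelihood(u_new, process(u_new) + 0.5))     # drifted measurement scores lower
```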
NASA Astrophysics Data System (ADS)
Jennings, E. S.; Wade, J.; Laurenz, V.; Kearns, S.; Buse, B.; Rubie, D. C.
2017-12-01
The process by which the Earth's core segregated, and its resulting composition, can be inferred from the composition of the bulk silicate Earth if the partitioning of various elements into metal at relevant conditions is known. As such, partitioning experiments between liquid metal and liquid silicate over a wide range of pressures and temperatures are frequently performed to constrain the partitioning behaviour of many elements. The use of diamond anvil cell experiments to access more extreme conditions than those achievable by larger volume presses is becoming increasingly common. With a volume several orders of magnitude smaller than conventional samples, these experiments present unique analytical challenges. Typically, sample preparation is performed by FIB as a 2 μm thick slice, containing a small iron ball surrounded by a layer of silicate melt. This implies that EPMA analyses will be made near boundaries where fluoresced X-rays from the neighbouring phase may be significant. By measuring and simulating synthetic samples, we investigate thickness and fluorescence limitations. We find that for typical sample geometries, a thickness of 2 μm contains the entire analytical volume for standard 15 kV analyses of metals. Fluoresced X-rays from light elements into the metal are below detection limits if there is no direct electron interaction with the silicate. Continuum fluorescence from higher atomic number elements from the metal into silicate poses significant difficulties [1]. This can cause metal-silicate partition coefficients of siderophile elements to be underestimated. Finally, we examine the origin and analytical consequences of oxide-rich exsolutions that are frequently found in the metal phase of such experiments. These are spherical with diameters of 100 nm and can be sparsely to densely packed. They appear to be carbon-rich and result in low analytical totals by violating the assumption of homogeneity in matrix corrections (e.g. φρz), which results in incorrect relative abundances. Using low kV analysis, we explore their origin, i.e., whether they originate from quench exsolution or dynamic processes. Identifying their composition is key to understanding their origin and the interpretation of DAC experimental results. [1] Wade J & Wood B. J. (2012) PEPI 192-193, 54-58.
Medical Student Use of Facebook to Support Preparation for Anatomy Assessments
ERIC Educational Resources Information Center
Pickering, James D.; Bickerdike, Suzanne R.
2017-01-01
The use of Facebook to support students is an emerging area of educational research. This study explored how a Facebook Page could support Year 2 medical (MBChB) students in preparation for summative anatomy assessments and alleviate test anxiety. Overall, Facebook analytics revealed that in total 49 (19.8% of entire cohort) students posted a…
Forest resources of the eastern Ozark Region in Missouri
The Forest Survey Organization Central States Forest Experiment Station
1948-01-01
This Survey Release presents the more significant statistics on forest area and timber volume in 14 counties in the Eastern Ozark region of Missouri. As soon as statistical tabulations have been completed other releases will be issued giving similar information for the other important subdivisions of the State. Later an analytical report for the entire State will be...
Forest resources of the Riverborder region in Missouri
The Forest Survey Organization Central States Forest Experiment Station
1948-01-01
This Survey Release presents the more significant statistics on forest area and timber volume in the Riverborder region of eastern Missouri. Similar releases have been issued for the other forest regions of the State. A summary release giving similar data for the entire State will be published shortly. Later an analytical report for the State will be published, which...
Parsec-Scale Obscuring Accretion Disk with Large-Scale Magnetic Field in AGNs
NASA Technical Reports Server (NTRS)
Dorodnitsyn, A.; Kallman, T.
2017-01-01
A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick pc (parsec)-scale torus in AGNs (Active Galactic Nuclei). Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.
Prediction of turning stability using receptance coupling
NASA Astrophysics Data System (ADS)
Jasiewicz, Marcin; Powałka, Bartosz
2018-01-01
This paper addresses the prediction of machining stability for the dynamic "lathe - workpiece" system using the receptance coupling method. The dynamic properties of the lathe components (the spindle and the tailstock) are assumed to be constant and can be determined experimentally from impact tests. Hence, the variable element of the "machine tool - holder - workpiece" system is the machined part, which can be easily modelled analytically. The receptance coupling method enables a synthesis of the experimental (spindle, tailstock) and analytical (machined part) models, so impact testing of the entire system becomes unnecessary. The paper presents the methodology for synthesizing the analytical and experimental models, the evaluation of the stability lobes, and an experimental validation procedure involving both the determination of the dynamic properties of the system and cutting tests. Finally, the experimental verification results are presented and discussed.
Parsec-scale Obscuring Accretion Disk with Large-scale Magnetic Field in AGNs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorodnitsyn, A.; Kallman, T.
A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick pc-scale torus in AGNs. Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.
Zhang, Lei; Yue, Hong-Shui; Ju, Ai-Chun; Ye, Zheng-Liang
2016-10-01
Currently, near infrared spectroscopy (NIRS) is considered an efficient tool for achieving process analytical technology (PAT) in the manufacture of traditional Chinese medicine (TCM) products. In this article, the NIRS-based process analytical system for the production of salvianolic acid for injection is introduced. The design of the process analytical system is described in detail, including the selection of monitored processes and testing mode, and potential risks that should be avoided. Moreover, the development of related technologies is also presented, including the establishment of monitoring methods for the elution of polyamide resin and macroporous resin chromatography processes, as well as the rapid analysis method for finished products. Based on the authors' experience of research and work, several issues in the application of NIRS to process monitoring and control in TCM production are then raised, and some potential solutions are also discussed. The issues include building the technical team for the process analytical system, the design of the process analytical system in the manufacture of TCM products, standardization of the NIRS-based analytical methods, and improving the management of the process analytical system. Finally, the prospects for the application of NIRS in the TCM industry are put forward. Copyright © by the Chinese Pharmaceutical Association.
A normal tissue dose response model of dynamic repair processes.
Alber, Markus; Belka, Claus
2006-01-07
A model is presented for serial, critical element complication mechanisms for irradiated volumes from length scales of a few millimetres up to the entire organ. The central element of the model is the description of radiation complication as the failure of a dynamic repair process. The nature of the repair process is seen as reestablishing the structural organization of the tissue, rather than mere replenishment of lost cells. The interactions between the cells, such as migration, involved in the repair process are assumed to have finite ranges, which limits the repair capacity and is the defining property of a finite-sized reconstruction unit. Since the details of the repair processes are largely unknown, the development aims to make the most general assumptions about them. The model employs analogies and methods from thermodynamics and statistical physics. An explicit analytical form of the dose response of the reconstruction unit for total, partial and inhomogeneous irradiation is derived. The use of the model is demonstrated with data from animal spinal cord experiments and clinical data about heart, lung and rectum. The three-parameter model lends a new perspective to the equivalent uniform dose formalism and the established serial and parallel complication models. Its implications for dose optimization are discussed.
Development of a software tool to support chemical and biological terrorism intelligence analysis
NASA Astrophysics Data System (ADS)
Hunt, Allen R.; Foreman, William
1997-01-01
AKELA has developed a software tool which uses a systems analytic approach to model the critical processes which support the acquisition of biological and chemical weapons by terrorist organizations. This tool has four major components. The first is a procedural expert system which describes the weapon acquisition process. It shows the relationship between the stages a group goes through to acquire and use a weapon, and the activities in each stage required to be successful. It applies to both state-sponsored and small-group acquisition. An important part of this expert system is an analysis of the acquisition process which is embodied in a list of observables of weapon acquisition activity. These observables are cues for intelligence collection. The second component is a detailed glossary of technical terms which helps analysts with a non-technical background understand the potential relevance of collected information. The third component is a linking capability which shows where technical terms apply to the parts of the acquisition process. The final component is a simple, intuitive user interface which shows a picture of the entire process at a glance and lets the user move quickly to get more detailed information. This paper explains each of these model components.
Is Analytic Information Processing a Feature of Expertise in Medicine?
ERIC Educational Resources Information Center
McLaughlin, Kevin; Rikers, Remy M.; Schmidt, Henk G.
2008-01-01
Diagnosing begins by generating an initial diagnostic hypothesis by automatic information processing. Information processing may stop here if the hypothesis is accepted, or analytical processing may be used to refine the hypothesis. This description portrays analytic processing as an optional extra in information processing, leading us to…
Invited Review Small is beautiful: The analysis of nanogram-sized astromaterials
NASA Astrophysics Data System (ADS)
Zolensky, M. E.; Pieters, C.; Clark, B.; Papike, J. J.
2000-01-01
The capability of modern methods to characterize ultra-small samples is well established from analysis of interplanetary dust particles (IDPs), interstellar grains recovered from meteorites, and other materials requiring ultra-sensitive analytical capabilities. Powerful analytical techniques are available that require, under favorable circumstances, single particles of only a few nanograms for entire suites of fairly comprehensive characterizations. A returned sample of >1,000 particles with total mass of just one microgram permits comprehensive quantitative geochemical measurements that are impractical to carry out in situ by flight instruments. The main goal of this paper is to describe the state-of-the-art in microanalysis of astromaterials. Given that we can analyze fantastically small quantities of asteroids and comets, etc., we have to ask ourselves how representative are microscopic samples of bodies that measure a few to many km across? With the Galileo flybys of Gaspra and Ida, it is now recognized that even very small airless bodies have indeed developed a particulate regolith. Acquiring a sample of the bulk regolith, a simple sampling strategy, provides two critical pieces of information about the body. Regolith samples are excellent bulk samples since they normally contain all the key components of the local environment, albeit in particulate form. Furthermore, since this fine fraction dominates remote measurements, regolith samples also provide information about surface alteration processes and are a key link to remote sensing of other bodies. Studies indicate that a statistically significant number of nanogram-sized particles should be able to characterize the regolith of a primitive asteroid, although the presence of larger components within even primitive meteorites (e.g., Murchison), e.g. chondrules, CAI, large crystal fragments, etc., points out the limitations of using data obtained from nanogram-sized samples to characterize entire primitive asteroids. However, most important asteroidal geological processes have left their mark on the matrix, since this is the finest-grained portion and therefore most sensitive to chemical and physical changes. Thus, the following information can be learned from this fine grain size fraction alone: (1) mineral paragenesis; (2) regolith processes; (3) bulk composition; (4) conditions of thermal and aqueous alteration (if any); (5) relationships to planets, comets, meteorites (via isotopic analyses, including oxygen); (6) abundance of water and hydrated material; (7) abundance of organics; (8) history of volatile mobility; (9) presence and origin of presolar and/or interstellar material. Most of this information can even be obtained from dust samples from bodies for which nanogram-sized samples are not truly representative. Future advances in sensitivity and accuracy of laboratory analytical techniques can be expected to enhance the science value of nano- to microgram sized samples even further. This highlights a key advantage of sample returns - that the most advanced analysis techniques can always be applied in the laboratory, and that well-preserved samples are available for future investigations.
Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N
2014-06-20
The multiple dual mode (MDM) counter-current chromatography separation processes consist of a succession of two isocratic counter-current steps and are characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, the improved MDM method based on variable duration of alternating phase elution steps has been developed and validated. The MDM separation processes with variable duration of phase elution steps are analyzed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of the MDM separation are analyzed: (1) one-step solute elution, in which the separation is conducted so that the sample is transferred forward and back with the upper and lower phases inside the column until the desired separation of the components is reached, and then each individual component elutes entirely within one step; and (2) multi-step solute elution, in which the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with solvent systems hexane/ethyl acetate/methanol/water. The experimental results are compared to the predictions of the theory. A good agreement between theory and experiment has been demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
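A toy Craig-type cell model conveys the shuttle idea described above: during an upper-phase step the upper-phase fraction of each cell advances one cell, during a lower-phase step the lower-phase fraction moves back one cell, and the step durations may differ. The cell count, partition coefficients and schedule below are invented for illustration and are not the paper's cell-model solution or experimental conditions.

```python
import numpy as np

# Toy Craig-type cell model of multiple dual mode (MDM) operation: the column is a
# chain of equilibrium cells; an "upper phase" transfer moves the upper-phase fraction
# of each cell forward, a "lower phase" transfer moves the lower-phase fraction back.
# All numbers are illustrative assumptions.

N = 60                                     # number of cells
K = {"solute A": 0.6, "solute B": 1.8}     # upper/lower partition coefficients
Vu_over_Vl = 1.0                           # phase volume ratio per cell
frac_upper = {s: k * Vu_over_Vl / (k * Vu_over_Vl + 1.0) for s, k in K.items()}

def run_mdm(schedule, frac_up):
    """schedule: list of (direction, n_transfers); direction +1 = upper, -1 = lower."""
    col = np.zeros(N)
    col[0] = 1.0                           # impulse sample loading at the column inlet
    eluted_fwd = eluted_back = 0.0
    for direction, n in schedule:
        for _ in range(n):
            if direction > 0:              # upper phase carries material forward
                moved = frac_up * col
                col -= moved
                col[1:] += moved[:-1]
                eluted_fwd += moved[-1]
            else:                          # lower phase carries material backward
                moved = (1.0 - frac_up) * col
                col -= moved
                col[:-1] += moved[1:]
                eluted_back += moved[0]
    return col, eluted_fwd, eluted_back

# Alternating steps of unequal duration (the "variable duration" idea)
schedule = [(+1, 40), (-1, 25), (+1, 45), (-1, 25), (+1, 80)]
for solute, f in frac_upper.items():
    col, fwd, back = run_mdm(schedule, f)
    print(f"{solute}: eluted forward {fwd:.2f}, backward {back:.2f}, retained {col.sum():.2f}")
```

Adjusting the per-step transfer counts in the schedule is the discrete analogue of choosing the phase flow times, which is exactly the degree of freedom the paper exploits.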
Exact and Approximate Solutions for Transient Squeezing Flow
NASA Astrophysics Data System (ADS)
Lang, Ji; Santhanam, Sridhar; Wu, Qianhong
2017-11-01
In this paper, we report two novel theoretical approaches to examine a fast-developing flow in a thin fluid gap, which is widely observed in industrial applications and biological systems. The problem is characterized by a very small Reynolds number and Strouhal number, so that the convective acceleration of the fluid is negligible while its local acceleration is not. We have developed an exact solution for this problem, which shows that the flow starts from an inviscid limit, before the viscous effect has time to appear, followed by a developing flow in which the viscous effect progressively penetrates the entire fluid gap. An approximate solution is also developed using a boundary layer integral method. This solution precisely captures the general behavior of the transient fluid flow process, and agrees very well with the exact solution. We also performed numerical simulation using Ansys-CFX. Excellent agreement between the analytical and the numerical solutions is obtained, indicating the validity of the analytical approaches. The study presented herein fills a gap in the literature and will have a broad impact on industrial and biomedical applications. This work is supported by National Science Foundation CBET Fluid Dynamics Program under Award #1511096, and supported by the Seed Grant from The Villanova Center for the Advancement of Sustainability in Engineering (VCASE).
Olivares, David; Bravo, Manuel; Feldmann, Jorg; Raab, Andrea; Neaman, Alexander; Quiroz, Waldo
2012-01-01
A new method for antimony speciation in terrestrial edible vegetables (spinach, onions, and carrots) was developed using HPLC with hydride generation-atomic fluorescence spectrometry. Mechanical agitation and ultrasound were tested as extraction techniques. Different extraction reagents were evaluated and optimal conditions were determined using experimental design methodology, where EDTA (10 mmol/L, pH 2.5) was selected because this chelate solution produced the highest extraction yield and exhibited the best compatibility with the mobile phase. The results demonstrated that EDTA prevents oxidation of Sb(III) to Sb(V) and maintains the stability of antimony species during the entire analytical process. The LOD and precision (RSD values obtained) for Sb(V), Sb(III), and trimethyl Sb(V) were 0.08, 0.07, and 0.9 microg/L and 5.0, 5.2, and 4.7%, respectively, for a 100 microL sample volume. The application of this method to real samples allowed extraction of 50% of the total antimony content from spinach, while the antimony extracted from carrot and onion samples ranged between 50 and 60% and between 54 and 70%, respectively. Only Sb(V) was detected in three roots (onion and spinach) that represented 60-70% of the total antimony in the extracts.
Focant, Jean-François; Eppe, Gauthier; Massart, Anne-Cécile; Scholl, Georges; Pirard, Catherine; De Pauw, Edwin
2006-10-13
We report on the use of a state-of-the-art method for the measurement of selected polychlorinated dibenzo-p-dioxins, polychlorinated dibenzofurans and polychlorinated biphenyls in human serum specimens. The sample preparation procedure is based on manual small size solid-phase extraction (SPE) followed by automated clean-up and fractionation using multi-sorbent liquid chromatography columns. SPE cartridges and all clean-up columns are disposable. Samples are processed in batches of 20 units, including one blank control (BC) sample and one quality control (QC) sample. The analytical measurement is performed using gas chromatography coupled to isotope dilution high-resolution mass spectrometry. The sample throughput corresponds to one series of 20 samples per day, from sample reception to data quality cross-check and reporting, once the procedure has been started and series of samples keep being produced. Four analysts are required to ensure proper performances of the procedure. The entire procedure has been validated under International Organization for Standardization (ISO) 17025 criteria and further tested over more than 1500 unknown samples during various epidemiological studies. The method is further discussed in terms of reproducibility, efficiency and long-term stability regarding the 35 target analytes. Data related to quality control and limit of quantification (LOQ) calculations are also presented and discussed.
Miller, Tyler M; Geraci, Lisa
2016-05-01
People may change their memory predictions after retrieval practice using naïve theories of memory and/or by using subjective experience - analytic and non-analytic processes, respectively. The current studies disentangled the contributions of each process. In one condition, learners studied paired-associates, made a memory prediction, completed a short run of retrieval practice and made a second prediction. In another condition, judges read about a yoked learner's retrieval practice performance but did not participate in retrieval practice and therefore could not use non-analytic processes for the second prediction. In Study 1, learners reduced their predictions following moderately difficult retrieval practice whereas judges increased their predictions. In Study 2, learners made lower adjusted predictions than judges following both easy and difficult retrieval practice. In Study 3, judge-like participants used analytic processes to report adjusted predictions. Overall, the results suggested non-analytic processes play a key role in participants reducing their predictions after retrieval practice. Copyright © 2016 Elsevier Inc. All rights reserved.
Zadran, Sohila; Levine, Raphael D
2013-01-01
Metabolic engineering seeks to redirect metabolic pathways through the modification of specific biochemical reactions or the introduction of new ones with the use of recombinant technology. Many chemicals are synthesized via the introduction of product-specific enzymes or the reconstruction of entire metabolic pathways into engineered hosts that can sustain production and synthesize high yields of the desired product. Because yields of natural product-derived compounds are frequently low, and chemical processes can be both energy and material expensive, current endeavors have focused on using biologically derived processes as alternatives to chemical synthesis. Such economically favorable manufacturing processes pursue goals related to sustainable development and "green chemistry". Metabolic engineering is a multidisciplinary approach, involving chemical engineering, molecular biology, biochemistry, and analytical chemistry. Recent advances in molecular biology, genome-scale models, theoretical understanding, and kinetic modeling have increased interest in using metabolic engineering to redirect metabolic fluxes for industrial and therapeutic purposes. The use of metabolic engineering has increased the productivity of industrially pertinent small molecules, alcohol-based biofuels, and biodiesel. Here, we highlight developments in the practical and theoretical strategies and technologies available for the metabolic engineering of simple systems and address current limitations.
Kang, Jian; Zhang, Jixin; Bai, Yongqiang
2016-12-15
An evaluation of the oil-spill emergency response capability (OS-ERC) currently in place in modern marine management is required to prevent pollution and loss accidents. The objective of this paper is to develop a novel OS-ERC evaluation model, the importance of which stems from the current lack of integrated approaches for interpreting, ranking and assessing OS-ERC performance factors. In the first part of this paper, the factors influencing OS-ERC are analyzed and classified to generate a global evaluation index system. Then, a semantic tree is adopted to illustrate linguistic variables in the evaluation process, followed by the application of a combination of Fuzzy Cognitive Maps (FCM) and the Analytic Hierarchy Process (AHP) to construct and calculate the weight distribution. Finally, considering that the OS-ERC evaluation process is a complex system, a fuzzy comprehensive evaluation (FCE) is employed to calculate the OS-ERC level. The entire evaluation framework obtains the overall level of OS-ERC, and also highlights the potential major issues concerning OS-ERC, as well as expert opinions for improving the feasibility of oil-spill accident prevention and protection. Copyright © 2016 Elsevier Ltd. All rights reserved.
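As one concrete piece of the weighting step described above, the sketch below derives AHP priority weights and a consistency ratio from a pairwise comparison matrix. The matrix entries and the four-factor structure are invented for illustration, and the FCM coupling and the fuzzy comprehensive evaluation are not reproduced here.

```python
import numpy as np

# Sketch of the AHP step: priority weights from a Saaty-scale pairwise comparison
# matrix via the principal eigenvector, plus a consistency check.
# The comparison values below are illustrative assumptions.

A = np.array([                      # pairwise comparisons of 4 OS-ERC factors
    [1,   3,   5,   2  ],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1  ],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)         # principal (Perron) eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                        # priority weights of the evaluation factors

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
RI = 0.90                           # commonly quoted random index for n = 4
print("weights:", np.round(w, 3), " consistency ratio:", round(CI / RI, 3))
```

A consistency ratio below about 0.1 is the usual threshold for accepting the expert judgments; otherwise the comparison matrix is revisited before the weights feed the fuzzy evaluation.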
Synchronicity - The Link Between Physics and Psyche, from Pauli and Jung to Chopra
NASA Astrophysics Data System (ADS)
Teodorani, M.
2006-07-01
This book, which is entirely dedicated to the mystery of "synchronicity", is divided into three parts: a) the joint research between analytic psychologist Carl Gustav Jung and quantum physicist Wolfgang Pauli; b) synchronicity mechanisms occurring in the microscopic (canonical quantum entanglement), mesoscopic and macroscopic scales; c) research and philosophy concerning synchronicity by MD Deepak Chopra.
Masuda, Takahiko; Nisbett, Richard E
2006-03-04
Research on perception and cognition suggests that whereas East Asians view the world holistically, attending to the entire field and relations among objects, Westerners view the world analytically, focusing on the attributes of salient objects. These propositions were examined in the change-blindness paradigm. Research in that paradigm finds American participants to be more sensitive to changes in focal objects than to changes in the periphery or context. We anticipated that this would be less true for East Asians and that they would be more sensitive to context changes than would Americans. We presented participants with still photos and with animated vignettes having changes in focal object information and contextual information. Compared to Americans, East Asians were more sensitive to contextual changes than to focal object changes. These results suggest that there can be cultural variation in what may seem to be basic perceptual processes. 2006 Lawrence Erlbaum Associates, Inc.
NASA Astrophysics Data System (ADS)
Lee, Yang-Sub
A time-domain numerical algorithm for solving the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear parabolic wave equation is developed for pulsed, axisymmetric, finite amplitude sound beams in thermoviscous fluids. The KZK equation accounts for the combined effects of diffraction, absorption, and nonlinearity at the same order of approximation. The accuracy of the algorithm is established via comparison with analytical solutions for several limiting cases, and with numerical results obtained from a widely used algorithm for solving the KZK equation in the frequency domain. The time domain algorithm is used to investigate waveform distortion and shock formation in directive sound beams radiated by pulsed circular piston sources. New results include predictions for the entire process of self-demodulation, and for the effect of frequency modulation on pulse envelope distortion. Numerical results are compared with measurements, and focused sources are investigated briefly.
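For reference, the KZK equation solved by such algorithms is commonly written in retarded time as below (standard notation assumed; the author's own notation may differ):

```latex
\frac{\partial^{2} p}{\partial z\,\partial \tau}
  = \frac{c_{0}}{2}\,\nabla_{\!\perp}^{2} p
  + \frac{\delta}{2 c_{0}^{3}}\,\frac{\partial^{3} p}{\partial \tau^{3}}
  + \frac{\beta}{2 \rho_{0} c_{0}^{3}}\,\frac{\partial^{2} p^{2}}{\partial \tau^{2}},
\qquad \tau = t - z/c_{0},
```

where the three right-hand terms represent diffraction, thermoviscous absorption and quadratic nonlinearity, and for an axisymmetric beam the transverse Laplacian is \(\nabla_{\!\perp}^{2} = \partial^{2}/\partial r^{2} + (1/r)\,\partial/\partial r\).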
Advances in spectroscopic methods for quantifying soil carbon
Liebig, Mark; Franzluebbers, Alan J.; Follett, Ronald F.; Hively, W. Dean; Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco
2012-01-01
The gold standard for soil C determination is combustion. However, this method requires expensive consumables, is limited to the determination of total carbon, and is limited in the number of samples that can be processed (~100/d). With increased interest in soil C sequestration, faster methods are needed. Hence the interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared ranges using either proximal or remote sensing. These methods can analyze more samples (2 to 3X/d) or huge areas (imagery) and measure multiple analytes simultaneously, but they require calibrations relating spectral and reference data and have specific problems; for example, remote sensing is capable of scanning entire watersheds, thus reducing the sampling needed, but is limited to the surface layer of tilled soils and by the difficulty of obtaining proper calibration reference values. The objective of this discussion is to summarize the present state of spectroscopic methods for soil C determination.
NASA Astrophysics Data System (ADS)
Banegas, Frederic; Michelucci, Dominique; Roelens, Marc; Jaeger, Marc
1999-05-01
We present a robust method for automatically constructing an ellipsoidal skeleton (e-skeleton) from a set of 3D points taken from NMR or TDM images. To ensure stability and accuracy, all points of the objects are taken into account, including the inner ones, which differs from existing techniques. This skeleton is primarily useful for object characterization, for comparisons between various measurements, and as a basis for deformable models. It also provides a good initial guess for surface reconstruction algorithms. As the output of the entire process, we obtain an analytical description of the chosen entity, semantically zoomable (local features only or reconstructed surfaces), with any level of detail (LOD) by discretization step control, in voxel or polygon format. This capability allows us to handle objects at interactive frame rates once the e-skeleton is computed. Each e-skeleton is stored as a multiscale CSG implicit tree.
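A single ellipsoidal primitive of such a skeleton can be fitted from the second-order moments of the voxel points it covers; the sketch below does this for a synthetic, uniformly filled ellipsoid. The sqrt(5) scaling assumes a uniform solid ellipsoid (variance a^2/5 along a semi-axis of length a); the hierarchical splitting into multiple primitives and the CSG tree of the paper are not reproduced.

```python
import numpy as np

# Fit one ellipsoidal primitive to a voxel point cloud from its second-order moments.
# Assumes the points uniformly fill the ellipsoid; multiscale/CSG aspects omitted.

def fit_ellipsoid(points):
    """points: (N, 3) array of voxel coordinates belonging to one object (or part)."""
    center = points.mean(axis=0)
    cov = np.cov(points - center, rowvar=False)
    variances, axes = np.linalg.eigh(cov)            # principal directions and variances
    radii = np.sqrt(5.0 * variances)                 # semi-axes under the uniform-fill assumption
    return center, axes, radii

# Synthetic test: points uniformly filling an ellipsoid with semi-axes (3, 2, 1)
rng = np.random.default_rng(1)
directions = rng.normal(size=(5000, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
r = rng.uniform(0.0, 1.0, (5000, 1)) ** (1.0 / 3.0)  # uniform radial filling of the unit ball
points = directions * r * np.array([3.0, 2.0, 1.0]) + np.array([10.0, 5.0, 2.0])

center, axes, radii = fit_ellipsoid(points)
print("center ~", np.round(center, 2))
print("semi-axes ~", np.round(np.sort(radii)[::-1], 2))   # expect roughly (3, 2, 1)
```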
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation, followed by stochastic modification of the adapted deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
Two-dimensional numerical simulation of a Stirling engine heat exchanger
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir B.; Tew, Roy C.; Dudenhoefer, James E.
1989-01-01
The first phase of an effort to develop multidimensional models of Stirling engine components is described; the ultimate goal is to model an entire engine working space. More specifically, parallel plate and tubular heat exchanger models with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects) are described. The model assumes laminar, incompressible flow with constant thermophysical properties. In addition, a constant axial temperature gradient is imposed. The governing equations describing the model were solved using the Crank-Nicolson finite-difference scheme. Model predictions were compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement was obtained between the model predictions and the analytical solutions available for both flow in circular tubes and flow between parallel plates. Also, the heat transfer computational results are in good agreement with the analytical heat transfer results for parallel plates.
NASA Astrophysics Data System (ADS)
Panigrahi, Suraj Kumar; Mishra, Ashok Kumar
2018-02-01
White light excitation fluorescence (WLEF) is known to possess analytical advantages in terms of enhanced sensitivity and facile capture of the entire fluorescence spectral signature of multi-component fluorescence systems. Using the zero-order diffraction of the grating monochromator on the excitation side of a commercial spectrofluorimeter, it has been shown that WLEF spectral measurements can be conveniently carried out. For multi-fluorophoric analyte systems such as (i) drugs and vitamins spiked into a urine sample, (ii) adulteration of extra virgin olive oil with olive pomace oil and (iii) a mixture of fabric dyes, a significant enhancement of measurement sensitivity was observed. The total fluorescence spectral response could be conveniently analysed using PLS2 regression. This work demonstrates the ease with which a conventional fluorimeter can be used for WLEF measurements.
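The PLS2 step can be sketched with scikit-learn: a single model maps full spectra to the concentrations of several fluorophores at once. The synthetic Gaussian emission bands and noise level below are assumptions for illustration, not the measured WLEF data from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# PLS2 sketch: one regression model maps full spectra (X) to several analyte
# concentrations (Y) simultaneously. Spectra are synthetic Gaussian bands plus noise.

rng = np.random.default_rng(0)
wl = np.linspace(300, 700, 400)                                  # wavelength grid, nm
bands = np.vstack([np.exp(-((wl - c) / 40.0) ** 2) for c in (380, 460, 560)])

Y = rng.uniform(0, 1, size=(60, 3))                              # "true" concentrations, 60 mixtures
X = Y @ bands + rng.normal(0, 0.01, (60, wl.size))               # mixture spectra with noise

pls = PLSRegression(n_components=3)
pls.fit(X, Y)

Y_new = np.array([[0.2, 0.7, 0.4]])
x_new = Y_new @ bands + rng.normal(0, 0.01, (1, wl.size))
print("predicted concentrations:", np.round(pls.predict(x_new), 2))
```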
Diffraction and microscopy with attosecond electron pulse trains
NASA Astrophysics Data System (ADS)
Morimoto, Yuya; Baum, Peter
2018-03-01
Attosecond spectroscopy [1-7] can resolve electronic processes directly in time, but a movie-like space-time recording is impeded by the too long wavelength (~100 times larger than atomic distances) or the source-sample entanglement in re-collision techniques [8-11]. Here we advance attosecond metrology to picometre wavelength and sub-atomic resolution by using free-space electrons instead of higher-harmonic photons [1-7] or re-colliding wavepackets [8-11]. A beam of 70-keV electrons at 4.5-pm de Broglie wavelength is modulated by the electric field of laser cycles into a sequence of electron pulses with sub-optical-cycle duration. Time-resolved diffraction from crystalline silicon reveals a < 10-as delay of Bragg emission and demonstrates the possibility of analytic attosecond-ångström diffraction. Real-space electron microscopy visualizes with sub-light-cycle resolution how an optical wave propagates in space and time. This unification of attosecond science with electron microscopy and diffraction enables space-time imaging of light-driven processes in the entire range of sample morphologies that electron microscopy can access.
Use Hierarchical Storage and Analysis to Exploit Intrinsic Parallelism
NASA Astrophysics Data System (ADS)
Zender, C. S.; Wang, W.; Vicente, P.
2013-12-01
Big Data is an ugly name for the scientific opportunities and challenges created by the growing wealth of geoscience data. How to weave large, disparate datasets together to best reveal their underlying properties, to exploit their strengths and minimize their weaknesses, to continually aggregate more information than the world knew yesterday and less than we will learn tomorrow? Data analytics techniques (statistics, data mining, machine learning, etc.) can accelerate pattern recognition and discovery. However, often researchers must, prior to analysis, organize multiple related datasets into a coherent framework. Hierarchical organization permits an entire dataset to be stored in nested groups that reflect its intrinsic relationships and similarities. Hierarchical data can be simpler and faster to analyze by coding operators to automatically parallelize processes over isomorphic storage units, i.e., groups. The newest generation of netCDF Operators (NCO) embodies this hierarchical approach, while still supporting traditional analysis approaches. We will use NCO to demonstrate the trade-offs involved in processing a prototypical Big Data application (analysis of CMIP5 datasets) using hierarchical and traditional analysis approaches.
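The group-per-dataset idea can be sketched with the netCDF4 Python bindings rather than NCO itself: each model gets its own group with identical internal structure, and a single loop applies the same reduction to every group (the loop body could equally be dispatched in parallel). The model names and synthetic data are assumptions for illustration; NCO's group-aware operators provide the analogous subsetting and averaging at the command line.

```python
import numpy as np
from netCDF4 import Dataset

# Hierarchical storage sketch: one group per (hypothetical) CMIP5-style model, with
# the same internal structure, so one loop applies the same operator to every group.

with Dataset("ensemble.nc", "w") as nc:
    for model in ("modelA", "modelB", "modelC"):           # hypothetical model names
        g = nc.createGroup(model)
        g.createDimension("time", 120)
        v = g.createVariable("tas", "f4", ("time",))
        v[:] = 288.0 + np.random.randn(120)                 # synthetic surface temperature

with Dataset("ensemble.nc") as nc:
    for name, g in nc.groups.items():                       # same reduction per group
        print(name, "time-mean tas:", float(g["tas"][:].mean()))
```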
Statistical Physics of Population Genetics in the Low Population Size Limit
NASA Astrophysics Data System (ADS)
Atwal, Gurinder
The understanding of evolutionary processes lends itself naturally to theory and computation, and the entire field of population genetics has benefited greatly from the influx of methods from applied mathematics for decades. However, in spite of all this effort, there are a number of key dynamical models of evolution that have resisted analytical treatment. In addition, modern DNA sequencing technologies have magnified the amount of genetic data available, revealing an excess of rare genetic variants in human genomes, challenging the predictions of conventional theory. Here I will show that methods from statistical physics can be used to model the distribution of genetic variants, incorporating selection and spatial degrees of freedom. In particular, a functional path-integral formulation of the Wright-Fisher process maps exactly to the dynamics of a particle in an effective potential, beyond the mean field approximation. In the small population size limit, the dynamics are dominated by instanton-like solutions which determine the probability of fixation in short timescales. These results are directly relevant for understanding the unusual genetic variant distribution at moving frontiers of populations.
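A minimal discrete Wright-Fisher sketch shows the quantity such treatments target: the fixation probability of a new mutant under selection, here estimated by direct simulation and compared with the standard diffusion-theory (Kimura) approximation. Population size and selection coefficient are illustrative; this is the plain discrete model, not the path-integral formulation described in the abstract.

```python
import numpy as np

# Wright-Fisher fixation probability of a single new mutant with selective advantage s
# in a haploid population of size N, by direct simulation. Parameters are illustrative.

def fixation_probability(N, s, trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        k = 1                                             # one mutant copy initially
        while 0 < k < N:
            p = k * (1 + s) / (k * (1 + s) + (N - k))     # selection-weighted frequency
            k = rng.binomial(N, p)                        # resample N offspring
        fixed += (k == N)
    return fixed / trials

N, s = 50, 0.05
est = fixation_probability(N, s)
kimura = (1 - np.exp(-2 * s)) / (1 - np.exp(-2 * N * s))  # diffusion-theory approximation
print(f"simulated {est:.3f} vs Kimura approximation {kimura:.3f}")
```

In the small-N regime discussed in the abstract, such absorption events dominate the dynamics, which is precisely where the instanton-like solutions of the path-integral treatment become the controlling objects.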
Thermodynamic modeling of the no-vent fill methodology for transferring cryogens in low gravity
NASA Technical Reports Server (NTRS)
Chato, David J.
1988-01-01
The filling of tanks with cryogens in the low-gravity environment of space poses many technical challenges. Chief among these is the inability to vent only vapor from the tank as the filling proceeds. As a potential solution to this problem, the NASA Lewis Research Center is researching a technique known as No-Vent Fill. This technology potentially has broad application; the focus here is the fueling of space-based Orbital Transfer Vehicles. The fundamental thermodynamics of the No-Vent Fill process are described and used to develop an analytical model of the process. The model is then used to conduct a parametric investigation of the key parameters: initial tank wall temperature, liquid-vapor interface heat transfer rate, liquid inflow rate, and inflowing liquid temperature. The inflowing liquid temperature and the liquid-vapor interface heat transfer rate seem to be the most significant since they influence the entire fill process. The initial tank wall temperature must be sufficiently low to prevent a rapid pressure rise during the initial liquid flashing stage, but then becomes less significant.
NASA Astrophysics Data System (ADS)
Shabanov, S. V.; Gornushkin, I. B.
2018-01-01
Data processing in calibration-free laser-induced breakdown spectroscopy (LIBS) is usually based on the solution of the radiative transfer equation along a particular line of sight through a plasma plume. The LIBS data processing is generalized to the case when the spectral data are collected from large portions of the plume. It is shown that by adjusting the optical depth and width of the lines, the spectra obtained by collecting light from an entire spherical homogeneous plasma plume can be least-square fitted to a spectrum obtained by collecting the radiation just along a plume diameter with a relative error of 10^-11 or smaller (for the optical depth not exceeding 0.3), so that a mismatch of geometries of data processing and data collection cannot be detected by fitting. Despite the existence of such a perfect least-square fit, the errors in the line optical depth and width found by a data processing with an inappropriate geometry can be large. It is shown with analytic and numerical examples that the corresponding relative errors in the found elemental number densities and concentrations may be as high as 50% and 20%, respectively. Save for a few exceptions that were found, these errors are impossible to eliminate from LIBS data processing unless a proper solution of the radiative transfer equation corresponding to the ray tracing in the spectral data collection is used.
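For context, the building block being fitted is the formal solution of the radiative transfer equation for a homogeneous plasma layer viewed along a chord (standard textbook form; notation assumed, not taken from the paper):

```latex
I_{\lambda} = B_{\lambda}(T)\,\bigl[1 - e^{-\tau_{\lambda}}\bigr],
\qquad \tau_{\lambda} = \kappa_{\lambda}\, L ,
```

so the optical depth, and hence the emergent line shape, depends on the chord length L; collecting light from the whole spherical plume averages over chords of different lengths, which is the geometric mismatch whose consequences are quantified above.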
Neurocognitive inefficacy of the strategy process.
Klein, Harold E; D'Esposito, Mark
2007-11-01
The most widely used (and taught) protocols for strategic analysis, Strengths, Weaknesses, Opportunities, and Threats (SWOT) and Porter's (1980) Five Force Framework for industry analysis, have been found to be insufficient as stimuli for strategy creation or even as a basis for further strategy development. We approach this problem from a neurocognitive perspective. We see profound incompatibilities between the cognitive process (deductive reasoning) channeled into the collective mind of strategists within the formal planning process through its tools of strategic analysis (i.e., rational technologies) and the essentially inductive reasoning process actually needed to address ill-defined, complex strategic situations. Thus, strategic analysis protocols that may appear to be and, indeed, are entirely rational and logical are not interpretable as such at the neuronal substrate level where thinking takes place. The analytical structure (or propositional representation) of these tools results in a mental dead end, the phenomenon known in cognitive psychology as functional fixedness. The difficulty lies with the inability of the brain to make out meaningful (i.e., strategy-provoking) stimuli from the mental images (or depictive representations) generated by strategic analysis tools. We propose decreasing dependence on these tools and conducting further research employing brain imaging technology to explore complex data handling protocols with richer mental representation and greater potential for strategy creation.
Surface enhanced Raman scattering spectroscopic waveguide
Lascola, Robert J; McWhorter, Christopher S; Murph, Simona H
2015-04-14
A waveguide for use with surface-enhanced Raman spectroscopy is provided that includes a base structure with an inner surface that defines a cavity and that has an axis. Multiple molecules of an analyte are capable of being located within the cavity at the same time. A base layer is located on the inner surface of the base structure. The base layer extends in an axial direction along an axial length of an excitation section. Nanoparticles are carried by the base layer and may be uniformly distributed along the entire axial length of the excitation section. A flow cell for introducing analyte and excitation light into the waveguide and a method of applying nanoparticles may also be provided.
Modeling and Analysis of Large Amplitude Flight Maneuvers
NASA Technical Reports Server (NTRS)
Anderson, Mark R.
2004-01-01
Analytical methods for stability analysis of large amplitude aircraft motion have been slow to develop because many nonlinear system stability assessment methods are restricted to a state-space dimension of less than three. The proffered approach is to create regional cell-to-cell maps for strategically located two-dimensional subspaces within the higher-dimensional model statespace. These regional solutions capture nonlinear behavior better than linearized point solutions. They also avoid the computational difficulties that emerge when attempting to create a cell map for the entire state-space. Example stability results are presented for a general aviation aircraft and a micro-aerial vehicle configuration. The analytical results are consistent with characteristics that were discovered during previous flight-testing.
RUPTURES IN THE ANALYTIC SETTING AND DISTURBANCES IN THE TRANSFORMATIONAL FIELD OF DREAMS.
Brown, Lawrence J
2015-10-01
This paper explores some implications of Bleger's (1967, 2013) concept of the analytic situation, which he views as comprising the analytic setting and the analytic process. The author discusses Bleger's idea of the analytic setting as the depositary for projected painful aspects in either the analyst or patient or both-affects that are then rendered as nonprocess. In contrast, the contents of the analytic process are subject to an incessant process of transformation (Green 2005). The author goes on to enumerate various components of the analytic setting: the nonhuman, object relational, and the analyst's "person" (including mental functioning). An extended clinical vignette is offered as an illustration. © 2015 The Psychoanalytic Quarterly, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohimer, J.P.
The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.
Characterization and manufacture of braided composites for large commercial aircraft structures
NASA Technical Reports Server (NTRS)
Fedro, Mark J.; Willden, Kurtis
1992-01-01
Braided composite materials, one of the advanced material forms which is under investigation in Boeing's ATCAS program, have been recognized as a potential cost-effective material form for fuselage structural elements. Consequently, there is a strong need for more knowledge in the design, manufacture, test, and analysis of textile structural composites. The overall objective of this work is to advance braided composite technology towards applications to a large commercial transport fuselage. This paper summarizes the mechanics of materials and manufacturing demonstration results which have been obtained in order to acquire an understanding of how braided composites can be applied to a commercial fuselage. Textile composites consisting of 1D, 2D triaxial, and 3D braid patterns with thermoplastic and two RTM resin systems were investigated. The structural performance of braided composites was evaluated through an extensive mechanical test program. Analytical methods were also developed and applied to predict the following: internal fiber architectures, stiffnesses, fiber stresses, failure mechanisms, notch effects, and the entire history of failure of the braided composites specimens. The applicability of braided composites to a commercial transport fuselage was further assessed through a manufacturing demonstration. Three foot fuselage circumferential hoop frames were manufactured to demonstrate the feasibility of consistently producing high quality braided/RTM composite primary structures. The manufacturing issues (tooling requirements, processing requirements, and process/quality control) addressed during the demonstration are summarized. The manufacturing demonstration in conjunction with the mechanical test results and developed analytical methods increased the confidence in the ATCAS approach to the design, manufacture, test, and analysis of braided composites.
An integrated paper-based sample-to-answer biosensor for nucleic acid testing at the point of care.
Choi, Jane Ru; Hu, Jie; Tang, Ruihua; Gong, Yan; Feng, Shangsheng; Ren, Hui; Wen, Ting; Li, XiuJun; Wan Abas, Wan Abu Bakar; Pingguan-Murphy, Belinda; Xu, Feng
2016-02-07
With advances in point-of-care testing (POCT), lateral flow assays (LFAs) have been explored for nucleic acid detection. However, biological samples generally contain complex compositions and low amounts of target nucleic acids, and currently require laborious off-chip nucleic acid extraction and amplification processes (e.g., tube-based extraction and polymerase chain reaction (PCR)) prior to detection. To the best of our knowledge, even though the integration of DNA extraction and amplification into a paper-based biosensor has been reported, a combination of LFA with the aforementioned steps for simple colorimetric readout has not yet been demonstrated. Here, we demonstrate for the first time an integrated paper-based biosensor incorporating nucleic acid extraction, amplification and visual detection or quantification using a smartphone. A handheld battery-powered heating device was specially developed for nucleic acid amplification in POC settings, which is coupled with this simple assay for rapid target detection. The biosensor can successfully detect Escherichia coli (as a model analyte) in spiked drinking water, milk, blood, and spinach with a detection limit of as low as 10-1000 CFU mL^-1, and Streptococcus pneumoniae in clinical blood samples, highlighting its potential use in medical diagnostics, food safety analysis and environmental monitoring. As compared to the lengthy conventional assay, which requires more than 5 hours for the entire sample-to-answer process, it takes about 1 hour for our integrated biosensor. The integrated biosensor holds great potential for detection of various target analytes for wide applications in the near future.
Marinova, Mariela; Artusi, Carlo; Brugnolo, Laura; Antonelli, Giorgia; Zaninotto, Martina; Plebani, Mario
2013-11-01
Although, due to its high specificity and sensitivity, LC-MS/MS is an efficient technique for the routine determination of immunosuppressants in whole blood, it involves time-consuming manual sample preparation. The aim of the present study was therefore to develop an automated sample-preparation protocol for the quantification of sirolimus, everolimus and tacrolimus by LC-MS/MS using a liquid handling platform. Six-level commercially available blood calibrators were used for assay development, while four quality control materials and three blood samples from patients under immunosuppressant treatment were employed for the evaluation of imprecision. Barcode reading, sample re-suspension, transfer of whole blood samples into 96-well plates, addition of internal standard solution, mixing, and protein precipitation were performed with a liquid handling platform. After plate filtration, the deproteinised supernatants were submitted for SPE on-line. The only manual steps in the entire process were de-capping of the tubes, and transfer of the well plates to the HPLC autosampler. Calibration curves were linear throughout the selected ranges. The imprecision and accuracy data for all analytes were highly satisfactory. The agreement between the results obtained with manual and those obtained with automated sample preparation was optimal (n=390, r=0.96). In daily routine (100 patient samples) the typical overall total turnaround time was less than 6h. Our findings indicate that the proposed analytical system is suitable for routine analysis, since it is straightforward and precise. Furthermore, it incurs less manual workload and less risk of error in the quantification of whole blood immunosuppressant concentrations than conventional methods. © 2013.
Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai
2013-01-01
This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (Inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
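As a pointer to how the first of these approximations works in practice, the sketch below propagates input uncertainty through a toy RUL function with the first-order second-moment (FOSM) method. The degradation model, parameter values, and covariance are illustrative assumptions, not the paper's lithium-ion battery state-space model.

```python
import numpy as np

# Toy RUL model: time until a capacity margin x0, degrading at rate r, hits a threshold.
# This stands in for the paper's battery state-space model; values are illustrative.
def rul(theta):
    x0, r = theta
    threshold = 0.2
    return (x0 - threshold) / r

mean = np.array([1.0, 0.01])            # means of the uncertain inputs
cov = np.diag([0.05**2, 0.002**2])      # assumed input covariance

# FOSM: linearize RUL about the input means and propagate the covariance analytically.
eps = 1e-6
grad = np.array([
    (rul(mean + eps * np.eye(2)[i]) - rul(mean - eps * np.eye(2)[i])) / (2 * eps)
    for i in range(2)
])
rul_mean = rul(mean)
rul_var = grad @ cov @ grad

print(f"FOSM estimate: RUL = {rul_mean:.1f} +/- {np.sqrt(rul_var):.1f} (1-sigma)")
```

FORM and inverse FORM refine this by locating the most probable point on the limit-state surface instead of linearizing at the mean.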
Experimental Results from the Thermal Energy Storage-2 (TES-2) Flight Experiment
NASA Technical Reports Server (NTRS)
Tolbert, Carol
2000-01-01
Thermal Energy Storage-2 (TES-2) is a flight experiment that flew on the Space Shuttle Endeavour (STS-72) in January 1996. TES-2 originally flew with TES-1 as part of the OAST-2 Hitchhiker payload on the Space Shuttle Columbia (STS-62) in early 1994. The two experiments, TES-1 and TES-2, were identical except for the fluoride salts to be characterized. TES-1 provided data on lithium fluoride (LiF); TES-2 provided data on a fluoride eutectic (LiF/CaF2). Each experiment was a complex autonomous payload in a Get-Away-Special payload canister. TES-1 operated flawlessly for 22 hr. Results were reported in a paper entitled, Effect of Microgravity on Materials Undergoing Melting and Freezing-The TES Experiment, by David Namkoong et al. A software failure in TES-2 caused its shutdown after 4 sec of operation. TES-1 and -2 were the first experiments in a four-experiment suite designed to provide data for understanding the long-duration microgravity behavior of thermal energy storage salts that undergo repeated melting and freezing. Such data have never been obtained before and have direct application for the development of space-based solar dynamic (SD) power systems. These power systems will store energy in a thermal energy salt such as lithium fluoride or a eutectic of lithium fluoride/calcium difluoride. The stored energy is extracted during the shade portion of the orbit. This enables the solar dynamic power system to provide constant electrical power over the entire orbit. Analytical computer codes were developed for predicting performance of a space-based solar dynamic power system. Experimental verification of the analytical predictions was needed prior to using the analytical results for future space power design applications. The four TES flight experiments were to be used to obtain the needed experimental data. This paper will address the flight results from the first and second experiments, TES-1 and -2, in comparison to the predicted results from the Thermal Energy Storage Simulation (TESSIM) analytical computer code. An analysis of the TES-2 data was conducted by Cleveland State University professor Mounir Ibrahim. TESSIM validation was based on two types of results: temperature history of various points on the containment vessel and TES material distribution within the vessel upon return from flight. The TESSIM prediction showed close comparison with the flight data. Distribution of the TES material within the vessel was obtained by a tomography imaging process. The frozen TES material was concentrated toward the colder end of the canister. The TESSIM prediction indicated a similar pattern. With agreement between TESSIM and the flight data, a computerized representation was produced to show the movement and behavior of the void during the entire melting and freezing cycles.
Analytic thinking promotes religious disbelief.
Gervais, Will M; Norenzayan, Ara
2012-04-27
Scientific interest in the cognitive underpinnings of religious belief has grown in recent years. However, to date, little experimental research has focused on the cognitive processes that may promote religious disbelief. The present studies apply a dual-process model of cognitive processing to this problem, testing the hypothesis that analytic processing promotes religious disbelief. Individual differences in the tendency to analytically override initially flawed intuitions in reasoning were associated with increased religious disbelief. Four additional experiments provided evidence of causation, as subtle manipulations known to trigger analytic processing also encouraged religious disbelief. Combined, these studies indicate that analytic processing is one factor (presumably among several) that promotes religious disbelief. Although these findings do not speak directly to conversations about the inherent rationality, value, or truth of religious beliefs, they illuminate one cognitive factor that may influence such discussions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barua, Bipul; Mohanty, Subhasish; Listwan, Joseph T.
2017-12-05
In this paper, a cyclic-plasticity based fully mechanistic fatigue modeling approach is presented. This is based on the time-dependent stress-strain evolution of the material over the entire fatigue life rather than just on the end-of-life information typically used for empirical S-N curve based fatigue evaluation approaches. Previously we presented constant amplitude fatigue test based related material models for 316 SS base, 508 LAS base and 316 SS-316 SS weld metal, which are used in nuclear reactor components such as pressure vessels, nozzles, and surge line pipes. However, we found that constant amplitude fatigue data based models have limitations in capturing the stress-strain evolution under arbitrary fatigue loading. To address the above-mentioned limitation, in this paper we present a more advanced approach that can be used for modeling the cyclic stress-strain evolution and fatigue life not only under constant amplitude but also under any arbitrary (random/variable) fatigue loading. The related material model and analytical model results are presented for 316 SS base metal. Two methodologies (either based on time/cycle or based on accumulated plastic strain energy) to track the material parameters at a given time/cycle are discussed and associated analytical model results are presented. From the material model and analytical cyclic plasticity model results, it is found that the proposed cyclic plasticity model can predict all the important stages of material behavior during the entire fatigue life of the specimens with more than 90% accuracy.
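One of the two tracking methodologies mentioned in the record above uses accumulated plastic strain energy. A minimal sketch of that bookkeeping is shown below: the plastic strain energy density per cycle is the area enclosed by the stress-strain hysteresis loop, which can then be accumulated over the loading history. The synthetic loop is illustrative only and is not the 316 SS data or the paper's constitutive model.

```python
import numpy as np

def loop_energy(strain, stress):
    """Area enclosed by one closed stress-strain hysteresis loop
    (plastic strain energy density per cycle), via the shoelace formula."""
    return 0.5 * abs(np.sum(stress * np.roll(strain, -1) - strain * np.roll(stress, -1)))

# Synthetic closed loop standing in for one measured fatigue cycle.
t = np.linspace(0, 2 * np.pi, 500, endpoint=False)
strain = 0.005 * np.sin(t)                 # total strain amplitude 0.5%
stress = 300e6 * np.sin(t - 0.3)           # phase lag produces a hysteresis loop (Pa)

w_cycle = loop_energy(strain, stress)              # J/m^3 per cycle
accumulated = np.cumsum(np.full(1000, w_cycle))    # accumulation over 1000 identical cycles
print(f"per cycle: {w_cycle:.3e} J/m^3, after 1000 cycles: {accumulated[-1]:.3e} J/m^3")
```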
ERIC Educational Resources Information Center
Ginway, M. Elizabeth
2013-01-01
This study focuses on some of the classical features of Rubem Fonseca's "A grande arte" (1983) in order to emphasize the puzzle-solving tradition of the detective novel that is embedded within Fonseca's crime thriller, producing a work that does not entirely fit into traditional divisions of detective, hardboiled, or crime…
The analyst's participation in the analytic process.
Levine, H B
1994-08-01
The analyst's moment-to-moment participation in the analytic process is inevitably and simultaneously determined by at least three sets of considerations. These are: (1) the application of proper analytic technique; (2) the analyst's personally-motivated responses to the patient and/or the analysis; (3) the analyst's use of him or herself to actualise, via fantasy, feeling or action, some aspect of the patient's conflicts, fantasies or internal object relationships. This formulation has relevance to our view of actualisation and enactment in the analytic process and to our understanding of a series of related issues that are fundamental to our theory of technique. These include the dialectical relationships that exist between insight and action, interpretation and suggestion, empathy and countertransference, and abstinence and gratification. In raising these issues, I do not seek to encourage or endorse wild analysis, the attempt to supply patients with 'corrective emotional experiences' or a rationalisation for acting out one's countertransferences. Rather, it is my hope that if we can better appreciate and describe these important dimensions of the analytic encounter, we can be better prepared to recognise, understand and interpret the continual streams of actualisation and enactment that are embedded in the analytic process. A deeper appreciation of the nature of the analyst's participation in the analytic process and the dimensions of the analytic process to which that participation gives rise may offer us a limited, although important, safeguard against analytic impasse.
González, Natalia; Grünhut, Marcos; Šrámková, Ivana; Lista, Adriana G; Horstkotte, Burkhard; Solich, Petr; Sklenářová, Hana; Acebal, Carolina C
2018-02-01
A fully automated spectrophotometric method based on flow-batch analysis has been developed for the determination of clenbuterol, including an on-line solid phase extraction using a molecularly imprinted polymer (MIP) as the sorbent. The molecularly imprinted solid phase extraction (MISPE) procedure allowed analyte extraction from complex matrices at low concentration levels and with high selectivity towards the analyte. The MISPE procedure was performed using a commercial MIP cartridge that was introduced into a guard column holder and integrated in the analyzer system. Optimized parameters included the volume of the sample, the type and volume of the conditioning and washing solutions, and the type and volume of the eluent. Quantification of clenbuterol was carried out by spectrophotometry after in-system post-elution analyte derivatization based on azo-coupling using N-(1-naphthyl)ethylenediamine as the coupling agent to yield a red-colored compound with maximum absorbance at 500 nm. Both the chromogenic reaction and spectrophotometric detection were performed in a lab-made flow-batch mixing chamber that replaced the cuvette holder of the spectrophotometer. The calibration curve was linear in the 0.075-0.500 mg L⁻¹ range with a correlation coefficient of 0.998. The precision of the proposed method was evaluated in terms of the relative standard deviation, obtaining 1.1% and 3.0% for intra-day precision and inter-day precision, respectively. The detection limit was 0.021 mg L⁻¹ and the sample throughput for the entire process was 3.4 h⁻¹. The proposed method was applied for the determination of clenbuterol in human urine and milk substitute samples, obtaining recovery values within a range of 94.0-100.0%. Copyright © 2017 Elsevier B.V. All rights reserved.
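For readers who want to reproduce the calibration arithmetic, the sketch below fits a straight line over the reported 0.075-0.500 mg L⁻¹ range and estimates a detection limit with the common 3.3·s/slope convention. The absorbance values are hypothetical, and the paper's own LOD definition may differ.

```python
import numpy as np

# Hypothetical calibration standards spanning the reported linear range (mg/L).
conc = np.array([0.075, 0.150, 0.250, 0.350, 0.500])
absorbance = np.array([0.031, 0.060, 0.101, 0.142, 0.201])

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
s_y = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))   # residual standard deviation
r = np.corrcoef(conc, absorbance)[0, 1]
lod = 3.3 * s_y / slope                                  # ICH-style estimate

print(f"slope={slope:.3f}, intercept={intercept:.4f}, r={r:.4f}, LOD~{lod:.3f} mg/L")

# Quantify an unknown sample from its measured absorbance.
unknown_abs = 0.085
print(f"unknown ~ {(unknown_abs - intercept) / slope:.3f} mg/L")
```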
Students' science process skill and analytical thinking ability in chemistry learning
NASA Astrophysics Data System (ADS)
Irwanto, Rohaeti, Eli; Widjajanti, Endang; Suyanta
2017-08-01
Science process skill and analytical thinking ability are needed in chemistry learning in the 21st century. Analytical thinking is related to science process skill, which is used by students to solve complex and unstructured problems. Thus, this research aims to determine the science process skill and analytical thinking ability of senior high school students in chemistry learning. The research was conducted in Tiga Maret Yogyakarta Senior High School, Indonesia, in the middle of the first semester of the 2015/2016 academic year, using the survey method. The survey involved 21 grade XI students as participants. Students were given a set of test questions consisting of 15 essay questions. The result indicated that the science process skill and analytical thinking ability were relatively low, i.e., 30.67%. Therefore, teachers need to improve the students' cognitive and psychomotor domains effectively in the learning process.
(Bio)Sensing Using Nanoparticle Arrays: On the Effect of Analyte Transport on Sensitivity.
Lynn, N Scott; Homola, Jiří
2016-12-20
There has recently been an extensive amount of work regarding the development of optical, electrical, and mechanical (bio)sensors employing planar arrays of surface-bound nanoparticles. The sensor output for these systems is dependent on the rate at which analyte is transported to, and interacts with, each nanoparticle in the array. There has so far been little discussion on the relationship between the design parameters of an array and the interplay of convection, diffusion, and reaction. Moreover, current methods providing such information require extensive computational simulation. Here we demonstrate that the rate of analyte transport to a nanoparticle array can be quantified analytically. We show that such rates are bounded by both the rate to a single nanoparticle and that to a planar surface (of equivalent size to the array), with the specific rate determined by the fill fraction: the ratio of the total surface area used for biomolecular capture to the entire sensing area. We characterize analyte transport to arrays with respect to changes in numerous parameters relevant to experiment, including variation of the nanoparticle shape and size, packing density, flow conditions, and analyte diffusivity. We also explore how analyte capture is dependent on the kinetic parameters related to an affinity-based biosensor, and furthermore, we classify the conditions under which the array might be diffusion- or reaction-limited. The results obtained herein are applicable toward the design and optimization of all (bio)sensors based on nanoparticle arrays.
A new method for constructing analytic elements for groundwater flow.
NASA Astrophysics Data System (ADS)
Strack, O. D.
2007-12-01
The analytic element method is based upon the superposition of analytic functions that are defined throughout the infinite domain, and can be used to meet a variety of boundary conditions. Analytic elements have been used successfully for a number of problems, mainly dealing with the Poisson equation (see, e.g., Theory and Applications of the Analytic Element Method, Reviews of Geophysics, 41(2), 1005, 2003, by O.D.L. Strack). The majority of these analytic elements consist of functions that exhibit jumps along lines or curves. Such linear analytic elements have been developed also for other partial differential equations, e.g., the modified Helmholtz equation and the heat equation, and were constructed by integrating elementary solutions, the point sink and the point doublet, along a line. This approach is limiting for two reasons. First, the existence of the elementary solutions is required, and, second, the integration tends to limit the range of solutions that can be obtained. We present a procedure for generating analytic elements that requires merely the existence of a harmonic function with the desired properties; such functions exist in abundance. The procedure to be presented is used to generalize this harmonic function in such a way that the resulting expression satisfies the applicable differential equation. The approach will be applied, along with numerical examples, for the modified Helmholtz equation and for the heat equation, while it is noted that the method is in no way restricted to these equations. The procedure is carried out entirely in terms of complex variables, using Wirtinger calculus.
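For orientation, the snippet below shows the superposition idea in its simplest Laplace-equation form: complex discharge potentials of a uniform flow and two wells are added to give the combined potential and stream function. It is the textbook starting point of the analytic element method, not the new construction procedure described in the abstract; discharges, locations, and the sign convention are illustrative assumptions.

```python
import numpy as np

# Classic analytic-element superposition for steady 2-D groundwater flow (Laplace case):
# Omega(z) = Phi + i*Psi is a sum of a uniform-flow term and well terms.
def omega(z, q0=1e-4, wells=((50 + 40j, 3e-3), (120 + 80j, 1.5e-3))):
    """Uniform flow of strength q0 in +x plus wells of discharge Q at zw (one common convention)."""
    w = -q0 * z
    for zw, Q in wells:
        w += Q / (2 * np.pi) * np.log(z - zw)
    return w

# Evaluate the discharge potential (real part) and stream function (imaginary part) on a coarse grid.
x, y = np.meshgrid(np.linspace(0, 200, 5), np.linspace(0, 150, 4))
z = x + 1j * y
phi, psi = omega(z).real, omega(z).imag
print(np.round(phi, 4))
```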
PepsNMR for 1H NMR metabolomic data pre-processing.
Martin, Manon; Legat, Benoît; Leenders, Justine; Vanwinsberghe, Julien; Rousseau, Réjane; Boulanger, Bruno; Eilers, Paul H C; De Tullio, Pascal; Govaerts, Bernadette
2018-08-17
In the analysis of biological samples, control over experimental design and data acquisition procedures alone cannot ensure well-conditioned 1H NMR spectra with maximal information recovery for data analysis. A third major element affects the accuracy and robustness of results: the data pre-processing/pre-treatment, to which not enough attention is usually devoted, in particular in metabolomic studies. The usual approach is to use proprietary software provided by the analytical instruments' manufacturers to conduct the entire pre-processing strategy. This widespread practice has a number of advantages, such as a user-friendly interface with graphical facilities, but it involves non-negligible drawbacks: a lack of methodological information and automation, a dependency on subjective human choices, only standard processing possibilities and an absence of objective quality criteria to evaluate pre-processing quality. This paper introduces PepsNMR to meet these needs: an R package dedicated to the whole processing chain prior to multivariate data analysis, including, among other tools, solvent signal suppression, internal calibration, phase, baseline and misalignment corrections, bucketing and normalisation. Methodological aspects are discussed and the package is compared to the gold standard procedure with two metabolomic case studies. The use of PepsNMR on these data shows better information recovery and predictive power based on objective and quantitative quality criteria. Other key assets of the package are workflow processing speed, reproducibility, reporting and flexibility, graphical outputs and documented routines. Copyright © 2018 Elsevier B.V. All rights reserved.
An Analytical Hierarchy Process Model for the Evaluation of College Experimental Teaching Quality
ERIC Educational Resources Information Center
Yin, Qingli
2013-01-01
Taking into account the characteristics of college experimental teaching, through investigaton and analysis, evaluation indices and an Analytical Hierarchy Process (AHP) model of experimental teaching quality have been established following the analytical hierarchy process method, and the evaluation indices have been given reasonable weights. An…
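The weighting step that an AHP model relies on can be reproduced in a few lines: principal-eigenvector weights of a pairwise comparison matrix plus Saaty's consistency ratio. The comparison matrix below is hypothetical, not the indices or judgments used in the cited study.

```python
import numpy as np

# Hypothetical pairwise comparisons of three evaluation indices on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # principal-eigenvector weights

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)             # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index
print("weights:", np.round(weights, 3), " CR:", round(ci / ri, 3))
```

A consistency ratio below about 0.1 is usually taken to mean the pairwise judgments are acceptably consistent.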
2017-08-01
…of metallic additive manufacturing processes and show that combining experimental data with modelling and advanced data processing and analytics methods will accelerate that… geometries, we develop a methodology that couples experimental data and modelling to convert the scan paths into spatially resolved local thermal histories
Kokis, Judite V; Macpherson, Robyn; Toplak, Maggie E; West, Richard F; Stanovich, Keith E
2002-09-01
Developmental and individual differences in the tendency to favor analytic responses over heuristic responses were examined in children of two different ages (10- and 11-year-olds versus 13-year-olds), and of widely varying cognitive ability. Three tasks were examined that all required analytic processing to override heuristic processing: inductive reasoning, deductive reasoning under conditions of belief bias, and probabilistic reasoning. Significant increases in analytic responding with development were observed on the first two tasks. Cognitive ability was associated with analytic responding on all three tasks. Cognitive style measures such as actively open-minded thinking and need for cognition explained variance in analytic responding on the tasks after variance shared with cognitive ability had been controlled. The implications for dual-process theories of cognition and cognitive development are discussed.
De Neys, Wim
2006-06-01
Human reasoning has been shown to overly rely on intuitive, heuristic processing instead of a more demanding analytic inference process. Four experiments tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically. Participants solved conjunction fallacy problems and indicative and deontic selection tasks. Experiment 1 established that making correct analytic inferences demanded more processing time than did making heuristic inferences. Experiment 2 showed that burdening the executive resources with an attention-demanding secondary task decreased correct, analytic responding and boosted the rate of conjunction fallacies and indicative matching card selections. Results were replicated in Experiments 3 and 4 with a different secondary-task procedure. Involvement of executive resources for the deontic selection task was less clear. Findings validate basic processing assumptions of the dual-process framework and complete the correlational research programme of K. E. Stanovich and R. F. West (2000).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragan, Eric D; Goodall, John R
2014-01-01
Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective evaluation methods are needed to evaluate how well provenance tools support analysts' memory and communication of analytic processes. In this paper, we present several methods for the evaluation of process memory, and we discuss the advantages and limitations of each. We discuss methods for determining a baseline process for comparison, and we describe various methods that can be used to elicit process recall, step ordering, and time estimations. Additionally, we discuss methods for conducting quantitative and qualitative analyses of process memory. By organizing possible memory evaluation methods and providing a meta-analysis of the potential benefits and drawbacks of different approaches, this paper can inform study design and encourage objective evaluation of process memory and communication.
Rathore, Anurag Singh; Sobacke, S E; Kocot, T J; Morgan, D R; Dufield, R L; Mozier, N M
2003-08-21
Analyses of crude samples from biotechnology processes are often required in order to demonstrate that residual host cell impurities are reduced or eliminated during purification. In later stages of development, as the processes are further developed and finalized, there is a tremendous volume of testing required to confirm the absence of residual host cell proteins (HCP) and DNA. Analytical tests for these components are very challenging since (1) they may be present at levels that span a million-fold range, requiring substantial dilutions; (2) they are not a single component, often existing as fragments and a variety of structures; (3) they require high sensitivity for the final steps of the process; and (4) they are present in very complex matrices including other impurities, the product, buffers, salts and solvents. Due to the complex matrices and the variety of potential analytes, the methods of analysis are not truly quantitative for all species. Although these limitations are well known, the assays are still very much in demand since they are required for approval of new products. Methods for final products, described elsewhere, focus on approaches to achieve regulatory requirements. The study described herein presents the technical rationale for measuring the clearance of HCP and DNA throughout the entire bioprocess, from an Escherichia coli-derived expression system through purification. Three analytical assays, namely, reversed-phase high-performance liquid chromatography (RP-HPLC), enzyme-linked immunosorbent assay (ELISA), and Threshold Total DNA Assay, were utilized to quantify the protein product, HCP and DNA, respectively. Product quantification is often required for yield estimation and is useful since DNA and HCP results are best expressed as a ratio to product for calculation of relative purification factors. The recombinant E. coli were grown to express the protein of interest as insoluble inclusion bodies (IB) within the cells. The IB were isolated by repeated homogenization and centrifugation and the inclusion body slurry (IBS) was solubilized with urea. After refolding the product, the solution was loaded on several commonly used ion exchangers (CM, SP, DEAE, and Q). Product was eluted in a salt gradient mode and fractions were collected and analyzed for product, HCP and DNA. The IBS used for this study contained about 15 mg/ml product, 38 mg/ml HCP and 1.1 mg/ml DNA. Thus, the relative amounts of HCP and DNA in the IBS were excessive, and about 10^3 times greater than typical (because the cells and IB were not processed with the normal number of washing steps during isolation). This was of interest since similar samples may be encountered when working with non-inclusion body systems, such as periplasmic expressions, or in cases where the upstream unit operations under-perform in IB cleaning. This study describes the development of three robust methods that provide the essential process data needed. These findings are of general interest to other projects since applications of similar analytical technology may be used as a tool to develop processes, evaluate clearance of impurities, and produce a suitable product.
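The ratio-to-product bookkeeping mentioned in the abstract is simple to illustrate. The sketch below uses the reported IBS composition (15 mg/mL product, 38 mg/mL HCP, 1.1 mg/mL DNA) together with purely hypothetical post-ion-exchange pool values to compute ng-impurity-per-mg-product ratios and log reduction factors.

```python
import math

# Reported inclusion-body-slurry composition (per mL).
product_mg, hcp_mg, dna_mg = 15.0, 38.0, 1.1

# Hypothetical pooled ion-exchange eluate, for illustration only.
pool_product_mg, pool_hcp_ug, pool_dna_ng = 12.0, 150.0, 800.0

hcp_before = (hcp_mg * 1e6) / product_mg          # ng HCP per mg product
hcp_after = (pool_hcp_ug * 1e3) / pool_product_mg
dna_before = (dna_mg * 1e6) / product_mg          # ng DNA per mg product
dna_after = pool_dna_ng / pool_product_mg

for name, before, after in [("HCP", hcp_before, hcp_after), ("DNA", dna_before, dna_after)]:
    print(f"{name}: {before:,.0f} -> {after:,.1f} ng/mg product, "
          f"log10 reduction = {math.log10(before / after):.2f}")
```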
A study of coherence of soft gluons in hadron jets
NASA Astrophysics Data System (ADS)
Akrawy, M. Z.; Alexander, G.; Allison, J.; Allport, P. P.; Anderson, K. J.; Armitage, J. C.; Arnison, G. T. J.; Ashton, P.; Azuelos, G.; Baines, J. T. M.; Ball, A. H.; Banks, J.; Barker, G. J.; Barlow, R. J.; Batley, J. R.; Becker, J.; Behnke, T.; Bell, K. W.; Bella, G.; Bethke, S.; Biebel, O.; Binder, U.; Bloodworth, I. J.; Bock, P.; Breuker, H.; Brown, R. M.; Brun, R.; Buijs, A.; Burckhart, H. J.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Chrin, J. T. M.; Clarke, P. E. L.; Cohen, I.; Collins, W. J.; Conboy, J. E.; Couch, M.; Coupland, M.; Cuffiani, M.; Dado, S.; Dallavalle, G. M.; Debu, P.; Deninno, M. M.; Dieckmann, A.; Dittmar, M.; Dixit, M. S.; Duchovni, E.; Duerdoth, I. P.; Dumas, D. J. P.; El Mamouni, H.; Elcombe, P. A.; Estabrooks, P. G.; Etzion, E.; Fabbri, F.; Farthouat, P.; Fischer, H. M.; Fong, D. G.; French, M. T.; Fukunaga, C.; Gaidot, A.; Ganel, O.; Gary, J. W.; Gascon, J.; Geddes, N. I.; Gee, C. N. P.; Geich-Gimbel, C.; Gensler, S. W.; Gentit, F. X.; Giacomelli, G.; Gibson, V.; Gibson, W. R.; Gillies, J. D.; Goldberg, J.; Goodrick, M. J.; Gorn, W.; Granite, D.; Gross, E.; Grunhaus, J.; Hagedorn, H.; Hagemann, J.; Hansroul, M.; Hargrove, C. K.; Harrus, I.; Hart, J.; Hattersley, P. M.; Hauschild, M.; Hawkes, C. M.; Heflin, E.; Hemingway, R. J.; Heuer, R. D.; Hill, J. C.; Hillier, S. J.; Ho, C.; Hobbs, J. D.; Hobson, P. R.; Hochman, D.; Holl, B.; Homer, R. J.; Hou, S. R.; Howarth, C. P.; Hughes-Jones, R. E.; Humbert, R.; Igo-Kemenes, P.; Ihssen, H.; Imrie, D. C.; Jawahery, A.; Jeffreys, P. W.; Jeremie, H.; Jimack, M.; Jobes, M.; Jones, R. W. L.; Jovanovic, P.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Kellogg, R. G.; Kennedy, B. W.; Kleinwort, C.; Klem, D. E.; Knop, G.; Kobayashi, T.; Kokott, T. P.; Köpke, L.; Kowalewski, R.; Kreutzmann, H.; Kroll, J.; Kuwano, M.; Kyberd, P.; Lafferty, G. D.; Lamarche, F.; Larson, W. J.; Layter, J. G.; Le Du, P.; Leblanc, P.; Lee, A. M.; Lehto, M. H.; Lellouch, D.; Lennert, P.; Lessard, L.; Levinson, L.; Lloyd, S. L.; Loebinger, F. K.; Lorah, J. M.; Lorazo, B.; Losty, M. J.; Ludwig, J.; Ma, J.; Macbeth, A. A.; Mannelli, M.; Marcellini, S.; Maringer, G.; Martin, A. J.; Martin, J. P.; Mashimo, T.; Mättig, P.; Maur, U.; McMahon, T. J.; McNutt, J. R.; McPherson, A. C.; Meijers, F.; Menszner, D.; Merritt, F. S.; Mes, H.; Michelini, A.; Middleton, R. P.; Mikenberg, G.; Miller, D. J.; Milstene, C.; Minowa, M.; Mohr, W.; Montanari, A.; Mori, T.; Moss, M. W.; Murphy, P. G.; Murray, W. J.; Nellen, B.; Nguyen, H. H.; Nozaki, M.; O'Dowd, A. J. P.; O'Neale, S. W.; O'Neill, B. P.; Oakham, F. G.; Odorici, F.; Ogg, M.; Oh, H.; Oreglia, M. J.; Orito, S.; Pansart, J. P.; Patrick, G. N.; Pawley, S. J.; Pfister, P.; Pilcher, J. E.; Pinfold, J. L.; Plane, D. E.; Poli, B.; Pouladdej, A.; Pritchard, T. W.; Quast, G.; Raab, J.; Redmond, M. W.; Rees, D. L.; Regimbald, M.; Riles, K.; Roach, C. M.; Robins, S. A.; Rollnik, A.; Roney, J. M.; Rossberg, S.; Rossi, A. M.; Routenburg, P.; Runge, K.; Runolfsson, O.; Sanghera, S.; Sansum, R. A.; Sasaki, M.; Saunders, B. J.; Schaile, A. D.; Schaile, O.; Schappert, W.; Scharff-Hansen, P.; Schreiber, S.; Schwarz, J.; Shapira, A.; Shen, B. C.; Sherwood, P.; Simon, A.; Singh, P.; Siroli, G. P.; Skuia, A.; Smith, A. M.; Smith, T. J.; Snow, G. A.; Springer, R. W.; Sproston, M.; Stephens, K.; Stier, H. E.; Ströhmer, R.; Strom, D.; Takeda, H.; Takeshita, T.; Tsukamoto, T.; Turner, M. F.; Tysarczyk-Niemeyer, G.; Van den plas, D.; VanDalen, G. J.; Vasseur, G.; Virtue, C. 
J.; von der Schmitt, H.; von Krogh, J.; Wagner, A.; Wahl, C.; Ward, C. P.; Ward, D. R.; Waterhouse, J.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Weber, M.; Weisz, S.; Wells, P. S.; Wermes, N.; Weymann, M.; Wilson, G. W.; Wilson, J. A.; Wingerter, I.; Winterer, V.-H.; Wood, N. C.; Wotton, S.; Wuensch, B.; Wyatt, T. R.; Yaari, R.; Yang, Y.; Yekutieli, G.; Yoshida, T.; Zeuner, W.; Zorn, G. T.; OPAL Collaboration
1990-09-01
We study the inclusive momentum distribution of charged particles in multihadronic events produced in e+e- annihilations at E_CM ≈ M(Z0). We find agreement with the analytical formulae for gluon production that include the phenomena of soft gluon interference. Using data from CM energies between 14 and 91 GeV, we study the dependence of the inclusive momentum distribution on the centre of momentum energy. We find that the analytical formulae describe the data over the entire energy range. Both the momentum distribution at a fixed energy and the change with energy are described by QCD shower Monte Carlo programs which include either coherent gluon branchings or string fragmentation. Simple incoherent models with independent fragmentation fail to reproduce the energy dependence and momentum spectra.
Topology versus Anderson localization: Nonperturbative solutions in one dimension
NASA Astrophysics Data System (ADS)
Altland, Alexander; Bagrets, Dmitry; Kamenev, Alex
2015-02-01
We present an analytic theory of quantum criticality in quasi-one-dimensional topological Anderson insulators. We describe these systems in terms of two parameters (g, χ) representing localization and topological properties, respectively. Certain critical values of χ (half-integer for Z classes, or zero for Z2 classes) define phase boundaries between distinct topological sectors. Upon increasing system size, the two parameters exhibit flow similar to the celebrated two-parameter flow of the integer quantum Hall insulator. However, unlike the quantum Hall system, an exact analytical description of the entire phase diagram can be given in terms of the transfer-matrix solution of corresponding supersymmetric nonlinear sigma models. In Z2 classes we uncover a hidden supersymmetry, present at the quantum critical point.
Technology-assisted psychoanalysis.
Scharff, Jill Savege
2013-06-01
Teleanalysis-remote psychoanalysis by telephone, voice over internet protocol (VoIP), or videoteleconference (VTC)-has been thought of as a distortion of the frame that cannot support authentic analytic process. Yet it can augment continuity, permit optimum frequency of analytic sessions for in-depth analytic work, and enable outreach to analysands in areas far from specialized psychoanalytic centers. Theoretical arguments against teleanalysis are presented and countered and its advantages and disadvantages discussed. Vignettes of analytic process from teleanalytic sessions are presented, and indications, contraindications, and ethical concerns are addressed. The aim is to provide material from which to judge the authenticity of analytic process supported by technology.
Ammar, T A; Abid, K Y; El-Bindary, A A; El-Sonbati, A Z
2015-12-01
Most drinking water industries are closely examining options to maintain a certain level of disinfectant residual throughout the entire distribution system. Chlorine dioxide is one of the promising disinfectants that is usually used as a secondary disinfectant, whereas the selection of the proper monitoring analytical technique to ensure disinfection and regulatory compliance has been debated within the industry. This research endeavored to objectively compare the performance of commercially available analytical techniques used for chlorine dioxide measurements (namely, chronoamperometry, DPD (N,N-diethyl-p-phenylenediamine), Lissamine Green B (LGB WET) and amperometric titration), to determine the superior technique. The commonly available commercial analytical techniques were evaluated over a wide range of chlorine dioxide concentrations. In reference to pre-defined criteria, the superior analytical technique was determined. To discern the effectiveness of this superior technique, various factors, such as sample temperature, high ionic strength, and other interferences that might influence its performance, were examined. Among the four techniques, the chronoamperometry technique showed a significant level of accuracy and precision. Furthermore, the various influencing factors studied did not diminish the technique's performance, which was fairly adequate in all matrices. This study is a step towards proper disinfection monitoring and it confidently assists engineers with chlorine dioxide disinfection system planning and management.
Multianalyte detection using a capillary-based flow immunosensor.
Narang, U; Gauger, P R; Kusterbeck, A W; Ligler, F S
1998-01-01
A highly sensitive, dual-analyte detection system using capillary-based immunosensors has been designed for explosive detection. This model system consists of two capillaries, one coated with antibodies specific for 2,4,6-trinitrotoluene (TNT) and the other specific for hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX), combined into a single device. The fused silica capillaries are prepared by coating anti-TNT and anti-RDX antibodies onto the silanized inner walls using a hetero-bifunctional crosslinker. After immobilization, the antibodies are saturated with a suitable fluorophore-labeled antigen. A "T" connector is used to continuously flow the buffer solution through the individual capillaries. To perform the assay, an aliquot of TNT or RDX or a mixture of the two analytes is injected into the continuous flow stream. In each capillary, the target analyte displaces the fluorophore-labeled antigen from the binding pocket of the antibody. The labeled antigen displaced from either capillary is detected downstream using two portable spectrofluorometers. The limits of detection for TNT and RDX in the multi-analyte format are 44 fmol (100 microliters of 0.1 ng/ml TNT solution) and 224 fmol (100 microliters of 0.5 ng/ml RDX solution), respectively. The entire assay for both analytes can be performed in less than 3 min.
ERIC Educational Resources Information Center
Zupanc, Darko; Urank, Matjaz; Bren, Matevz
2009-01-01
From 1995, data on students' achievement in schools (i.e., teacher's grades) and all data on achievement in the 5-subject group certificate--the "Matura" exam--have been systematically gathered for the entire yearly cohort of students in upper secondary education in Slovenia. This paper describes an on-line data selection system and data…
Rapid Building Assessment Project
2014-05-01
ongoing management of commercial energy efficiency. No other company offers all of these proven services on a seamless, integrated Software-as-a-Service … FirstFuel has added a suite of additional Software-as-a-Service analytics capabilities to support the entire energy efficiency lifecycle, including … the client side. In this document, we refer to the service-side software as "BUILDER" and the client software as "BuilderRED," following the Army
Modeling of soil water retention from saturation to oven dryness
Rossi, Cinzia; Nimmo, John R.
1994-01-01
Most analytical formulas used to model moisture retention in unsaturated porous media have been developed for the wet range and are unsuitable for applications in which low water contents are important. We have developed two models that fit the entire range from saturation to oven dryness in a practical and physically realistic way with smooth, continuous functions that have few parameters. Both models incorporate a power law and a logarithmic dependence of water content on suction, differing in how these two components are combined. In one model, functions are added together (model “sum”); in the other they are joined smoothly together at a discrete point (model “junction”). Both models also incorporate recent developments that assure a continuous derivative and force the function to reach zero water content at a finite value of suction that corresponds to oven dryness. The models have been tested with seven sets of water retention data that each cover nearly the entire range. The three-parameter sum model fits all data well and is useful for extrapolation into the dry range when data for it are unavailable. The two-parameter junction model fits most data sets almost as well as the sum model and has the advantage of being analytically integrable for convenient use with capillary-bundle models to obtain the unsaturated hydraulic conductivity.
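A generic sketch of the "junction" construction is given below: a Brooks-Corey-type power-law branch for the wet range is joined, with matching value and slope, to a logarithmic branch that reaches zero water content at an oven-dryness suction. The slope-matching condition fixes the junction suction analytically. Parameter values are illustrative, and this is a stand-in for the idea rather than the exact published parameterization.

```python
import numpy as np

# Illustrative parameters: saturated water content, air-entry suction, pore-size index,
# and the suction corresponding to oven dryness (arbitrary but consistent units).
theta_s, psi_b, lam, psi_d = 0.45, 1.0e2, 0.35, 1.0e7

psi_j = psi_d * np.exp(-1.0 / lam)           # junction suction from slope matching
a = lam * theta_s * (psi_b / psi_j)**lam     # amplitude of the logarithmic dry branch

def theta(psi):
    """Water content: saturated plateau, power-law wet branch, log dry branch to zero at psi_d."""
    psi = np.asarray(psi, dtype=float)
    wet = theta_s * np.minimum(1.0, (psi_b / psi)**lam)
    dry = a * np.log(psi_d / psi)
    return np.where(psi < psi_j, wet, dry)

for p in [10.0, 1e3, 1e5, psi_j, 5e6, psi_d]:
    print(f"psi = {p:10.3g}   theta = {float(theta(p)):.4f}")
```

Matching both value and derivative at the junction is what keeps the composite curve smooth and continuous, one of the stated requirements of both models.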
Analytical Design Package (ADP2): A computer aided engineering tool for aircraft transparency design
NASA Technical Reports Server (NTRS)
Wuerer, J. E.; Gran, M.; Held, T. W.
1994-01-01
The Analytical Design Package (ADP2) is being developed as a part of the Air Force Frameless Transparency Program (FTP). ADP2 is an integrated design tool consisting of existing analysis codes and Computer Aided Engineering (CAE) software. The objective of the ADP2 is to develop and confirm an integrated design methodology for frameless transparencies, related aircraft interfaces, and their corresponding tooling. The application of this methodology will generate high confidence for achieving a qualified part prior to mold fabrication. ADP2 is a customized integration of analysis codes, CAE software, and material databases. The primary CAE integration tool for the ADP2 is P3/PATRAN, a commercial-off-the-shelf (COTS) software tool. The open architecture of P3/PATRAN allows customized installations with different applications modules for specific site requirements. Integration of material databases allows the engineer to select a material, and those material properties are automatically called into the relevant analysis code. The ADP2 materials database will be composed of four independent schemas: CAE Design, Processing, Testing, and Logistics Support. The design of ADP2 places major emphasis on the seamless integration of CAE and analysis modules with a single intuitive graphical interface. This tool is being designed to serve and be used by an entire project team, i.e., analysts, designers, materials experts, and managers. The final version of the software will be delivered to the Air Force in Jan. 1994. The Analytical Design Package (ADP2) will then be ready for transfer to industry. The package will be capable of a wide range of design and manufacturing applications.
Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC): User Guide. Version 3
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Bednarcyk, B. A.; Wilt, T. E.; Trowbridge, D.
1999-01-01
The ability to accurately predict the thermomechanical deformation response of advanced composite materials continues to play an important role in the development of these strategic materials. Analytical models that predict the effective behavior of composites are used not only by engineers performing structural analysis of large-scale composite components but also by material scientists in developing new material systems. For an analytical model to fulfill these two distinct functions it must be based on a micromechanics approach which utilizes physically based deformation and life constitutive models and allows one to generate the average (macro) response of a composite material given the properties of the individual constituents and their geometric arrangement. Here the user guide is presented for the recently developed, computationally efficient and comprehensive micromechanics analysis code, MAC, whose predictive capability rests entirely upon the fully analytical generalized method of cells (GMC) micromechanics model. MAC/GMC is a versatile form of research software that "drives" the doubly or triply periodic micromechanics constitutive models based upon GMC. MAC/GMC enhances the basic capabilities of GMC by providing a modular framework wherein 1) various thermal, mechanical (stress or strain control) and thermomechanical load histories can be imposed, 2) different integration algorithms may be selected, 3) a variety of material constitutive models (both deformation and life) may be utilized and/or implemented, 4) a variety of fiber architectures (unidirectional, laminate, and woven) may be easily accessed through their corresponding representative volume elements contained within the supplied library of RVEs or input directly by the user, and 5) graphical post-processing of the macro and/or micro field quantities is made available.
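GMC itself is well beyond a few lines, but the role such a code plays, estimating effective composite properties from constituent properties and their arrangement, can be illustrated with the elementary Voigt and Reuss estimates below. Moduli and fiber fraction are illustrative values, and this is only a back-of-the-envelope reference point, not MAC/GMC.

```python
# Elementary bounds on the effective stiffness of a unidirectional fiber/matrix composite.
E_f, E_m = 380.0, 3.5      # fiber and matrix Young's moduli (GPa), illustrative
v_f = 0.6                  # fiber volume fraction

E_axial = v_f * E_f + (1 - v_f) * E_m                 # Voigt (rule of mixtures), axial direction
E_transverse = 1.0 / (v_f / E_f + (1 - v_f) / E_m)    # Reuss (inverse rule), transverse direction

print(f"axial modulus ~ {E_axial:.1f} GPa, transverse modulus ~ {E_transverse:.1f} GPa")
```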
Intersubjectivity and the creation of meaning in the analytic process.
Maier, Christian
2014-11-01
By means of a clinical illustration, the author describes how the intersubjective exchanges involved in an analytic process facilitate the representation of affects and memories which have been buried in the unconscious or indeed have never been available to consciousness. As a result of projective identificatory processes in the analytic relationship, in this example the analyst falls into a situation of helplessness which connects with his own traumatic experiences. Then he gets into a formal regression of the ego and responds with a so-to-speak hallucinatory reaction-an internal image which enables him to keep the analytic process on track and, later on, to construct an early traumatic experience of the analysand. © 2014, The Society of Analytical Psychology.
NASA Astrophysics Data System (ADS)
Wörner, M.; Cai, X.; Alla, H.; Yue, P.
2018-03-01
The Cox–Voinov law on dynamic spreading relates the difference between the cubic values of the apparent contact angle (θ) and the equilibrium contact angle to the instantaneous contact line speed (U). Comparing spreading results with this hydrodynamic wetting theory requires accurate data of θ and U during the entire process. We consider the case when gravitational forces are negligible, so that the shape of the spreading drop can be closely approximated by a spherical cap. Using geometrical dependencies, we transform the general Cox law into a semi-analytical relation for the temporal evolution of the spreading radius. Evaluating this relation numerically shows that the spreading curve becomes independent from the gas viscosity when the latter is less than about 1% of the drop viscosity. Since inertia may invalidate the assumptions made in the initial stage of spreading, a quantitative criterion for the time when the spherical-cap assumption is reasonable is derived utilizing phase-field simulations on the spreading of partially wetting droplets. The developed theory allows us to compare experimental/computational spreading curves for spherical-cap shaped droplets with Cox theory without the need for instantaneous data of θ and U. Furthermore, the fitting of Cox theory enables us to estimate the effective slip length. This is potentially useful for establishing relationships between slip length and parameters in numerical methods for moving contact lines.
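A minimal numerical version of the semi-analytical idea is sketched below: the drop is forced to remain a spherical cap of fixed volume, so the apparent contact angle follows from the contact radius, and the radius is advanced with the Cox-Voinov speed (θ³ − θ_eq³ proportional to the capillary number). The fluid properties, drop volume, and the constant logarithmic factor ln(L/λ) are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import brentq

mu, sigma, theta_eq = 0.05, 0.03, np.deg2rad(30)   # viscosity (Pa s), surface tension (N/m), rad
V = 1e-9                                           # drop volume (m^3), about one microliter
log_factor = np.log(1e4)                           # ln(L/lambda), held constant here

def cap_angle(a):
    """Contact angle of a spherical cap with base radius a and fixed volume V."""
    f = lambda th: np.pi * a**3 * np.tan(th / 2) * (3 + np.tan(th / 2)**2) / 6 - V
    return brentq(f, 1e-4, np.pi - 1e-4)

# March the contact radius forward with the Cox-Voinov contact-line speed.
a, t, dt = 0.3e-3, 0.0, 1e-4
for _ in range(30000):
    theta = cap_angle(a)
    U = sigma * (theta**3 - theta_eq**3) / (9 * mu * log_factor)
    if abs(U) * dt < 1e-12:        # effectively at equilibrium
        break
    a += U * dt
    t += dt

print(f"t = {t:.3f} s, radius = {a*1e3:.3f} mm, apparent angle = {np.degrees(cap_angle(a)):.1f} deg")
```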
Boysen, Angela K; Heal, Katherine R; Carlson, Laura T; Ingalls, Anitra E
2018-01-16
The goal of metabolomics is to measure the entire range of small organic molecules in biological samples. In liquid chromatography-mass spectrometry-based metabolomics, formidable analytical challenges remain in removing the nonbiological factors that affect chromatographic peak areas. These factors include sample matrix-induced ion suppression, chromatographic quality, and analytical drift. The combination of these factors is referred to as obscuring variation. Some metabolomics samples can exhibit intense obscuring variation due to matrix-induced ion suppression, rendering large amounts of data unreliable and difficult to interpret. Existing normalization techniques have limited applicability to these sample types. Here we present a data normalization method to minimize the effects of obscuring variation. We normalize peak areas using a batch-specific normalization process, which matches measured metabolites with isotope-labeled internal standards that behave similarly during the analysis. This method, called best-matched internal standard (B-MIS) normalization, can be applied to targeted or untargeted metabolomics data sets and yields relative concentrations. We evaluate and demonstrate the utility of B-MIS normalization using marine environmental samples and laboratory grown cultures of phytoplankton. In untargeted analyses, B-MIS normalization allowed for inclusion of mass features in downstream analyses that would have been considered unreliable without normalization due to obscuring variation. B-MIS normalization for targeted or untargeted metabolomics is freely available at https://github.com/IngallsLabUW/B-MIS-normalization .
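The matching idea can be sketched compactly: for each measured metabolite, choose the isotope-labeled internal standard whose peak-area ratio is most stable (lowest relative standard deviation) across replicate injections of a pooled sample, and report areas relative to that standard. The data below are synthetic and the released B-MIS code may use additional criteria; this only illustrates the selection step.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy peak-area tables: rows are replicate injections of a pooled sample, columns are compounds.
metabolites = pd.DataFrame(rng.lognormal(10, 0.3, size=(8, 3)),
                           columns=["glycine betaine", "homarine", "DMSP"])
standards = pd.DataFrame(rng.lognormal(9, 0.1, size=(8, 2)),
                         columns=["d3-alanine", "13C-glycine betaine"])

def best_matched_is(met_col):
    """Internal standard giving the lowest RSD of the metabolite/IS peak-area ratio."""
    rsd = {s: (met_col / standards[s]).std() / (met_col / standards[s]).mean()
           for s in standards.columns}
    return min(rsd, key=rsd.get)

for m in metabolites.columns:
    s = best_matched_is(metabolites[m])
    normalized = metabolites[m] / standards[s]      # relative (unitless) concentration
    print(f"{m}: matched to {s}, normalized mean = {normalized.mean():.3f}")
```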
Analytical and experimental study of flow phenomena in noncavitating rocket pump inducers
NASA Technical Reports Server (NTRS)
Lakshminarayana, B.
1981-01-01
The flow processes in rocket pump inducers are summarized. The experimental investigations were carried out with air as the test medium. The major characteristic features of the rocket pump inducers are a low flow coefficient (0.05 to 0.2), a large stagger angle (70 deg to 85 deg), and high-solidity blades of little or no camber. The investigations are concerned with the effect of viscosity, not the effects of cavitation. Flow visualization and conventional and hot-wire probe measurements inside and at the exit of the blade passage were the methods used. The experiment was carried out using four-, three-, and two-bladed inducers with cambered blades. Both the passage and the exit flow were measured. The basic research and boundary layer investigation was carried out using a helical flat plate (of the same dimensions as the inducer blades tested) and a flat-plate helical inducer (four-bladed). Detailed mean and turbulence flow fields inside the passage as well as at the exit of the rotor were derived from these measurements. The boundary layer, endwall, and other passage data reveal the extremely complex nature of the flow, with major effects of viscosity present across the entire passage. Several analyses were carried out to predict the flow field in inducers. These included an approximate analysis, the shear pumping analysis, and a numerical solution of exact viscous equations with approximate modeling for the viscous terms.
NASA Astrophysics Data System (ADS)
Zhang, Hui; Wang, Deqing; Wu, Wenjun; Hu, Hongping
2012-11-01
In today's business environment, enterprises are increasingly under pressure to process the vast amount of data produced every day within enterprises. One method is to focus on business intelligence (BI) applications and increase the commercial added value through such business analytics activities. A term weighting scheme, which is used to represent documents as vectors in the term space, is a vital task in enterprise Information Retrieval (IR), text categorisation, text analytics, etc. When determining term weight in a document, the traditional TF-IDF scheme sets the weight value for a term considering only its occurrence frequency within the document and in the entire set of documents, which means some meaningful terms cannot get an appropriate weight. In this article, we propose a new term weighting scheme called Term Frequency - Function of Document Frequency (TF-FDF) to address this issue. Instead of using a monotonically decreasing function such as Inverse Document Frequency, FDF presents a convex function that dynamically adjusts weights according to the significance of the words in a document set. This function can be manually tuned based on the distribution of the most meaningful words which semantically represent the document set. Our experiments show that TF-FDF can achieve a higher value of Normalised Discounted Cumulative Gain in IR than TF-IDF and its variants, improving the accuracy of relevance ranking of the IR results.
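The abstract does not give the functional form of FDF, so the sketch below uses a tunable convex parabola in normalized document frequency purely as a stand-in, to show where a non-monotone scheme starts to diverge from standard TF-IDF weights. The corpus, the stand-in function, and its parameters are all hypothetical.

```python
import math
from collections import Counter

docs = [
    "enterprise business intelligence and analytics",
    "business analytics for enterprise data",
    "term weighting for information retrieval",
    "retrieval of enterprise documents with term weighting",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)
df = Counter(t for doc in tokenized for t in set(doc))   # document frequency

def tf_idf(term, doc):
    return doc.count(term) * math.log(N / df[term])

def tf_fdf(term, doc, x0=0.25, a=4.0, b=0.3):
    # Hypothetical convex stand-in for FDF: a tunable parabola in normalized df,
    # so terms near the chosen df need not follow IDF's monotone decay.
    x = df[term] / N
    return doc.count(term) * (a * (x - x0) ** 2 + b)

doc = tokenized[1]
for term in sorted(set(doc)):
    print(f"{term:12s} df={df[term]}  tf-idf={tf_idf(term, doc):.3f}  tf-fdf={tf_fdf(term, doc):.3f}")
```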
Lismont, Jasmien; Janssens, Anne-Sophie; Odnoletkova, Irina; Vanden Broucke, Seppe; Caron, Filip; Vanthienen, Jan
2016-10-01
The aim of this study is to guide healthcare instances in applying process analytics on healthcare processes. Process analytics techniques can offer new insights in patient pathways, workflow processes, adherence to medical guidelines and compliance with clinical pathways, but also bring along specific challenges which will be examined and addressed in this paper. The following methodology is proposed: log preparation, log inspection, abstraction and selection, clustering, process mining, and validation. It was applied on a case study in the type 2 diabetes mellitus domain. Several data pre-processing steps are applied and clarify the usefulness of process analytics in a healthcare setting. Healthcare utilization, such as diabetes education, is analyzed and compared with diabetes guidelines. Furthermore, we take a look at the organizational perspective and the central role of the GP. This research addresses four challenges: healthcare processes are often patient and hospital specific which leads to unique traces and unstructured processes; data is not recorded in the right format, with the right level of abstraction and time granularity; an overflow of medical activities may cloud the analysis; and analysts need to deal with data not recorded for this purpose. These challenges complicate the application of process analytics. It is explained how our methodology takes them into account. Process analytics offers new insights into the medical services patients follow, how medical resources relate to each other and whether patients and healthcare processes comply with guidelines and regulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
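As a concrete illustration of the process-mining step in such a methodology, the sketch below builds a directly-follows frequency table from a toy event log with case, activity, and timestamp columns; this relation is the basic ingredient behind many process-discovery views. The log and activity names are invented for illustration.

```python
from collections import Counter
import pandas as pd

# Toy event log: one row per recorded activity (case id, activity, timestamp).
log = pd.DataFrame({
    "case":     ["p1", "p1", "p1", "p2", "p2", "p2", "p2"],
    "activity": ["GP visit", "lab test", "diabetes education",
                 "GP visit", "diabetes education", "lab test", "GP visit"],
    "timestamp": pd.to_datetime([
        "2015-01-05", "2015-01-12", "2015-02-01",
        "2015-01-07", "2015-01-20", "2015-02-11", "2015-03-02"]),
})

# Directly-follows relation: how often activity A is immediately followed by B within a case.
dfg = Counter()
for _, trace in log.sort_values("timestamp").groupby("case"):
    acts = trace["activity"].tolist()
    dfg.update(zip(acts, acts[1:]))

for (a, b), n in dfg.most_common():
    print(f"{a} -> {b}: {n}")
```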
Pruncu, C I; Azari, Z; Casavola, C; Pappalettere, C
2015-01-01
The behaviour of materials is governed by the surrounding environment. The contact area between the material and the surrounding environment is the likely spot where different forms of degradation, particularly rust, may be generated. A rust prevention treatment, like bluing, inhibitors, humidity control, coatings, and galvanization, will be necessary. The galvanization process aims to protect the surface of the material by depositing a layer of metallic zinc by either hot-dip galvanizing or electroplating. In the hot-dip galvanizing process, a metallic bond between steel and metallic zinc is obtained by immersing the steel in a zinc bath at a temperature of around 460°C. Although the hot-dip galvanizing procedure is recognized to be one of the most effective techniques to combat corrosion, cracks can arise in the intermetallic δ layer. These cracks can affect the life of the coated material and decrease the service lifetime of the entire structure. In the present paper the mechanical response of hot-dip galvanized steel subjected to mechanical loading conditions is investigated. Experimental tests were performed and corroborative numerical and analytical methods were then applied in order to describe both the mechanical behaviour and the processes of crack propagation in a bimaterial such as the zinc-coated material.
NASA Astrophysics Data System (ADS)
Das Bhowmik, R.; Arumugam, S.
2015-12-01
Multivariate downscaling techniques have exhibited superiority over univariate regression schemes in preserving cross-correlations between multiple variables - precipitation and temperature - from GCMs. This study focuses on two aspects: (a) developing analytical solutions for estimating biases in cross-correlations arising from univariate downscaling approaches, and (b) quantifying the uncertainty in land-surface states and fluxes due to biases in cross-correlations in downscaled climate forcings. Both aspects are evaluated using climate forcings available from historical climate simulations and CMIP5 hindcasts over the entire US. The analytical solution relates the univariate regression parameters, the coefficient of determination of the regression, and the covariance ratio between GCM and downscaled values. The analytical solutions are compared with the downscaled univariate forcings by choosing the desired p-value (Type-1 error) for preserving the observed cross-correlation. To quantify the impact of cross-correlation biases on streamflow and groundwater estimates, we corrupt the downscaled climate forcings with different cross-correlation structures.
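A minimal Monte Carlo sketch can make the described bias concrete: regressing each variable separately on a GCM predictor and adding independently resampled residuals degrades the cross-correlation between the downscaled pair. All distributions, correlation values and variable names below are assumptions for illustration, not the study's analytical derivation or data.

```python
# Toy illustration of cross-correlation bias from univariate downscaling.
# Observed precip/temp pairs are regressed separately on a GCM predictor and
# "downscaled" with independently resampled residuals; the cross-correlation of
# the downscaled pair is then compared with the observed one.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
gcm = rng.normal(size=n)                       # coarse GCM predictor
precip = 0.6 * gcm + rng.normal(scale=0.8, size=n)
temp = 0.5 * gcm + 0.4 * precip + rng.normal(scale=0.7, size=n)

def univariate_downscale(y, x):
    b, a = np.polyfit(x, y, 1)                 # separate regression per variable
    resid = y - (a + b * x)
    return a + b * x + rng.permutation(resid)  # residuals resampled independently

p_ds = univariate_downscale(precip, gcm)
t_ds = univariate_downscale(temp, gcm)

print("observed corr(P, T):  ", round(np.corrcoef(precip, temp)[0, 1], 3))
print("downscaled corr(P, T):", round(np.corrcoef(p_ds, t_ds)[0, 1], 3))
```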
NASA Astrophysics Data System (ADS)
Rim, Jung H.
Accurate and fast determination of the activity of radionuclides in a sample is critical for nuclear forensics and emergency response. Radioanalytical techniques are well established for radionuclide measurement; however, they are slow and labor-intensive, requiring extensive radiochemical separations and purification prior to analysis. Given these limitations of current methods, there is great interest in a new technique to rapidly process samples. This dissertation describes a new analyte extraction medium called Polymer Ligand Film (PLF) developed to rapidly extract radionuclides. A Polymer Ligand Film is a polymer medium with ligands incorporated in its matrix that selectively and rapidly extract analytes from a solution. The main focus of the new technique is to shorten and simplify the procedure necessary to chemically isolate radionuclides for determination by alpha spectrometry or beta counting. Five different ligands were tested for plutonium extraction: bis(2-ethylhexyl) methanediphosphonic acid (H2DEH[MDP]), di(2-ethyl hexyl) phosphoric acid (HDEHP), trialkyl methylammonium chloride (Aliquat-336), 4,4'(5')-di-t-butylcyclohexano 18-crown-6 (DtBuCH18C6), and 2-ethylhexyl 2-ethylhexylphosphonic acid (HEH[EHP]). The ligands that were effective for plutonium extraction were further studied for uranium extraction. Plutonium recovery by PLFs showed a dependence on nitric acid concentration and ligand-to-total-mass ratio. H2DEH[MDP] PLFs performed best at 1:10 and 1:20 ratios: 50.44% and 47.61% of plutonium were extracted on the surface of PLFs with 1M nitric acid for the 1:10 and 1:20 PLF, respectively. HDEHP PLF provided the best combination of alpha spectroscopy resolution and plutonium recovery with the 1:5 PLF when used with 0.1M nitric acid. The overall analyte recovery was lower than that of electrodeposited samples, which typically have recoveries above 80%. However, PLF is designed to be a rapid, field-deployable screening technique, and consistency is more important than recovery. PLFs were also tested using blind quality control samples and the activities were accurately measured. It is important to point out that PLFs were consistently susceptible to analytes penetrating and depositing below the surface. The internal radiation within the body of the PLF is mostly contained and did not cause excessive self-attenuation or peak broadening in alpha spectroscopy. The analyte penetration issue proved beneficial for destructive analysis. H2DEH[MDP] PLF was tested with environmental samples to fully understand the capabilities and limitations of the PLF in relevant environments. The extraction system was very effective in extracting plutonium from environmental water collected from Mortandad Canyon at Los Alamos National Laboratory with minimal sample processing. Soil samples were more difficult to process than the water samples. Analytes were first leached from the soil matrices using nitric acid before processing with the PLF; this approach limited plutonium extraction by the PLF. The soil samples from Mortandad Canyon, which are about 1% iron by weight, were effectively processed with the PLF system. Even with certain limitations of the PLF extraction system, this technique was able to considerably decrease the sample analysis time: the entire environmental sample was analyzed within one to two days. The decrease in time can be attributed to the fact that the PLF replaces column chromatography and electrodeposition with a single step for preparing alpha spectrometry samples.
The two-step process of column chromatography and electrodeposition takes a couple of days to a week to complete, depending on the sample. The decrease in time and the simplified procedure make this technique a unique solution for application to nuclear forensics and emergency response. A large number of samples can be quickly analyzed, and selected samples can be further analyzed with more sensitive techniques based on the initial data. The deployment of a PLF system as a screening method will greatly reduce the total analysis time required to obtain meaningful isotopic data for nuclear forensics applications. (Abstract shortened by UMI.)
The Human is the Loop: New Directions for Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Hossain, Shahriar H.; Ramakrishnan, Naren
2014-01-28
Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a ‘human in the loop’ philosophy for visual analytics to a ‘human is the loop’ viewpoint, where the focus is on recognizing analysts’ work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges.
Analytic and heuristic processes in the detection and resolution of conflict.
Ferreira, Mário B; Mata, André; Donkin, Christopher; Sherman, Steven J; Ihmels, Max
2016-10-01
Previous research with the ratio-bias task found larger response latencies for conflict trials, where the heuristic- and analytic-based responses are assumed to be in opposition (e.g., choosing between 1/10 and 9/100 ratios of success), when compared to no-conflict trials, where both processes converge on the same response (e.g., choosing between 1/10 and 11/100). This pattern is consistent with parallel dual-process models, which assume that there is effective, rather than lax, monitoring of the output of heuristic processing. It is, however, unclear why conflict resolution sometimes fails. Ratio-biased choices may increase because of a decline in analytical reasoning (leaving heuristic-based responses unopposed) or because of a rise in heuristic processing (making it more difficult for analytic processes to override the heuristic preferences). Using the process-dissociation procedure, we found that instructions to respond logically and response speed affected analytic (controlled) processing (C), leaving heuristic processing (H) unchanged, whereas the intuitive preference for large numerators (as assessed by responses to equal-ratio trials) affected H but not C. These findings create new challenges to the debate between dual-process and single-process accounts, which are discussed.
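The process-dissociation logic referred to above can be sketched with the standard Jacoby-style equations; the exact parameterization used by the authors may differ, so the formulas and proportions below are stated assumptions for illustration only.

```python
# Process-dissociation sketch: estimate analytic (C) and heuristic (H) parameters
# from choice proportions. Assumes the classic equations
#   P(correct | no-conflict trial)      = C + (1 - C) * H
#   P(heuristic error | conflict trial) = (1 - C) * H
# which may not match the authors' exact formulation; the proportions are made up.
def process_dissociation(p_correct_congruent: float, p_heuristic_error_conflict: float):
    c = p_correct_congruent - p_heuristic_error_conflict
    h = p_heuristic_error_conflict / (1.0 - c) if c < 1.0 else float("nan")
    return c, h

c, h = process_dissociation(p_correct_congruent=0.90, p_heuristic_error_conflict=0.30)
print(f"C (analytic) = {c:.2f}, H (heuristic) = {h:.2f}")  # C = 0.60, H = 0.75
```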
Focused analyte spray emission apparatus and process for mass spectrometric analysis
Roach, Patrick J [Kennewick, WA; Laskin, Julia [Richland, WA; Laskin, Alexander [Richland, WA
2012-01-17
An apparatus and process are disclosed that deliver an analyte deposited on a substrate to a mass spectrometer that provides for trace analysis of complex organic analytes. Analytes are probed using a small droplet of solvent that is formed at the junction between two capillaries. A supply capillary maintains the droplet of solvent on the substrate; a collection capillary collects analyte desorbed from the surface and emits analyte ions as a focused spray to the inlet of a mass spectrometer for analysis. The invention enables efficient separation of desorption and ionization events, providing enhanced control over transport and ionization of the analyte.
Risk analysis for renewable energy projects due to constraints arising
NASA Astrophysics Data System (ADS)
Prostean, G.; Vasar, C.; Prostean, O.; Vartosu, A.
2016-02-01
Starting from the European Union (EU) target of a binding 20% share of renewable energy in final energy consumption by 2020, this article illustrates the identification of risks for the implementation of wind energy projects in Romania, which could have complex technical, social and administrative implications. In the specific projects analyzed in this paper, critical bottlenecks in the future wind power supply chain were identified, together with the time delays that may reasonably arise. Renewable energy technologies face a number of constraints that delay the scaling-up of their production process, their transport process, equipment reliability, etc., so implementing these types of projects requires a complex specialized team, the coordination of which also involves specific risks. The research team applied an analytical risk approach to identify major risks encountered within a wind farm project developed in Romania in isolated regions with different particularities, configured for different geographical areas (hill and mountain locations in Romania). Identification of major risks was based on the conceptual model set up for the entire project implementation process, and the specific constraints of this process were identified throughout that model. Integration risks were examined by an empirical study based on the HAZOP (Hazard and Operability) method. The discussion analyzes our results in the implementation context of renewable energy projects in Romania and creates a framework for assessing energy supply to any entity from renewable sources.
Microprobe monazite geochronology: new techniques for dating deformation and metamorphism
NASA Astrophysics Data System (ADS)
Williams, M.; Jercinovic, M.; Goncalves, P.; Mahan, K.
2003-04-01
High-resolution compositional mapping, age mapping, and precise dating of monazite on the electron microprobe are powerful additions to microstructural and petrologic analysis and important tools for tectonic studies. The in-situ nature and high spatial resolution of the technique offer an entirely new level of structurally and texturally specific geochronologic data that can be used to put absolute time constraints on P-T-D paths, constrain the rates of sedimentary, metamorphic, and deformational processes, and provide new links between metamorphism and deformation. New analytical techniques (including background modeling, sample preparation, and interference analysis) have significantly improved the precision and accuracy of the technique and new mapping and image analysis techniques have increased the efficiency and strengthened the correlation with fabrics and textures. Microprobe geochronology is particularly applicable to three persistent microstructural-microtextural problem areas: (1) constraining the chronology of metamorphic assemblages; (2) constraining the timing of deformational fabrics; and (3) interpreting other geochronological results. In addition, authigenic monazite can be used to date sedimentary basins, and detrital monazite can fingerprint sedimentary source areas, both critical for tectonic analysis. Although some monazite generations can be directly tied to metamorphism or deformation, at present, the most common constraints rely on monazite inclusion relations in porphyroblasts that, in turn, can be tied to the deformation and/or metamorphic history. Examples will be presented from deep-crustal rocks of northern Saskatchewan and from mid-crustal rocks from the southwestern USA. Microprobe monazite geochronology has been used in both regions to deconvolute overprinting deformation and metamorphic events and to clarify the interpretation of other geochronologic data. Microprobe mapping and dating are powerful companions to mass spectroscopic dating techniques. They allow geochronology to be incorporated into the microstructural analytical process, resulting in a new level of integration of time (t) into P-T-D histories.
Kasahara, Kota; Kinoshita, Kengo
2016-01-01
Ion conduction mechanisms of ion channels are a long-standing conundrum. Although the molecular dynamics (MD) method has been extensively used to simulate ion conduction dynamics at the atomic level, analysis and interpretation of MD results are not straightforward due to complexity of the dynamics. In our previous reports, we proposed an analytical method called ion-binding state analysis to scrutinize and summarize ion conduction mechanisms by taking advantage of a variety of analytical protocols, e.g., the complex network analysis, sequence alignment, and hierarchical clustering. This approach effectively revealed the ion conduction mechanisms and their dependence on the conditions, i.e., ion concentration and membrane voltage. Here, we present an easy-to-use computational toolkit for ion-binding state analysis, called IBiSA_tools. This toolkit consists of a C++ program and a series of Python and R scripts. From the trajectory file of MD simulations and a structure file, users can generate several images and statistics of ion conduction processes. A complex network named ion-binding state graph is generated in a standard graph format (graph modeling language; GML), which can be visualized by standard network analyzers such as Cytoscape. As a tutorial, a trajectory of a 50 ns MD simulation of the Kv1.2 channel is also distributed with the toolkit. Users can trace the entire process of ion-binding state analysis step by step. The novel method for analysis of ion conduction mechanisms of ion channels can be easily used by means of IBiSA_tools. This software is distributed under an open source license at the following URL: http://www.ritsumei.ac.jp/~ktkshr/ibisa_tools/.
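Since the toolkit's ion-binding state graph is written in standard GML, it can also be inspected programmatically rather than only in Cytoscape. The sketch below is a hypothetical example: the file name and the "count" attribute are assumptions, so the IBiSA_tools documentation should be consulted for the actual schema.

```python
# Inspect an ion-binding state graph produced by IBiSA_tools (GML format).
# The file name and attribute names are assumptions for illustration.
import networkx as nx

g = nx.read_gml("ion_binding_state_graph.gml")
print(f"{g.number_of_nodes()} ion-binding states, {g.number_of_edges()} transitions")

# Rank states by how often they were visited, if such an attribute is present.
def weight(data):
    return data.get("count", 0)

top_states = sorted(g.nodes(data=True), key=lambda n: weight(n[1]), reverse=True)[:5]
for name, data in top_states:
    print("state:", name, data)
```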
A genetic algorithm-based job scheduling model for big data analytics.
Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei
Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes considerable energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
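A generic genetic-algorithm scheduler can illustrate the kind of optimization this abstract describes. The sketch below assigns jobs to cluster nodes so that the estimated makespan is minimized; it is an illustrative stand-in rather than the paper's model, and the runtime table, node count, and GA settings are all assumptions (the paper's estimation module is replaced here by fixed predicted runtimes).

```python
# Genetic-algorithm sketch for assigning analytics jobs to cluster nodes so that
# the estimated makespan (load of the busiest node) is minimized.
import random

random.seed(1)
job_runtimes = [8, 5, 13, 4, 6, 9, 3, 7]   # predicted runtime of each job (assumed)
num_nodes = 3

def makespan(assignment):
    load = [0.0] * num_nodes
    for job, node in enumerate(assignment):
        load[node] += job_runtimes[job]
    return max(load)

def evolve(pop_size=40, generations=200, mutation_rate=0.1):
    pop = [[random.randrange(num_nodes) for _ in job_runtimes] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(job_runtimes))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < mutation_rate:   # random reassignment mutation
                child[random.randrange(len(child))] = random.randrange(num_nodes)
            children.append(child)
        pop = parents + children
    best = min(pop, key=makespan)
    return best, makespan(best)

best, span = evolve()
print("assignment:", best, "makespan:", span)
```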
Analysis of Slabs-on-Grade for a Variety of Loading and Support Conditions.
1984-12-01
applications, namely the problem of a slab-on-grade, as encountered in the analysis and design of rigid pavements. This is one of the few...proper design and construction methods are adhered to. There are several additional reasons, entirely due to recent developments, that warrant the...conservative designs led to almost imperceptible pavement deformations, thus warranting the term "rigid pavements". Modern-day analytical techniques
Analytical, Characterization, and Stability Studies of Organic Chemical, Drugs, and Drug Formulation
2014-05-21
stability studies was maintained over the entire contract period to ensure the continued integrity of the drug in its clinical use. Because our...facile automation. We demonstrated the method in principle, but were unable to remove the residual t-butanol to <0.5%. With additional research using ...to its use of ethylene oxide for sterilization, which is done in small batches. The generally recognized method of choice to produce a parenteral
Kimura, S; Yamakami-Kimura, M; Obata, Y; Hase, K; Kitamura, H; Ohno, H; Iwanaga, T
2015-05-01
The microfold (M) cell residing in the follicle-associated epithelium is a specialized epithelial cell that initiates mucosal immune responses by sampling luminal antigens. The differentiation process of M cells remains unclear due to limitations of analytical methods. Here we found that M cells were classified into two functionally different subtypes based on the expression of Glycoprotein 2 (GP2) by newly developed image cytometric analysis. GP2-high M cells actively took up luminal microbeads, whereas GP2-negative or low cells scarcely ingested them, even though both subsets equally expressed the other M-cell signature genes, suggesting that GP2-high M cells represent functionally mature M cells. Further, the GP2-high mature M cells were abundant in Peyer's patch but sparse in the cecal patch: this was most likely due to a decrease in the nuclear translocation of RelB, a downstream transcription factor for the receptor activator of nuclear factor-κB signaling. Given that murine cecum contains a protrusion of beneficial commensals, the restriction of M-cell activity might contribute to preventing the onset of any excessive immune response to the commensals through decelerating the M-cell-dependent uptake of microorganisms.
Development of microtitre plates for electrokinetic assays
NASA Astrophysics Data System (ADS)
Burt, J. P. H.; Goater, A. D.; Menachery, A.; Pethig, R.; Rizvi, N. H.
2007-02-01
Electrokinetic processes have wide ranging applications in microsystems technology. Their optimum performance at micro and nano dimensions allows their use both as characterization and diagnostic tools and as a means of general particle manipulation. Within analytical studies, measurement of the electrokinesis of biological cells has the sensitivity and selectivity to distinguish subtle differences between cell types and cells undergoing changes and is gaining acceptance as a diagnostic tool in high throughput screening for drug discovery applications. In this work the development and manufacture of an electrokinetic-based microtitre plate is described. The plate is intended to be compatible with automated sample loading and handling systems. Manufacturing of the microtitre plate, which employs indium tin oxide microelectrodes, has been entirely undertaken using excimer and ultra-fast pulsed laser micromachining due to its flexibility in materials processing and accuracy in microstructuring. Laser micromachining has the ability to rapidly realize iterations in device prototype design while also having the capability to be scaled up for large scale manufacture. Device verification is achieved by the measurement of the electrorotation and dielectrophoretic properties of yeast cells while the flexibility of the developed microtitre plate is demonstrated by the selective separation of live yeast from polystyrene microbeads.
Atmospheric Spray Freeze-Drying: Numerical Modeling and Comparison With Experimental Measurements.
Borges Sebastião, Israel; Robinson, Thomas D; Alexeenko, Alina
2017-01-01
Atmospheric spray freeze-drying (ASFD) represents a novel approach for drying thermosensitive solutions via sublimation. Tests conducted with second-generation ASFD equipment, developed for pharmaceutical applications, have focused initially on producing a light, fine, high-grade powder consistently and reliably. To better understand the heat and mass transfer physics and drying dynamics taking place within the ASFD chamber, three analytical models describing the key processes are developed and validated. First, by coupling the dynamics and heat transfer of single droplets sprayed into the chamber, the velocity, temperature, and phase change evolutions of these droplets are estimated for actual operational conditions. This model reveals that, under typical operational conditions, the sprayed droplets require less than 100 ms to freeze. Second, because understanding the heat transfer throughout the entire freeze-drying process is so important, a theoretical model is proposed to predict the time evolution of the chamber gas temperature. Finally, a drying model, calibrated with hygrometer measurements, is used to estimate the total time required to achieve a predefined final moisture content. Results from these models are compared with experimental data. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
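The sub-100 ms freezing time quoted above can be rationalized with a simple lumped-capacitance estimate for a single sprayed droplet. This is a back-of-the-envelope sketch with assumed droplet size, gas temperature and material properties, not the authors' coupled model.

```python
# Order-of-magnitude estimate of the time for a sprayed water droplet to cool to
# its freezing point and then solidify, using a lumped-capacitance balance with
# Nu ~ 2 (small droplet, slow relative gas motion). All parameter values are
# illustrative assumptions, not the ASFD equipment's operating conditions.
import math

d = 50e-6                                # droplet diameter, m (assumed)
rho, cp, L_f = 1000.0, 4200.0, 3.34e5    # water density, heat capacity, latent heat
k_gas = 0.024                            # chamber gas thermal conductivity, W/(m K)
T0, T_inf, T_f = 20.0, -60.0, 0.0        # initial, gas, and freezing temperatures, C

h = 2.0 * k_gas / d                      # convective coefficient from Nu = h d / k = 2
vol_per_area = d / 6.0                   # V/A for a sphere

t_cool = rho * cp * vol_per_area / h * math.log((T0 - T_inf) / (T_f - T_inf))
t_freeze = rho * L_f * vol_per_area / (h * (T_f - T_inf))

print(f"cooling to 0 C: {1e3 * t_cool:.0f} ms, solidification: {1e3 * t_freeze:.0f} ms")
```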
Waste Bank Revitalization in Palabuhanratu West Java
NASA Astrophysics Data System (ADS)
Samadikun, Budi Prasetyo; Handayani, Dwi Siwi; Laksana, Muhamad Permana
2018-02-01
Palabuhanratu Village has three waste banks, one established in 2010 and the others built in 2016. However, waste processing at the source is still not optimal: it has reduced the waste reaching the final disposal site by only about 5% of the total generated. The performance of the waste banks is still minimal, because one waste bank cannot serve the entire area of the village. Furthermore, the composting of organic waste by some communities of Palabuhanratu Village has not become a mass movement, due to a lack of public knowledge. The purpose of this research is to assess the existing condition of waste management in Palabuhanratu Village and to formulate the revitalization of the existing waste banks. The research used a survey method with questionnaires, in-depth interviews, and observation. Analytical techniques included quantitative and qualitative analysis. The findings indicate that the only residents of Palabuhanratu Village who regularly sort waste at the source are those of RT 01/RW 33. The number of existing temporary waste disposal sites in Palabuhanratu Village is still insufficient, requiring the addition of up to 5 units integrated with the village waste bank.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edlund, Jeffrey A.; Tinto, Massimo; Krolak, Andrzej
LISA (Laser Interferometer Space Antenna) is a proposed space mission, which will use coherent laser beams exchanged between three remote spacecraft to detect and study low-frequency cosmic gravitational radiation. In the low part of its frequency band, the LISA strain sensitivity will be dominated by the incoherent superposition of hundreds of millions of gravitational wave signals radiated by inspiraling white-dwarf binaries present in our own Galaxy. In order to estimate the magnitude of the LISA response to this background, we have simulated a synthesized population that recently appeared in the literature. Our approach relies on entirely analytic expressions of the LISA time-delay interferometric responses to the gravitational radiation emitted by such systems, which allows us to implement a computationally efficient and accurate simulation of the background in the LISA data. We find the amplitude of the galactic white-dwarf binary background in the LISA data to be modulated in time, reaching a minimum equal to about twice that of the LISA noise for a period of about two months around the time when the Sun-LISA direction is roughly oriented towards the Autumn equinox. This suggests that, during this time period, LISA could search for other gravitational wave signals incoming from directions that are away from the galactic plane. Since the galactic white-dwarf background will be observed by LISA not as a stationary but rather as a cyclostationary random process with a period of 1 yr, we summarize the theory of cyclostationary random processes, present the corresponding generalized spectral method needed to characterize such a process, and make a comparison between our analytic results and those obtained by applying our method to the simulated data. We find that, by measuring the generalized spectral components of the white-dwarf background, LISA will be able to infer properties of the distribution of the white-dwarf binary systems present in our Galaxy.
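Because the abstract emphasizes that the background appears as a cyclostationary process with a one-year period, a minimal sketch of estimating cyclic (generalized) spectral components from a time series may help fix ideas. The synthetic amplitude-modulated signal and sampling choices below are assumptions for illustration, not the simulated LISA data or the authors' estimator.

```python
# Estimate cyclic autocorrelation components R(alpha, tau) of a cyclostationary
# series whose statistics repeat with period T0 (here one "year" of samples).
import numpy as np

rng = np.random.default_rng(0)
samples_per_year = 365
n = samples_per_year * 32
t = np.arange(n)

# Synthetic cyclostationary signal: noise whose variance is modulated annually.
x = (1.0 + 0.5 * np.cos(2 * np.pi * t / samples_per_year)) * rng.normal(size=n)

def cyclic_autocorr(x, lag, k, period):
    """R at cyclic frequency alpha = k/period and time lag `lag`."""
    idx = np.arange(len(x) - lag)
    prod = x[idx + lag] * x[idx]
    return np.mean(prod * np.exp(-2j * np.pi * k * idx / period))

for k in range(4):
    r = cyclic_autocorr(x, lag=0, k=k, period=samples_per_year)
    print(f"alpha = {k}/year: |R| = {abs(r):.3f}")
# For this signal, mainly the k = 0 and k = 1 components stand out.
```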
Dobbin, Kevin K; Cesano, Alessandra; Alvarez, John; Hawtin, Rachael; Janetzki, Sylvia; Kirsch, Ilan; Masucci, Giuseppe V; Robbins, Paul B; Selvan, Senthamil R; Streicher, Howard Z; Zhang, Jenny; Butterfield, Lisa H; Thurin, Magdalena
2016-01-01
There is growing recognition that immunotherapy is likely to significantly improve health outcomes for cancer patients in the coming years. Currently, while a subset of patients experience substantial clinical benefit in response to different immunotherapeutic approaches, the majority of patients do not but are still exposed to the significant drug toxicities. Therefore, a growing need for the development and clinical use of predictive biomarkers exists in the field of cancer immunotherapy. Predictive cancer biomarkers can be used to identify the patients who are or who are not likely to derive benefit from specific therapeutic approaches. In order to be applicable in a clinical setting, predictive biomarkers must be carefully shepherded through a step-wise, highly regulated developmental process. Volume I of this two-volume document focused on the pre-analytical and analytical phases of the biomarker development process, by providing background, examples and "good practice" recommendations. In the current Volume II, the focus is on the clinical validation, validation of clinical utility and regulatory considerations for biomarker development. Together, this two volume series is meant to provide guidance on the entire biomarker development process, with a particular focus on the unique aspects of developing immune-based biomarkers. Specifically, knowledge about the challenges to clinical validation of predictive biomarkers, which has been gained from numerous successes and failures in other contexts, will be reviewed together with statistical methodological issues related to bias and overfitting. The different trial designs used for the clinical validation of biomarkers will also be discussed, as the selection of clinical metrics and endpoints becomes critical to establish the clinical utility of the biomarker during the clinical validation phase of the biomarker development. Finally, the regulatory aspects of submission of biomarker assays to the U.S. Food and Drug Administration as well as regulatory considerations in the European Union will be covered.
Atrial Model Development and Prototype Simulations: CRADA Final Report on Tasks 3 and 4
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, T.; Zhang, X.; Villongco, C.
2016-10-28
The goal of this CRADA was to develop essential tools needed to simulate human atrial electrophysiology in three dimensions using an image-based anatomy and a physiologically detailed human cellular model. The atria were modeled as anisotropic, representing the preferentially longitudinal electrical coupling between myocytes. Across the entire anatomy, cellular electrophysiology was heterogeneous, with left and right atrial myocytes defined differently. Left and right cell types for the “control” case of sinus rhythm (SR) were compared with the remodeled electrophysiology and calcium cycling characteristics of chronic atrial fibrillation (cAF). The effects of Isoproterenol (ISO), a beta-adrenergic agonist that represents the functional consequences of PKA phosphorylation of various ion channels and transporters, were also simulated in SR and cAF to represent atrial activity under physical or emotional stress. Results and findings from Tasks 3 & 4 are described. Tasks 3 and 4 are, respectively: Input parameters prepared for a Cardioid simulation; Report including recommendations for additional scenario development and post-processing analytic strategy.
Electron microscopy study of the iron meteorite Santa Catharina
NASA Technical Reports Server (NTRS)
Zhang, J.; Williams, D. B.; Goldstein, J. I.; Clarke, R. S., Jr.
1990-01-01
A characterization of the microstructural features of Santa Catharina (SC) from the millimeter to submicron scale is presented. The same specimen was examined using an optical microscope, a scanning electron microscope, an electron probe microanalyzer, and an analytical electron microscope. Findings include the fact that SC metal nodules may have different bulk Ni values, leading to different microstructures upon cooling; that SC USNM 6293 is the less corroded sample, as tetrataenite exists as less than 10 nm ordered domains throughout the entire fcc matrix (it is noted that this structure is the same as that of the Twin City meteorite and identical to clear taenite II in the retained taenite regions of the octahedrites); that SC USNM 3043 has a more complicated microstructure due to corrosion; and that the low Ni phase of the cloudy zone was selectively corroded in some areas and formed the dark regions, indicating that the SC meteorite corrosion process was electrochemical in nature and may involve Cl-containing akaganeite.
NASA Astrophysics Data System (ADS)
Hidouri, T.; Saidi, F.; Maaref, H.; Rodriguez, Ph.; Auvray, L.
2016-12-01
In this paper, we report on an experimental and theoretical study of a BInGaAs/GaAs single quantum well (SQW) grown by Metal Organic Chemical Vapor Deposition (MOCVD). We measured the temperature dependence of the photoluminescence (PL) peak energy over the range 10-300 K; it shows the S-shaped behavior that results from a competition between localized and delocalized states. We simulate the peak evolution with the empirical model and with a modified model. The first is limited at high temperature; the second introduces a correction for thermal carrier redistribution based on the Localized State Ensemble (LSE) model. The new fit gives good agreement between theoretical and experimental data over the entire temperature range. Furthermore, we have derived approximate analytical expressions and an interpretation for the entropy and enthalpy of formation of electron-hole pairs in the quaternary BInGaAs/GaAs SQW.
Roussi, Pagona; Sherman, Kerry A; Miller, Suzanne M; Hurley, Karen; Daly, Mary B; Godwin, Andrew; Buzaglo, Joanne S; Wen, Kuang-Yi
2011-10-01
Based on the cognitive-social health information processing model, we identified cognitive profiles of women at risk for breast and ovarian cancer. Prior to genetic counselling, participants (N = 171) completed a study questionnaire concerning their cognitive and affective responses to being at genetic risk. Using cluster analysis, four cognitive profiles were generated: (a) high perceived risk/low coping; (b) low value of screening/high expectancy of cancer; (c) moderate perceived risk/moderate efficacy of prevention/low informativeness of test result; and (d) high efficacy of prevention/high coping. The majority of women in Clusters One, Two and Three had no personal history of cancer, whereas Cluster Four consisted almost entirely of women affected with cancer. Women in Cluster One had the highest number of affected relatives and experienced higher levels of distress than women in the other three clusters. These results highlight the need to consider the psychological profile of women undergoing genetic testing when designing counselling interventions and messages.
An internally consistent gamma ray burst time history phenomenology
NASA Technical Reports Server (NTRS)
Cline, T. L.
1985-01-01
A phenomenology for gamma ray burst time histories is outlined. An ordering of their generally chaotic appearance is attempted, based on the speculation that any one burst event can be represented above 150 keV as a superposition of similarly shaped increases of varying intensity. The increases can generally overlap, however, confusing the picture, but a given event must at least exhibit its own limiting characteristic rise and decay times if the measurements are made with instruments having adequate temporal resolution. Most catalogued observations may be of doubtful or marginal utility for testing this hypothesis, but some time histories from Helios-2, Pioneer Venus Orbiter and other instruments having one- to several-millisecond capabilities appear to provide consistency. Also, recent studies of temporally resolved Solar Maximum Mission burst energy spectra are entirely compatible with this picture. The phenomenology suggested here, if correct, may assist as an analytic tool for modelling of burst processes and possibly in the definition of burst source populations.
Control of polarization rotation in nonlinear propagation of fully structured light
NASA Astrophysics Data System (ADS)
Gibson, Christopher J.; Bevington, Patrick; Oppo, Gian-Luca; Yao, Alison M.
2018-03-01
Knowing and controlling the spatial polarization distribution of a beam is of importance in applications such as optical tweezing, imaging, material processing, and communications. Here we show how the polarization distribution is affected by both linear and nonlinear (self-focusing) propagation. We derive an analytical expression for the polarization rotation of fully structured light (FSL) beams during linear propagation and show that the observed rotation is due entirely to the difference in Gouy phase between the two eigenmodes comprising the FSL beams, in excellent agreement with numerical simulations. We also explore the effect of cross-phase modulation due to a self-focusing (Kerr) nonlinearity and show that polarization rotation can be controlled by changing the eigenmodes of the superposition, and physical parameters such as the beam size, the amount of Kerr nonlinearity, and the input power. Finally, we show that by biasing cylindrical vector beams to have elliptical polarization, we can vary the polarization state from radial through spiral to azimuthal using nonlinear propagation.
Meta-Analysis for Sociology – A Measure-Driven Approach
Roelfs, David J.; Shor, Eran; Falzon, Louise; Davidson, Karina W.; Schwartz, Joseph E.
2013-01-01
Meta-analytic methods are becoming increasingly important in sociological research. In this article we present an approach for meta-analysis which is especially helpful for sociologists. Conventional approaches to meta-analysis often prioritize “concept-driven” literature searches. However, in disciplines with high theoretical diversity, such as sociology, this search approach might constrain the researcher’s ability to fully exploit the entire body of relevant work. We explicate a “measure-driven” approach, in which iterative searches and new computerized search techniques are used to increase the range of publications found (and thus the range of possible analyses) and to traverse time and disciplinary boundaries. We demonstrate this measure-driven search approach with two meta-analytic projects, examining the effects of various social variables on all-cause mortality. PMID:24163498
Recent progress in plasmonic colour filters for image sensor and multispectral applications
NASA Astrophysics Data System (ADS)
Pinton, Nadia; Grant, James; Choubey, Bhaskar; Cumming, David; Collins, Steve
2016-04-01
Using nanostructured thin metal films as colour filters offers several important advantages, in particular high tunability across the entire visible spectrum and some of the infrared region, and also compatibility with conventional CMOS processes. Since 2003, the field of plasmonic colour filters has evolved rapidly and several different designs and materials, or combinations of materials, have been proposed and studied. In this paper we present a simulation study of a single-step lithographically patterned multilayer structure able to provide competitive transmission efficiencies above 40% and, simultaneously, a FWHM of the order of 30 nm across the visible spectrum. The total thickness of the proposed filters is less than 200 nm and is constant for every wavelength - unlike, e.g., resonant cavity-based filters such as Fabry-Perot, which require a stack of several layers that varies with the working frequency - and their passband characteristics are entirely controlled by changing the lithographic pattern. It will also be shown that a key to obtaining a narrow-band optical response lies in the dielectric environment of a nanostructure and that it is not necessary to have a symmetric structure to ensure good coupling between the SPPs at the top and bottom interfaces. Moreover, an analytical method to evaluate the periodicity, given a specific structure and a desired working wavelength, will be proposed and its accuracy demonstrated. This method conveniently eliminates the need to optimize the design of a filter numerically, i.e. by running several time-consuming simulations with different periodicities.
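Periodicity estimates of this kind are commonly obtained from the surface-plasmon-polariton phase-matching condition at normal incidence. The sketch below uses that textbook relation with an assumed Drude model for the metal, which may well differ from the paper's analytical method and material data; it is intended only to show how a period can be tied to a target wavelength.

```python
# First-order estimate of the grating period P that couples normally incident
# light of free-space wavelength lam0 to an SPP at a metal/dielectric interface:
#   P = lam0 / Re(n_spp),  n_spp = sqrt(eps_m * eps_d / (eps_m + eps_d)).
# The Drude parameters and the dielectric index are illustrative assumptions.
import numpy as np

def eps_drude(lam0_nm, lam_p_nm=100.0, lam_c_nm=24000.0):
    """Simple Drude permittivity; plasma/collision wavelengths are illustrative."""
    return 1.0 - (lam0_nm / lam_p_nm) ** 2 / (1.0 + 1j * lam0_nm / lam_c_nm)

def grating_period(lam0_nm, eps_d=2.25):          # eps_d ~ SiO2-like dielectric
    eps_m = eps_drude(lam0_nm)
    n_spp = np.sqrt(eps_m * eps_d / (eps_m + eps_d))
    return lam0_nm / n_spp.real

for lam0 in (450, 550, 650):                      # blue, green, red targets
    print(f"lambda0 = {lam0} nm -> period ~ {grating_period(lam0):.0f} nm")
```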
NASA Astrophysics Data System (ADS)
Purss, M. B.; Lewis, A.; Ip, A.; Evans, B.
2013-12-01
The next decade promises an exponential increase in volumes of open data from Earth observing satellites. The ESA Sentinels, the Japan Meteorological Agency's Himawari 8/9 geostationary satellites, various NASA missions, and of course the many EO satellites planned from China, will produce petabyte-scale datasets of national and global significance. It is vital that we develop new ways of managing, accessing and using this 'big data' from satellites, to produce value-added information within realistic timeframes. A paradigm shift is required away from traditional 'scene-based' (and labour-intensive) approaches with data storage and delivery for processing at local sites, to emerging High Performance Data (HPD) models where the data are organised and co-located with High Performance Computational (HPC) infrastructures in a way that enables users to bring themselves, their algorithms and the HPC processing power to the data. Automated workflows that allow the entire archive of data to be rapidly reprocessed from raw data to fully calibrated products are a crucial requirement for the effective stewardship of these datasets. New concepts such as arranging and viewing data as 'data objects', which underpin the delivery of 'information as a service', are also integral to realising the transition into HPD analytics. As Australia's national remote sensing and geoscience agency, Geoscience Australia faces a pressing need to solve the problems of 'big data', in particular around the 25-year archive of calibrated Landsat data. The challenge is to ensure standardised information can be extracted from the entire archive and applied to nationally significant problems in hazards, water management, land management, resource development and the environment. Ultimately, these uses justify government investment in these unique systems. A key challenge was how best to organise the archive of calibrated Landsat data (estimated to grow to almost 1 PB by the end of 2014) in a way that supports HPD applications yet retains the ability to trace each observation (pixel) back to its original satellite acquisition. The approach taken was to develop a multi-dimensional array (a data cube) underpinned by partitioning the data into tiles, without any temporal aggregation. This allows for flexible spatio-temporal queries of the archive whilst minimising the need to perform geospatial processing just to locate the pixels of interest. Equally important is the development and implementation of international data interoperability standards (such as OGC web services and ISO metadata standards) that will provide advanced access for users to interact with and query the data cube without needing to download any data or to go through specialised data portals. This new approach will vastly improve access to, and the impact of, Australia's Landsat archive holdings.
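As a concrete illustration of the tiling idea described above, the sketch below maps a geographic coordinate to a 1° x 1° tile index and a pixel offset within that tile. The 1° tile size and 4000 x 4000-pixel (~25 m) grid are assumptions chosen for illustration, not necessarily the production data cube's exact parameters.

```python
# Map a (longitude, latitude) coordinate to a tile index and pixel offset in a
# simple 1-degree tiling scheme with 4000 x 4000 pixels per tile (~25 m).
# These grid parameters are illustrative assumptions, not the operational ones.
PIXELS_PER_DEGREE = 4000

def tile_and_pixel(lon: float, lat: float):
    tile = (int(lon // 1), int(lat // 1))          # south-west corner of the tile
    frac_x = lon - tile[0]
    frac_y = lat - tile[1]
    col = int(frac_x * PIXELS_PER_DEGREE)
    row = int((1.0 - frac_y) * PIXELS_PER_DEGREE)  # row 0 at the tile's north edge
    return tile, (row, col)

tile, (row, col) = tile_and_pixel(149.1300, -35.2809)   # e.g. near Canberra
print(f"tile {tile}, pixel (row={row}, col={col})")
```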
Data Provenance in Photogrammetry Through Documentation Protocols
NASA Astrophysics Data System (ADS)
Carboni, N.; Bruseker, G.; Guillem, A.; Bellido Castañeda, D.; Coughenour, C.; Domajnko, M.; de Kramer, M.; Ramos Calles, M. M.; Stathopoulou, E. K.; Suma, R.
2016-06-01
Documenting the relevant aspects in digitisation processes such as photogrammetry in order to provide a robust provenance for their products continues to present a challenge. The creation of a product that can be re-used scientifically requires a framework for consistent, standardised documentation of the entire digitisation pipeline. This article provides an analysis of the problems inherent to such goals and presents a series of protocols to document the various steps of a photogrammetric workflow. We propose this pipeline, with descriptors to track all phases of digital product creation in order to assure data provenance and enable the validation of the operations from an analytic and production perspective. The approach aims to support adopters of the workflow to define procedures with a long term perspective. The conceptual schema we present is founded on an analysis of information and actor exchanges in the digitisation process. The metadata were defined through the synthesis of previous proposals in this area and were tested on a case study. We performed the digitisation of a set of cultural heritage artefacts from an Iron Age burial in Ilmendorf, Germany. The objects were captured and processed using different techniques, including a comparison of different imaging tools and algorithms. This augmented the complexity of the process allowing us to test the flexibility of the schema for documenting complex scenarios. Although we have only presented a photogrammetry digitisation scenario, we claim that our schema is easily applicable to a multitude of 3D documentation processes.
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
NASA Technical Reports Server (NTRS)
Graf, John
2015-01-01
NASA has been developing and testing two different types of oxygen separation systems. One type uses pressure-swing technology; the other uses a solid-electrolyte electrochemical oxygen separation cell. Both development systems have been subjected to long-term testing and to performance testing under a variety of environmental and operational conditions. Testing these two systems revealed that measuring the product purity of oxygen, and determining whether an oxygen separation device meets Aviator's Breathing Oxygen (ABO) specifications, is a subtle and sometimes difficult analytical chemistry job. Verifying the product purity of cryogenically produced oxygen presents a different set of analytical chemistry challenges. This presentation will describe some of the sample acquisition and analytical chemistry challenges presented by verifying oxygen produced by an oxygen separator, and by verifying oxygen produced by cryogenic separation processes. The primary contaminant that causes gas samples to fail to meet ABO requirements is water; the maximum amount of water vapor allowed is 7 ppmv. The principal challenge of verifying oxygen produced by an oxygen separator is that it is produced relatively slowly, and at comparatively low temperatures. A short-term failure that occurs for just a few minutes in the course of a one-week run could cause an entire tank to be rejected. Continuous monitoring of oxygen purity and water vapor could identify problems as soon as they occur. Long-term oxygen separator tests were instrumented with an oxygen analyzer and with a hygrometer: a GE Moisture Monitor Series 35. This hygrometer uses an aluminum oxide sensor. The user's manual does not report this, but long-term exposure to pure oxygen causes the aluminum oxide sensor head to bias dry. Oxygen product that exceeded the 7 ppm specification was improperly accepted, because the sensor had biased. The bias is permanent - exposure to air does not cause the sensor to return to its original response - but the bias can be accounted for by recalibrating the sensor. After this issue was found, continuous measurements of water vapor in the oxygen product were made using an FTIR. The FTIR cell is relatively large, so response time is slow, but moisture measurements were repeatable and accurate. Verifying ABO compliance for oxygen produced by commercial cryogenic processes has a different set of sample acquisition and analytical chemistry challenges. Customers want analytical chemists to conserve as much of the product as possible. Hygrometers are not exposed to hours of continuous oxygen flow, so they don't bias, but small amounts of contamination in valves can cause a "fail". K-bottles are periodically cleaned and recertified; after cleaning, residual moisture can cause a "fail". If operators let the bottle pressure drop to room pressure, outside air is introduced into the bottle and the subsequent fill will "fail". Outdoor storage of K-bottles has allowed enough in-leakage that the contents will "fail".
Wong, Quincy J J; Moulds, Michelle L
2012-12-01
Evidence from the depression literature suggests that an analytical processing mode adopted during repetitive thinking leads to maladaptive outcomes relative to an experiential processing mode. To date, in socially anxious individuals, the impact of processing mode during repetitive thinking related to an actual social-evaluative situation has not been investigated. We thus tested whether an analytical processing mode would be maladaptive relative to an experiential processing mode during anticipatory processing and post-event rumination. High and low socially anxious participants were induced to engage in either an analytical or experiential processing mode during: (a) anticipatory processing before performing a speech (Experiment 1; N = 94), or (b) post-event rumination after performing a speech (Experiment 2; N = 74). Mood, cognition, and behavioural measures were employed to examine the effects of processing mode. For high socially anxious participants, the modes had a similar effect on self-reported anxiety during both anticipatory processing and post-event rumination. Unexpectedly, relative to the analytical mode, the experiential mode led to stronger high standard and conditional beliefs during anticipatory processing, and stronger unconditional beliefs during post-event rumination. These experiments are the first to investigate processing mode during anticipatory processing and post-event rumination. Hence, these results are novel and will need to be replicated. These findings suggest that an experiential processing mode is maladaptive relative to an analytical processing mode during repetitive thinking characteristic of socially anxious individuals. Copyright © 2012 Elsevier Ltd. All rights reserved.
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of the text processing engine used in Visual Analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
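A minimal sketch of the kind of data-parallel text processing this describes - splitting a corpus across worker processes and merging partial term counts - is shown below. The corpus, chunking strategy, and pool size are assumptions; the actual engine and its parallelization details are not reproduced here.

```python
# Data-parallel term counting: each worker processes a slice of documents and the
# partial counts are merged, the basic pattern behind near-linear scaling on
# clusters. The in-memory toy corpus stands in for a multi-gigabyte collection.
from collections import Counter
from multiprocessing import Pool

def count_terms(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

def parallel_term_counts(docs, workers=4):
    chunks = [docs[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(count_terms, chunks)
    total = Counter()
    for part in partials:
        total.update(part)
    return total

if __name__ == "__main__":
    corpus = ["visual analytics of text", "scalable text processing", "text engines scale"]
    print(parallel_term_counts(corpus, workers=2).most_common(3))
```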
NRL 1989 Beam Propagation Studies in Support of the ATA Multi-Pulse Propagation Experiment
1990-08-31
papers presented here were all written prior to the completion of the experiment. The first of these papers presents simulation results which modeled ...beam stability and channel evolution for an entire five pulse burst. The second paper describes a new air chemistry model used in the SARLAC...Experiment: A new air chemistry model for use in the propagation codes simulating the MPPE was developed by making analytic fits to benchmark runs with
Jaskolla, Thorsten W; Karas, Michael
2011-06-01
This work experimentally verifies and proves the two long since postulated matrix-assisted laser desorption/ionization (MALDI) analyte protonation pathways known as the Lucky Survivor and the gas phase protonation model. Experimental differentiation between the predicted mechanisms becomes possible by the use of deuterated matrix esters as MALDI matrices, which are stable under typical sample preparation conditions and generate deuteronated reagent ions, including the deuterated and deuteronated free matrix acid, only upon laser irradiation in the MALDI process. While the generation of deuteronated analyte ions proves the gas phase protonation model, the detection of protonated analytes by application of deuterated matrix compounds without acidic hydrogens proves the survival of analytes precharged from solution in accordance with the predictions from the Lucky Survivor model. The observed ratio of the two analyte ionization processes depends on the applied experimental parameters as well as the nature of analyte and matrix. Increasing laser fluences and lower matrix proton affinities favor gas phase protonation, whereas more quantitative analyte protonation in solution and intramolecular ion stabilization leads to more Lucky Survivors. The presented results allow for a deeper understanding of the fundamental processes causing analyte ionization in MALDI and may alleviate future efforts for increasing the analyte ion yield.
Kaabia, Z; Dervilly-Pinel, G; Popot, M A; Bailly-Chouriberry, L; Plou, P; Bonnaire, Y; Le Bizec, B
2014-04-01
Nandrolone (17β-hydroxy-4-estren-3-one) is amongst the most misused endogenous steroid hormones in entire male horses. The detection of such a substance is challenging because it is also present endogenously. The current international threshold level for nandrolone misuse is based on the urinary concentration ratio of 5α-estrane-3β,17α-diol (EAD) to 5(10)-estrene-3β,17α-diol (EED). This ratio, however, can be influenced by a number of factors owing to intra- and inter-individual variability, i.e., respectively, the variation in endogenous steroid concentration levels within a single subject and the variation in those same levels between different subjects. Targeting efficient detection of nandrolone misuse in entire male horses, an analytical strategy was set up in order to profile a group of endogenous steroids in nandrolone-treated and non-treated equines. Experimental plasma and urine samples were collected regularly over more than three months from a stallion administered with nandrolone laurate (1 mg/kg). Control plasma and urine samples were collected monthly from seven non-treated stallions over a one-year period. A large panel of steroids of interest (n = 23) were extracted from equine urine and plasma samples using a C18 cartridge. Following a methanolysis step, liquid-liquid and solid-phase extraction purifications were performed before derivatization and analysis by gas chromatography-tandem mass spectrometry (GC-MS/MS) for quantification. Statistical processing of the collected data made it possible to establish statistical models capable of discriminating control samples from those collected during the three months following administration. Furthermore, these statistical models succeeded in predicting the compliance status of additional samples collected from racing horses. Copyright © 2013 John Wiley & Sons, Ltd.
Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity
Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.
2010-01-01
An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183
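The depth-averaging that an efficiency factor of this kind performs can be illustrated numerically: the concentration-dependent piezoresistive response is weighted by the local carrier concentration (as a proxy for conductivity) and by the linear bending-stress profile through the cantilever thickness. The Gaussian dopant profile, the empirical piezoresistance-factor roll-off, and the dimensions below are all assumptions, and this is one plausible formulation rather than the paper's exact model; the paper's lookup table should be used for real designs.

```python
# Numerical sketch of a depth-averaged "efficiency factor" for a diffused
# piezoresistor in a bending cantilever. All profiles and constants are
# illustrative assumptions.
import numpy as np

t = 2.0e-6                    # cantilever thickness, m (assumed)
z = np.linspace(0.0, t, 2001) # depth from the top surface

# Assumed Gaussian dopant profile after drive-in (peak at the surface).
N_peak, junction_depth = 1e19, 0.4e-6           # cm^-3, m
N = N_peak * np.exp(-(z / junction_depth) ** 2)

def piezo_factor(N, Nb=6e19, alpha=0.9):
    """Assumed empirical roll-off of the piezoresistance factor with doping."""
    return 1.0 / (1.0 + (N / Nb) ** alpha)

stress_profile = 1.0 - 2.0 * z / t              # normalized bending stress

efficiency = np.trapz(N * piezo_factor(N) * stress_profile, z) / np.trapz(N, z)
print(f"depth-averaged efficiency factor ~ {efficiency:.2f}")
```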
An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application
ERIC Educational Resources Information Center
Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth
2016-01-01
Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…
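For readers unfamiliar with the mechanics behind the AHP step mentioned above, a small sketch of deriving criteria weights from a pairwise comparison matrix (principal-eigenvector method with Saaty's consistency check) follows. The three criteria and the judgement values are assumptions for illustration, not the paper's data.

```python
# AHP weight derivation: given a reciprocal pairwise comparison matrix on
# Saaty's 1-9 scale, the priority weights are the normalized principal
# eigenvector, and the consistency ratio CR flags incoherent judgements.
import numpy as np

criteria = ["students' achievement", "teaching quality", "leadership"]
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.9, 5: 1.12}[n]   # Saaty's random index
cr = ci / ri

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio CR = {cr:.3f} (CR < 0.1 is conventionally acceptable)")
```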
ERIC Educational Resources Information Center
Follette, William C.; Bonow, Jordan T.
2009-01-01
Whether explicitly acknowledged or not, behavior-analytic principles are at the heart of most, if not all, empirically supported therapies. However, the change process in psychotherapy is only now being rigorously studied. Functional analytic psychotherapy (FAP; Kohlenberg & Tsai, 1991; Tsai et al., 2009) explicitly identifies behavioral-change…
Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J
2001-08-01
The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that, contrary to prediction, strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. Data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy and challenge simple models of brain asymmetries for such procedures. Copyright 2001 Academic Press.
NASA Astrophysics Data System (ADS)
Messaris, Gerasimos A. T.; Hadjinicolaou, Maria; Karahalios, George T.
2016-08-01
The present work is motivated by the fact that blood flow in the aorta and the main arteries is governed by large finite values of the Womersley number α and for such values of α there is not any analytical solution in the literature. The existing numerical solutions, although accurate, give limited information about the factors that affect the flow, whereas an analytical approach has an advantage in that it can provide physical insight to the flow mechanism. Having this in mind, we seek analytical solution to the equations of the fluid flow driven by a sinusoidal pressure gradient in a slightly curved pipe of circular cross section when the Womersley number varies from small finite to infinite values. Initially the equations of motion are expanded in terms of the curvature ratio δ and the resulting linearized equations are solved analytically in two ways. In the first, we match the solution for the main core to that for the Stokes boundary layer. This solution is valid for very large values of α. In the second, we derive a straightforward single solution valid to the entire flow region and for 8 ≤ α < ∞, a range which includes the values of α that refer to the physiological flows. Each solution contains expressions for the axial velocity, the stream function, and the wall stresses and is compared to the analogous forms presented in other studies. The two solutions give identical results to each other regarding the axial flow but differ in the secondary flow and the circumferential wall stress, due to the approximations employed in the matched asymptotic expansion process. The results on the stream function from the second solution are in agreement with analogous results from other numerical solutions. The second solution predicts that the atherosclerotic plaques may develop in any location around the cross section of the aortic wall unlike to the prescribed locations predicted by the first solution. In addition, it gives circumferential wall stresses augmented by approximately 100% with respect to the matched asymptotic expansions, a factor that may contribute jointly with other pathological factors to the faster aging of the arterial system and the possible malfunction of the aorta.
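For orientation, the leading-order (zeroth order in the curvature ratio δ) problem in such expansions is the classical straight-pipe oscillatory flow; in our own notation (pressure-gradient amplitude K, pipe radius a, kinematic viscosity ν, density ρ, and Womersley number α), its textbook solution reads

$$ w_0(r,t)=\operatorname{Re}\left\{\frac{K}{i\rho\omega}\left[1-\frac{J_0\!\left(i^{3/2}\alpha\,r/a\right)}{J_0\!\left(i^{3/2}\alpha\right)}\right]e^{i\omega t}\right\},\qquad \alpha=a\sqrt{\omega/\nu}, $$

where the driving pressure gradient is written as −∂p/∂z = K e^{iωt}; presumably the curvature corrections derived in the paper are built on top of this profile.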
NASA Technical Reports Server (NTRS)
Choudhari, Meelan
1992-01-01
Acoustic receptivity of a Blasius boundary layer in the presence of distributed surface irregularities is investigated analytically. It is shown that, out of the entire spatial spectrum of the surface irregularities, only a small band of Fourier components can lead to an efficient conversion of the acoustic input at any given frequency to an unstable eigenmode of the boundary layer flow. The location, and width, of this most receptive band of wavenumbers corresponds to a relative detuning of O(R sub l.b.(exp -3/8)) with respect to the lower-neutral instability wavenumber at the frequency under consideration, R sub l.b. being the Reynolds number based on a typical boundary-layer thickness at the lower branch of the neutral stability curve. Surface imperfections in the form of discrete mode waviness in this range of wavenumbers lead to initial instability amplitudes which are O(R sub l.b.(exp 3/8)) larger than those caused by a single, isolated roughness element. In contrast, irregularities with a continuous spatial spectrum produce much smaller instability amplitudes, even compared to the isolated case, since the increase due to the resonant nature of the response is more than that compensated for by the asymptotically small band-width of the receptivity process. Analytical expressions for the maximum possible instability amplitudes, as well as their expectation for an ensemble of statistically irregular surfaces with random phase distributions, are also presented.
Analytic Wave Functions for the Half-Filled Lowest Landau Level
NASA Astrophysics Data System (ADS)
Ciftja, Orion
We consider a two-dimensional strongly correlated electronic system in a strong perpendicular magnetic field at half-filling of the lowest Landau level (LLL). We seek to build a wave function that, by construction, lies entirely in the Hilbert space of the LLL. Quite generally, a wave function of this nature can be built as a linear combination of all possible Slater determinants formed by using the complete set of single-electron states that belong to the LLL. However, due to the vast number of Slater determinant states required to form such basis functions, the expansion is impractical for any but the smallest systems. Thus, in practice, the expansion must be truncated to a small number of Slater determinants. Among many possible LLL Slater determinant states, we note a particular special class of such wave functions in which electrons occupy either only even, or only odd angular momentum states. We focus on such a class of wave functions and obtain analytic expressions for various quantities of interest. Results seem to suggest that these special wave functions, while interesting and physically appealing, are unlikely to be a very good approximation for the exact ground state at half-filling factor. The overall quality of the description can be improved by including other additional LLL Slater determinant states. It is during this process that we identify another special family of suitable LLL Slater determinant states to be used in an enlarged expansion.
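For readers who want the explicit basis referred to above, the single-electron LLL orbitals in the symmetric gauge are, with the magnetic length set to one and the usual convention z = x − iy (our notation, not necessarily the paper's),

$$ \varphi_m(z)=\frac{z^{m}\,e^{-|z|^{2}/4}}{\sqrt{2\pi\,2^{m}\,m!}},\qquad m=0,1,2,\dots, $$

so the special class of determinants mentioned in the abstract corresponds to restricting the occupied angular momenta m to even values only, or to odd values only.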
Urine and oral fluid drug testing in support of pain management.
Kwong, Tai C; Magnani, Barbarajean; Moore, Christine
2017-09-01
In recent years, the abuse of opioid drugs has resulted in greater prevalence of addiction, overdose, and deaths attributable to opioid abuse. The epidemic of opioid abuse has prompted professional and government agencies to issue practice guidelines for prescribing opioids to manage chronic pain. An important tool available to providers is the drug test for use in the initial assessment of patients for possible opioid therapy, subsequent monitoring of compliance, and documentation of suspected aberrant drug behaviors. This review discusses the issues that most affect the clinical utility of drug testing in chronic pain management with opioid therapy. It focuses on the two most commonly used specimen matrices in drug testing: urine and oral fluid. The advantages and disadvantages of urine and oral fluid in the entire testing process, from specimen collection and analytical methodologies to result interpretation are reviewed. The analytical sensitivity and specificity limitations of immunoassays used for testing are examined in detail to draw attention to how these shortcomings can affect result interpretation and influence clinical decision-making in pain management. The need for specific identification and quantitative measurement of the drugs and metabolites present to investigate suspected aberrant drug behavior or unexpected positive results is analyzed. Also presented are recent developments in optimization of test menus and testing strategies, such as the modification of the standard screen and reflexed-confirmation testing model by eliminating some of the initial immunoassay-based tests and proceeding directly to definitive testing by mass spectrometry assays.
Design of analytical failure detection using secondary observers
NASA Technical Reports Server (NTRS)
Sisar, M.
1982-01-01
The problem of designing analytical failure-detection systems (FDS) for sensors and actuators, using observers, is addressed. The use of observers in FDS is related to the examination of the n-dimensional observer error vector which carries the necessary information on possible failures. The problem is that in practical systems, in which only some of the components of the state vector are measured, one has access only to the m-dimensional observer-output error vector, with m ≤ n. In order to cope with these cases, a secondary observer is synthesized to reconstruct the entire observer-error vector from the observer output error vector. This approach leads toward the design of highly sensitive and reliable FDS, with the possibility of obtaining a unique fingerprint for every possible failure. In order to keep the observer's (or Kalman filter) false-alarm rate under a certain specified value, it is necessary to have an acceptable matching between the observer (or Kalman filter) models and the system parameters. A previously developed adaptive observer algorithm is used to maintain the desired system-observer model matching, despite initial mismatching or system parameter variations. Conditions for convergence for the adaptive process are obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors, while accurate and fast parameter identification, in both deterministic and stochastic cases, is obtained.
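Restated in standard observer notation (our symbols, not the report's), and ignoring failures for the moment: for a plant ẋ = Ax + Bu, y = Cx with a primary observer gain L, the n-dimensional error e = x − x̂ and the measurable m-dimensional output error ε (m ≤ n) obey

$$ \dot e=(A-LC)\,e,\qquad \varepsilon=Ce,\qquad \dot{\hat e}=(A-LC)\,\hat e+L_2\!\left(\varepsilon-C\hat e\right), $$

where the last equation is a secondary observer that reconstructs the full error vector e from ε; failures then appear as unknown inputs perturbing these dynamics and leaving a characteristic signature in the reconstructed error.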
Contractor, Pritesh; Kurani, Hemal; Guttikar, Swati; Shrivastav, Pranav S
2013-09-01
An accurate and precise method was developed and validated using LC-MS/MS to quantify dutasteride in human plasma. The analyte and dutasteride-13C6 as internal standard (IS) were extracted from 300 μL plasma volume using methyl tert-butyl ether-n-hexane (80:20, v/v). Chromatographic analysis was performed on a Gemini C18 (150 × 4.6 mm, 5 µm) column using acetonitrile-5 mm ammonium formate, pH adjusted to 4.0 with formic acid (85:15, v/v) as the mobile phase. Tandem mass spectrometry in positive ionization mode was used to quantify dutasteride by multiple reaction monitoring. The entire data processing was done using Watson LIMS(TM) software, which provided excellent data integrity and high throughput with improved operational efficiency. The calibration curve was linear in the range of 0.1-25 ng/mL, with intra-and inter-batch values for accuracy and precision (coefficient of variation) ranging from 95.8 to 104.0 and from 0.7 to 5.3%, respectively. The mean overall recovery across quality controls was ≥95% for the analyte and IS, while the interference of matrix expressed as IS-normalized matrix factors ranged from 1.01 to 1.02. The method was successfully applied to support a bioequivalence study of 0.5 mg dutasteride capsules in 24 healthy subjects. Assay reproducibility was demonstrated by reanalysis of 103 incurred samples. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Green, H. D.; Contractor, N. S.; Yao, Y.
2006-12-01
A knowledge network is a multi-dimensional network created from the interactions and interconnections among the scientists, documents, data, analytic tools, and interactive collaboration spaces (like forums and wikis) associated with a collaborative environment. CI-KNOW is a suite of software tools that leverages automated data collection, social network theories, analysis techniques and algorithms to infer an individual's interests and expertise based on their interactions and activities within a knowledge network. The CI-KNOW recommender system mines the knowledge network associated with a scientific community's use of cyberinfrastructure tools and uses relational metadata to record connections among entities in the knowledge network. Recent developments in social network theories and methods provide the backbone for a modular system that creates recommendations from relational metadata. A network navigation portlet allows users to locate colleagues, documents, data or analytic tools in the knowledge network and to explore their networks through a visual, step-wise process. An internal auditing portlet offers administrators diagnostics to assess the growth and health of the entire knowledge network. The first instantiation of the prototype CI-KNOW system is part of the Environmental Cyberinfrastructure Demonstration project at the National Center for Supercomputing Applications, which supports the activities of hydrologic and environmental science communities (CLEANER and CUAHSI) under the umbrella of the WATERS network environmental observatory planning activities (http://cleaner.ncsa.uiuc.edu). This poster summarizes the key aspects of the CI-KNOW system, highlighting the key inputs, calculation mechanisms, and output modalities.
NASA Astrophysics Data System (ADS)
Drieniková, Katarína; Hrdinová, Gabriela; Naňo, Tomáš; Sakál, Peter
2010-01-01
The paper deals with the analysis of the theory of corporate social responsibility, risk management, and the exact method of the analytic hierarchy process that is used in decision-making processes. Chapters 2 and 3 focus on presenting the experience with the application of the method in formulating the stakeholders' strategic goals within Corporate Social Responsibility (CSR) and, simultaneously, its utilization in minimizing environmental risks. The major benefit of this paper is the application of the Analytic Hierarchy Process (AHP).
Analysis of mode-locked and intracavity frequency-doubled Nd:YAG laser
NASA Technical Reports Server (NTRS)
Siegman, A. E.; Heritier, J.-M.
1980-01-01
The paper presents analytical and computer studies of the CW mode-locked and intracavity frequency-doubled Nd:YAG laser which provide new insight into the operation, including the detuning behavior, of this type of laser. Computer solutions show that the steady-state pulse shape for this laser is much closer to a truncated cosine than to a Gaussian; there is little spectral broadening for on-resonance operation; and the chirp is negligible. This leads to a simplified analytical model carried out entirely in the time domain, with atomic linewidth effects ignored. Simple analytical results for on-resonance pulse shape, pulse width, signal intensity, and harmonic conversion efficiency in terms of basic laser parameters are derived from this model. A simplified physical description of the detuning behavior is also developed. Agreement is found with experimental studies showing that the pulsewidth decreases as the modulation frequency is detuned off resonance; the harmonic power output initially increases and then decreases; and the pulse shape develops a sharp-edged asymmetry of opposite sense for opposite signs of detuning.
A Review of Current Methods for Analysis of Mycotoxins in Herbal Medicines
Zhang, Lei; Dou, Xiao-Wen; Zhang, Cheng; Logrieco, Antonio F.; Yang, Mei-Hua
2018-01-01
The presence of mycotoxins in herbal medicines is an established problem throughout the entire world. The sensitive and accurate analysis of mycotoxin in complicated matrices (e.g., herbs) typically involves challenging sample pretreatment procedures and an efficient detection instrument. However, although numerous reviews have been published regarding the occurrence of mycotoxins in herbal medicines, few of them provided a detailed summary of related analytical methods for mycotoxin determination. This review focuses on analytical techniques including sampling, extraction, cleanup, and detection for mycotoxin determination in herbal medicines established within the past ten years. Dedicated sections of this article address the significant developments in sample preparation, and highlight the importance of this procedure in the analytical technology. This review also summarizes conventional chromatographic techniques for mycotoxin qualification or quantitation, as well as recent studies regarding the development and application of screening assays such as enzyme-linked immunosorbent assays, lateral flow immunoassays, aptamer-based lateral flow assays, and cytometric bead arrays. The present work provides a good insight regarding the advanced research that has been done and closes with an indication of future demand for the emerging technologies. PMID:29393905
NASA Astrophysics Data System (ADS)
Kamiński, M.; Supeł, Ł.
2016-02-01
It is widely known that lateral-torsional buckling of a member under bending and warping restraints of its cross-sections in the steel structures are crucial for estimation of their safety and durability. Although engineering codes for steel and aluminum structures support the designer with the additional analytical expressions depending even on the boundary conditions and internal forces diagrams, one may apply alternatively the traditional Finite Element or Finite Difference Methods (FEM, FDM) to determine the so-called critical moment representing this phenomenon. The principal purpose of this work is to compare three different ways of determination of critical moment, also in the context of structural sensitivity analysis with respect to the structural element length. Sensitivity gradients are determined by the use of both analytical and the central finite difference scheme here and contrasted also for analytical, FEM as well as FDM approaches. Computational study is provided for the entire family of the steel I- and H - beams available for the practitioners in this area, and is a basis for further stochastic reliability analysis as well as durability prediction including possible corrosion progress.
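As a sketch of the central-difference sensitivity computation described above (the closed-form critical-moment expression and the section data below are illustrative stand-ins, not the specific code- or FEM-based formulas compared in the paper):

```python
import math

def critical_moment(L, E, G, Iz, It, Iw):
    """Classical elastic critical moment of a simply supported I-beam under
    uniform moment (fork supports); used here only as a stand-in analytical model."""
    return (math.pi / L) * math.sqrt(
        E * Iz * G * It * (1.0 + (math.pi**2 * E * Iw) / (L**2 * G * It)))

def sensitivity_wrt_length(L, h, **section):
    """Central finite-difference estimate of d(Mcr)/dL."""
    return (critical_moment(L + h, **section)
            - critical_moment(L - h, **section)) / (2.0 * h)

# Illustrative (hypothetical) section data, roughly an IPE-type profile, SI units.
section = dict(E=210e9, G=81e9, Iz=2.0e-6, It=1.2e-7, Iw=1.3e-7)
L = 6.0          # member length [m]
h = 1e-3 * L     # finite-difference step
print(critical_moment(L, **section), sensitivity_wrt_length(L, h, **section))
```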
NASA Technical Reports Server (NTRS)
Brown, Gerald V.; Kascak, Albert F.; Jansen, Ralph H.; Dever, Timothy P.; Duffy, Kirsten P.
2006-01-01
For magnetic-bearing-supported high-speed rotating machines with significant gyroscopic effects, it is necessary to stabilize forward and backward tilt whirling modes. Instability or low damping of these modes can prevent the attainment of desired shaft speed. We show analytically that both modes can be stabilized by using cross-axis proportional gains and high- and low-pass filters in the magnetic bearing controller. Furthermore, at high shaft speeds, where system phase lags degrade the stability of the forward-whirl mode, a phasor advance of the control signal can partially counteract the phase lag. In some range of high shaft speed, the derivative gain for the tilt modes (essential for stability for slowly rotating shafts) can be removed entirely. We show analytically how the tilt eigenvalues depend on shaft speed and on various controller feedback parameters.
Two-dimensional numerical simulation of a Stirling engine heat exchanger
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir; Tew, Roy C.; Dudenhoefer, James E.
1989-01-01
The first phase of an effort to develop multidimensional models of Stirling engine components is described. The ultimate goal is to model an entire engine working space. Parallel plate and tubular heat exchanger models are described, with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects). The model assumes laminar, incompressible flow with constant thermophysical properties. In addition, a constant axial temperature gradient is imposed. The governing equations describing the model have been solved using the Crank-Nicolson finite-difference scheme. Model predictions are compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement is obtained for flow both in circular tubes and between parallel plates. The computational heat transfer results are in good agreement with the analytical heat transfer results for parallel plates.
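For readers unfamiliar with the scheme, a minimal Crank-Nicolson sketch for the analogous one-dimensional problem (oscillating pressure-gradient-driven flow between parallel plates with no-slip walls) is given below; the grid, time step, and forcing are illustrative, and this is not the NASA code itself.

```python
import numpy as np

# Oscillating laminar flow between parallel plates: du/dt = nu*d2u/dy2 + A*cos(w*t),
# u = 0 at both walls. Crank-Nicolson in time, second-order central in space.
nu, A, w = 1.0e-3, 1.0, 2.0 * np.pi      # viscosity, forcing amplitude, angular frequency
H, ny = 1.0, 51                           # gap width, grid points
dy = H / (ny - 1)
dt = 1.0e-3
r = nu * dt / (2.0 * dy**2)

# Constant Crank-Nicolson matrix for the interior nodes (Dirichlet walls are zero).
n = ny - 2
Aimp = (np.diag((1.0 + 2.0 * r) * np.ones(n))
        + np.diag(-r * np.ones(n - 1), 1)
        + np.diag(-r * np.ones(n - 1), -1))

u = np.zeros(ny)
t = 0.0
for _ in range(5000):
    f_old = A * np.cos(w * t)
    f_new = A * np.cos(w * (t + dt))
    rhs = ((1.0 - 2.0 * r) * u[1:-1] + r * (u[2:] + u[:-2])
           + 0.5 * dt * (f_old + f_new))
    u[1:-1] = np.linalg.solve(Aimp, rhs)
    u[0] = u[-1] = 0.0
    t += dt

print("centerline velocity after", round(t, 3), "s:", u[ny // 2])
```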
Micro-separation toward systems biology.
Liu, Bi-Feng; Xu, Bo; Zhang, Guisen; Du, Wei; Luo, Qingming
2006-02-17
Current biology is experiencing a transformation in logic or philosophy that forces us to reevaluate the concept of a cell, tissue, or entire organism as a collection of individual components. Systems biology, which aims at understanding biological systems at the systems level, is an emerging research area that involves interdisciplinary collaborations among the life sciences, computational and mathematical sciences, systems engineering, and analytical technology. For analytical chemistry, developing innovative methods to meet the requirements of systems biology represents new challenges as well as opportunities and responsibilities. In this review, systems biology-oriented micro-separation technologies are introduced for comprehensive profiling of the genome, proteome and metabolome, characterization of biomolecular interactions, and single-cell analysis, such as capillary electrophoresis, ultra-thin layer gel electrophoresis, micro-column liquid chromatography, and their multidimensional combinations, parallel integrations, microfabricated formats, and nanotechnology involvement. Future challenges and directions are also suggested.
Green approach using monolithic column for simultaneous determination of coformulated drugs.
Yehia, Ali M; Mohamed, Heba M
2016-06-01
Green chemistry and sustainability are now embraced across the majority of pharmaceutical companies and research labs. Researchers' attention is drawn toward implementing the green analytical chemistry principles for more eco-friendly analytical methodologies. Solvents play a dominant role in determining the greenness of the analytical procedure. Using safer solvents, the greenness profile of the methodology can be increased remarkably. In this context, a green chromatographic method has been developed and validated for the simultaneous determination of phenylephrine, paracetamol, and guaifenesin in their ternary pharmaceutical mixture. The chromatographic separation was carried out using a monolithic column and green solvents as the mobile phase. The use of a monolithic column allows efficient separation protocols at higher flow rates, which results in a short analysis time. A two-factor, three-level experimental design was used to optimize the chromatographic conditions. The greenness profile of the proposed methodology was assessed using the eco-scale as a green metric and was found to be an excellent green method with regard to the usage and production of hazardous chemicals and solvents, energy consumption, and amount of produced waste. The proposed method improved the environmental impact without compromising the analytical performance criteria and could be used as a safer alternative for the routine analysis of the studied drugs. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Experimental Results From the Thermal Energy Storage-1 (TES-1) Flight Experiment
NASA Technical Reports Server (NTRS)
Jacqmin, David
1995-01-01
The Thermal Energy Storage (TES) experiments are designed to provide data to help researchers understand the long-duration microgravity behavior of thermal energy storage fluoride salts that undergo repeated melting and freezing. Such data, which have never been obtained before, have direct application to space-based solar dynamic power systems. These power systems will store solar energy in a thermal energy storage salt, such as lithium fluoride (LiF) or a eutectic of lithium fluoride/calcium difluoride (LiF-CaF2) (which melts at a lower temperature). The energy will be stored as the latent heat of fusion when the salt is melted by absorbing solar thermal energy. The stored energy will then be extracted during the shade portion of the orbit, enabling the solar dynamic power system to provide constant electrical power over the entire orbit. Analytical computer codes have been developed to predict the performance of a space-based solar dynamic power system. However, the analytical predictions must be verified experimentally before the analytical results can be used for future space power design applications. Four TES flight experiments will be used to obtain the needed experimental data. This article focuses on the flight results from the first experiment, TES-1, in comparison to the predicted results from the Thermal Energy Storage Simulation (TESSIM) analytical computer code.
Overcoming Intuition: Metacognitive Difficulty Activates Analytic Reasoning
ERIC Educational Resources Information Center
Alter, Adam L.; Oppenheimer, Daniel M.; Epley, Nicholas; Eyre, Rebecca N.
2007-01-01
Humans appear to reason using two processing styles: System 1 processes that are quick, intuitive, and effortless and System 2 processes that are slow, analytical, and deliberate that occasionally correct the output of System 1. Four experiments suggest that System 2 processes are activated by metacognitive experiences of difficulty or disfluency…
Monte Carlo simulation of efficient data acquisition for an entire-body PET scanner
NASA Astrophysics Data System (ADS)
Isnaini, Ismet; Obi, Takashi; Yoshida, Eiji; Yamaya, Taiga
2014-07-01
Conventional PET scanners can image the whole body using many bed positions. On the other hand, an entire-body PET scanner with an extended axial FOV, which can trace whole-body uptake images at the same time and improve sensitivity dynamically, has been desired. The entire-body PET scanner would have to process a large amount of data effectively. As a result, the entire-body PET scanner has high dead time in the multiplexed detector grouping process. Also, the entire-body PET scanner has many oblique lines of response. In this work, we study efficient data acquisition for the entire-body PET scanner using Monte Carlo simulation. The simulated entire-body PET scanner, based on depth-of-interaction detectors, has a 2016-mm axial field-of-view (FOV) and an 80-cm ring diameter. Since the entire-body PET scanner has higher single data loss than a conventional PET scanner at the grouping circuits, its NECR decreases. However, single data loss is mitigated by separating the axially arranged detectors into multiple parts. Our choice of 3 groups of axially arranged detectors was shown to increase the peak NECR by 41%. An appropriate choice of maximum ring difference (MRD) also maintains high sensitivity and a high peak NECR while at the same time reducing the data size. The extremely oblique lines of response of the large axial FOV do not contribute much to the performance of the scanner. The total sensitivity with full MRD increased by only 15% compared with that at about half MRD, and the peak NECR was saturated at about half MRD. The entire-body PET scanner promises to provide a large axial FOV and to have sufficient performance without using the full data.
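For reference, the noise-equivalent count rate quoted above is commonly defined (in NEMA-style evaluations) from the true (T), scattered (S), and random (R) coincidence rates as

$$ \mathrm{NECR}=\frac{T^{2}}{T+S+kR}, $$

with k = 1 or 2 depending on how randoms are estimated; this is the generic definition rather than a value specific to the simulated scanner.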
A Self-Critique of Self-Organized Criticality in Astrophysics
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.
2015-08-01
The concept of ``self-organized criticality'' (SOC) was originally proposed as an explanation of 1/f-noise by Bak, Tang, and Wiesenfeld (1987), but turned out to have a far broader significance for scale-free nonlinear energy dissipation processes occurring in the entire universe. Over the last 30 years, an inspiring cross-fertilization from complexity theory to solar and astrophysics took place, where the SOC concept was initially applied to solar flares, stellar flares, and magnetospheric substorms, and later extended to the radiation belt, the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and boson clouds. The application of SOC concepts has been performed by numerical cellular automaton simulations, by analytical calculations of statistical (powerlaw-like) distributions based on physical scaling laws, and by observational tests of theoretically predicted size distributions and waiting time distributions. Attempts have been undertaken to import physical models into numerical SOC toy models. The novel applications stimulated also vigorous debates about the discrimination between SOC-related and non-SOC processes, such as phase transitions, turbulence, random-walk diffusion, percolation, branching processes, network theory, chaos theory, fractality, multi-scale, and other complexity phenomena. We review SOC models applied to astrophysical observations, attempt to describe what physics can be captured by SOC models, and offer a critique of weaknesses and strengths in existing SOC models.
BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark.
Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung
2016-05-01
Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today's data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG's simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact.
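BIGDEBUG's own primitives are not shown in the abstract; as a hedged illustration of the naive record-level inspection it is designed to replace, the plain Spark snippet below re-filters the whole input to isolate crash-inducing records (the file path and parsing logic are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("manual-record-inspection").getOrCreate()
sc = spark.sparkContext

# Hypothetical input path and parser; a malformed record raises an exception.
lines = sc.textFile("hdfs:///data/trades.csv")

def parse(line):
    fields = line.split(",")
    return (fields[0], float(fields[2]))   # (symbol, price)

def is_suspect(line):
    try:
        parse(line)
        return False
    except (IndexError, ValueError):
        return True

# Naive approach: finding a crash-inducing record means re-scanning the whole
# input and pulling suspects back to the driver -- the cost BigDebug avoids.
suspects = lines.filter(is_suspect).take(20)   # expensive full pass over the data
for rec in suspects:
    print("crash-inducing record:", rec)
```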
Visualization and recommendation of large image collections toward effective sensemaking
NASA Astrophysics Data System (ADS)
Gu, Yi; Wang, Chaoli; Nemiroff, Robert; Kao, David; Parra, Denis
2016-03-01
In our daily lives, images are among the most commonly found data which we need to handle. We present iGraph, a graph-based approach for visual analytics of large image collections and their associated text information. Given such a collection, we compute the similarity between images, the distance between texts, and the connection between image and text to construct iGraph, a compound graph representation which encodes the underlying relationships among these images and texts. To enable effective visual navigation and comprehension of iGraph with tens of thousands of nodes and hundreds of millions of edges, we present a progressive solution that offers collection overview, node comparison, and visual recommendation. Our solution not only allows users to explore the entire collection with representative images and keywords but also supports detailed comparison for understanding and intuitive guidance for navigation. The visual exploration of iGraph is further enhanced with the implementation of bubble sets to highlight group memberships of nodes, suggestion of abnormal keywords or time periods based on text outlier detection, and comparison of four different recommendation solutions. For performance speedup, multiple graphics processing units and central processing units are utilized for processing and visualization in parallel. We experiment with two image collections and leverage a cluster driving a display wall of nearly 50 million pixels. We show the effectiveness of our approach by demonstrating experimental results and conducting a user study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, D. V., E-mail: Dmitri.Alexandrov@usu.ru; Ivanov, A. A.
2009-05-15
The process of solidification of ternary systems in the presence of moving phase transition regions has been investigated theoretically in terms of the nonlinear equation of the liquidus surface. A mathematical model is developed and an approximate analytical solution to the Stefan problem is constructed for a linear temperature profile in two-phase zones. The temperature and impurity concentration distributions are determined, the solid-phase fractions in the phase transition regions are obtained, and the laws of motion of their boundaries are established. It is demonstrated that all boundaries move in accordance with the laws of direct proportionality to the square root of time, which is a general property of self-similar processes. It is substantiated that the concentration of an impurity of the substance undergoing a phase transition only in the cotectic zone increases in this zone and decreases in the main two-phase zone in which the other component of the substance undergoes a phase transition. In the process, the concentration reaches a maximum at the interface between the main two-phase zone and the cotectic two-phase zone. The revealed laws of motion of the outer boundaries of the entire phase transition region do not depend on the amount of the components under consideration and hold true for crystallization of a multicomponent system.
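The square-root-of-time law mentioned above is the usual self-similar (parabolic) growth law; in generic notation (ours, not the paper's), each phase-transition boundary moves as

$$ X_i(t)=\lambda_i\sqrt{D\,t}, $$

where D is a characteristic diffusivity and λ_i is a dimensionless constant fixed by the Stefan-type conditions at that boundary.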
NASA Astrophysics Data System (ADS)
Melas, Evangelos
2011-07-01
The 3+1 (canonical) decomposition of all geometries admitting two-dimensional space-like surfaces is exhibited as a generalization of a previous work. A proposal, consisting of a specific re-normalization Assumption and an accompanying Requirement, which has been put forward in the 2+1 case is now generalized to 3+1 dimensions. This enables the canonical quantization of these geometries through a generalization of Kuchař's quantization scheme in the case of infinite degrees of freedom. The resulting Wheeler-DeWitt equation is based on a re-normalized manifold parameterized by three smooth scalar functionals. The entire space of solutions to this equation is analytically given, a fact that is entirely new to the present case. This is made possible by exploiting the freedom left by the imposition of the Requirement and contained in the third functional.
NASA Technical Reports Server (NTRS)
Craig, Larry; Jacobson, Dave; Mosier, Gary; Nein, Max; Page, Timothy; Redding, Dave; Sutherlin, Steve; Wilkerson, Gary
2000-01-01
Advanced space telescopes, which will eventually replace the Hubble Space Telescope (HST), will have apertures of 8-20 m. Primary mirrors of these dimensions will have to be foldable to fit into the space launcher. By necessity these mirrors will be extremely lightweight and flexible, and the historical approaches to mirror designs, where the mirror is made as rigid as possible to maintain figure and to serve as the anchor for the entire telescope, cannot be applied any longer. New design concepts and verifications will depend entirely on analytical methods to predict optical performance. Finite element modeling of the structural and thermal behavior of such mirrors is becoming the tool for advanced space mirror designs. This paper discusses some of the preliminary tasks and study results, which are currently the basis for the design studies of the Next Generation Space Telescope.
Autonomous microfluidic system for phosphate detection.
McGraw, Christina M; Stitzel, Shannon E; Cleary, John; Slater, Conor; Diamond, Dermot
2007-02-28
Miniaturization of analytical devices through the advent of microfluidics and micro total analysis systems is an important step forward for applications such as medical diagnostics and environmental monitoring. The development of field-deployable instruments requires that the entire system, including all necessary peripheral components, be miniaturized and packaged in a portable device. A sensor for long-term monitoring of phosphate levels has been developed that incorporates sampling, reagent and waste storage, detection, and wireless communication into a complete, miniaturized system. The device employs a low-power detection and communication system, so the entire instrument can operate autonomously for 7 days on a single rechargeable, 12 V battery. In addition, integration of a wireless communication device allows the instrument to be controlled and results to be downloaded remotely. This autonomous system has a limit of detection of 0.3 mg/L and a linear dynamic range between 0 and 20 mg/L.
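As a small illustration of how such a detection limit is typically estimated from a calibration line (the standards and absorbance values below are invented; the 0.3 mg/L figure in the abstract was determined experimentally):

```python
import numpy as np

# Hypothetical calibration data for a colorimetric phosphate channel:
# standard concentrations [mg/L] and blank-corrected absorbances.
conc = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0])
resp = np.array([0.001, 0.042, 0.103, 0.205, 0.310, 0.408])

slope, intercept = np.polyfit(conc, resp, 1)
residual_sd = np.std(resp - (slope * conc + intercept), ddof=2)

# Common calibration-based detection limit estimate (IUPAC-style 3.3*sigma/slope).
lod = 3.3 * residual_sd / slope
print(f"slope = {slope:.4f} AU per mg/L, LOD ~ {lod:.2f} mg/L")
```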
Three-dimensional Kasteleyn transition: spin ice in a [100] field.
Jaubert, L D C; Chalker, J T; Holdsworth, P C W; Moessner, R
2008-02-15
We examine the statistical mechanics of spin-ice materials with a [100] magnetic field. We show that the approach to saturated magnetization is, in the low-temperature limit, an example of a 3D Kasteleyn transition, which is topological in the sense that magnetization is changed only by excitations that span the entire system. We study the transition analytically and using a Monte Carlo cluster algorithm, and compare our results with recent data from experiments on Dy2Ti2O7.
Parametric study of statistical bias in laser Doppler velocimetry
NASA Technical Reports Server (NTRS)
Gould, Richard D.; Stevenson, Warren H.; Thompson, H. Doyle
1989-01-01
Analytical studies have often assumed that LDV velocity bias depends on turbulence intensity in conjunction with one or more characteristic time scales, such as the time between validated signals, the time between data samples, and the integral turbulence time-scale. These parameters are presently varied independently, in an effort to quantify the biasing effect. Neither of the post facto correction methods employed is entirely accurate. The mean velocity bias error is found to be nearly independent of data validation rate.
Asymptotic Far Field Conditions for Unsteady Subsonic and Transonic Flows.
1983-04-01
3, 4, and 5). We shall use the form given by Randall. The conventional treatment of far field conditions for subsonic flows makes use of analytical...PERTURBATIONS IN A PLANE FLOW FIELD WITH A FREE STREAM MACH NUMBER ONE Figure 2 shows the wave patterns obtained in the linearized treatment of subsonic flows... treatment of the three-dimensional problem is entirely analogous to that of the plane problem. At great distances the flow field generated by a body of finite
Energy Distribution of Electrons in Radiation Induced-Helium Plasmas. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Lo, R. H.
1972-01-01
Energy distribution of high energy electrons as they slow down and thermalize in a gaseous medium is studied. The energy distribution in the entire energy range from source energies down is studied analytically. A helium medium in which primary electrons are created by the passage of heavy-charged particles from nuclear reactions is emphasized. A radiation-induced plasma is of interest in a variety of applications, such as radiation pumped lasers and gaseous core nuclear reactors.
Performance monitoring can boost turboexpander efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntire, R.
1982-07-05
Focuses on the turboexpander/refrigeration system's radial expander and radial compressor. Explains that radial expander efficiency depends on mass flow rate, inlet pressure, inlet temperature, discharge pressure, gas composition, and shaft speed. Discusses quantifying the performance of the separate components over a range of operating conditions; estimating the increase in performance associated with any hardware change; and developing an analytical (computer) model of the entire system by using the performance curve of individual components. Emphasizes antisurge control and modifying Q/N (flow rate/shaft speed).
A simulation of streaming flows associated with acoustic levitators
NASA Astrophysics Data System (ADS)
Rednikov, A.; Riley, N.
2002-04-01
Steady-state acoustic streaming flow patterns have been observed by Trinh and Robey [Phys. Fluids 6, 3567 (1994)], during the operation of a variety of single axis ultrasonic levitators in a gaseous environment. Microstreaming around levitated samples is superimposed on the streaming flow which is observed in the levitator even in the absence of any particle therein. In this paper, by physical arguments, numerical and analytical simulations we provide entirely satisfactory interpretations of the observed flow patterns in both isothermal and nonisothermal situations.
Big Data Analytics for a Smart Green Infrastructure Strategy
NASA Astrophysics Data System (ADS)
Barrile, Vincenzo; Bonfa, Stefano; Bilotta, Giuliana
2017-08-01
As is well known, Big Data is a term for data sets so large or complex that traditional data processing applications are not sufficient to process them. The term "Big Data" often refers to the use of predictive analytics, user behavior analytics, or other advanced data analytics methods that extract value from data, and only rarely to a particular size of data set. This is especially true for the huge amount of Earth Observation data transmitted daily by the satellites constantly orbiting the Earth.
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.
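The exact scaling of the root-mean-square difference metric is not given in the abstract; the snippet below shows one plausible reading (RMS difference normalized by the mean of the Monte Carlo reference) on synthetic arrays:

```python
import numpy as np

def scaled_rmsd(estimate, reference):
    """RMS difference scaled by the mean of the reference image.
    One plausible reading of the paper's metric, not necessarily its exact definition."""
    diff = estimate - reference
    return np.sqrt(np.mean(diff**2)) / np.mean(reference)

rng = np.random.default_rng(0)
mc_scatter = rng.poisson(lam=100.0, size=(256, 256)).astype(float)   # stand-in Monte Carlo estimate
analytic_scatter = mc_scatter + rng.normal(scale=5.0, size=mc_scatter.shape)
print(f"scaled RMSD: {scaled_rmsd(analytic_scatter, mc_scatter):.3%}")
```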
Kauppinen, Ari; Toiviainen, Maunu; Korhonen, Ossi; Aaltonen, Jaakko; Järvinen, Kristiina; Paaso, Janne; Juuti, Mikko; Ketolainen, Jarkko
2013-02-19
During the past decade, near-infrared (NIR) spectroscopy has been applied for in-line moisture content quantification during a freeze-drying process. However, NIR has been used as a single-vial technique and thus is not representative of the entire batch. This has been considered as one of the main barriers for NIR spectroscopy becoming widely used in process analytical technology (PAT) for freeze-drying. Clearly it would be essential to monitor samples that reliably represent the whole batch. The present study evaluated multipoint NIR spectroscopy for in-line moisture content quantification during a freeze-drying process. Aqueous sucrose solutions were used as model formulations. NIR data was calibrated to predict the moisture content using partial least-squares (PLS) regression with Karl Fischer titration being used as a reference method. PLS calibrations resulted in root-mean-square error of prediction (RMSEP) values lower than 0.13%. Three noncontact, diffuse reflectance NIR probe heads were positioned on the freeze-dryer shelf to measure the moisture content in a noninvasive manner, through the side of the glass vials. The results showed that the detection of unequal sublimation rates within a freeze-dryer shelf was possible with the multipoint NIR system in use. Furthermore, in-line moisture content quantification was reliable especially toward the end of the process. These findings indicate that the use of multipoint NIR spectroscopy can achieve representative quantification of moisture content and hence a drying end point determination to a desired residual moisture level.
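A minimal sketch of the calibration workflow described above, using synthetic spectra and scikit-learn's PLS regression in place of real NIR data and Karl Fischer reference values:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NIR spectra (n samples x p wavelengths) and reference
# moisture contents [%]; a real calibration would use measured spectra and
# Karl Fischer titration values.
rng = np.random.default_rng(1)
n, p = 120, 200
moisture = rng.uniform(0.1, 5.0, n)
wavelength_axis = np.linspace(0, 1, p)
band = np.exp(-(wavelength_axis - 0.6)**2 / 0.01)            # one water-like band
spectra = np.outer(moisture, band) + rng.normal(scale=0.02, size=(n, p))

X_cal, X_test, y_cal, y_test = train_test_split(spectra, moisture,
                                                test_size=0.3, random_state=0)
pls = PLSRegression(n_components=3)
pls.fit(X_cal, y_cal)

y_pred = pls.predict(X_test).ravel()
rmsep = np.sqrt(np.mean((y_pred - y_test)**2))                # root-mean-square error of prediction
print(f"RMSEP = {rmsep:.3f} % moisture")
```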
Incorporating Learning Analytics in the Classroom
ERIC Educational Resources Information Center
Thille, Candace; Zimmaro, Dawn
2017-01-01
This chapter describes an open learning analytics system focused on learning process measures and designed to engage instructors and students in an evidence-informed decision-making process to improve learning.
Ross, Robert M; Pennycook, Gordon; McKay, Ryan; Gervais, Will M; Langdon, Robyn; Coltheart, Max
2016-07-01
It has been proposed that deluded and delusion-prone individuals gather less evidence before forming beliefs than those who are not deluded or delusion-prone. The primary source of evidence for this "jumping to conclusions" (JTC) bias is provided by research that utilises the "beads task" data-gathering paradigm. However, the cognitive mechanisms subserving data gathering in this task are poorly understood. In the largest published beads task study to date (n = 558), we examined data gathering in the context of influential dual-process theories of reasoning. Analytic cognitive style (the willingness or disposition to critically evaluate outputs from intuitive processing and engage in effortful analytic processing) predicted data gathering in a non-clinical sample, but delusional ideation did not. The relationship between data gathering and analytic cognitive style suggests that dual-process theories of reasoning can contribute to our understanding of the beads task. It is not clear why delusional ideation was not found to be associated with data gathering or analytic cognitive style.
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
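One way to operationalize this definition in software (the coverage factor and the numbers below are illustrative assumptions, not values proposed by the authors):

```python
def is_irregular_error(x_routine, x_reference, u_process, bias, k=3.0):
    """Flag an individual result as an irregular analytical error when its deviation
    from the reference measurement procedure exceeds what routine measurement
    uncertainty plus method bias can explain (coverage factor k is illustrative)."""
    allowed = k * u_process + abs(bias)
    return abs(x_routine - x_reference) > allowed

# Illustrative numbers: routine result 12.4, reference 9.0, process SD 0.4, bias 0.5.
print(is_irregular_error(12.4, 9.0, u_process=0.4, bias=0.5))   # True -> investigate the sample
```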
100-B/C Target Analyte List Development for Soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
R.W. Ovink
2010-03-18
This report documents the process used to identify source area target analytes in support of the 100-B/C remedial investigation/feasibility study addendum to DOE/RL-2008-46. This report also establishes the analyte exclusion criteria applicable for 100-B/C use and the analytical methods needed to analyze the target analytes.
Understanding Business Analytics
2015-01-05
analytics have been used in organizations for a variety of reasons for quite some time; ranging from the simple (generating and understanding business analytics...process. How well these two components are orchestrated will determine the level of success an organization has in
Perspectives on bioanalytical mass spectrometry and automation in drug discovery.
Janiszewski, John S; Liston, Theodore E; Cole, Mark J
2008-11-01
The use of high speed synthesis technologies has resulted in a steady increase in the number of new chemical entities active in the drug discovery research stream. Large organizations can have thousands of chemical entities in various stages of testing and evaluation across numerous projects on a weekly basis. Qualitative and quantitative measurements made using LC/MS are integrated throughout this process from early stage lead generation through candidate nomination. Nearly all analytical processes and procedures in modern research organizations are automated to some degree. This includes both hardware and software automation. In this review we discuss bioanalytical mass spectrometry and automation as components of the analytical chemistry infrastructure in pharma. Analytical chemists are presented as members of distinct groups with similar skillsets that build automated systems, manage test compounds, assays and reagents, and deliver data to project teams. The ADME-screening process in drug discovery is used as a model to highlight the relationships between analytical tasks in drug discovery. Emerging software and process automation tools are described that can potentially address gaps and link analytical chemistry related tasks. The role of analytical chemists and groups in modern 'industrialized' drug discovery is also discussed.
Wirges, M; Funke, A; Serno, P; Knop, K; Kleinebudde, P
2013-05-05
Incorporation of an active pharmaceutical ingredient (API) into the coating layer of film-coated tablets is a method mainly used to formulate fixed-dose combinations. Uniform and precise spray-coating of an API represents a substantial challenge, which can be overcome by applying Raman spectroscopy as a process analytical tool. In the pharmaceutical industry, Raman spectroscopy is still mainly used as a bench-top laboratory method and is usually not implemented in the production process. Many scientific approaches stop at the level of feasibility studies and never make the step to production-scale process applications. The present work focuses on the scale-up of an active coating process, a step of the highest importance in pharmaceutical development. Active coating experiments were performed at lab and production scale. Using partial least squares (PLS), a multivariate model was constructed by correlating in-line measured Raman spectral data with the coated amount of API. By transferring this model, implemented for a lab-scale process, to a production-scale process, the robustness of this analytical method and thus its applicability as a Process Analytical Technology (PAT) tool for correct endpoint determination in pharmaceutical manufacturing could be shown. Finally, this method was validated according to the European Medicines Agency (EMA) guideline with respect to the special requirements of the applied in-line model development strategy. Copyright © 2013 Elsevier B.V. All rights reserved.
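A minimal sketch of the kind of PLS calibration described, using synthetic stand-ins for the in-line Raman spectra and the coated API amount; it illustrates only the model-building and model-transfer steps and is not the validated production model.

```python
# Hedged sketch: fit a PLS model on "lab-scale" spectra, then apply it to new
# "production-scale" spectra. All data here are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(60, 500))                                   # in-line Raman spectra, lab scale (synthetic)
y_lab = X_lab[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=60)    # coated API amount (arbitrary units)

pls = PLSRegression(n_components=3).fit(X_lab, y_lab)

# "Transfer" step: apply the lab-scale model to new spectra (here also synthetic)
X_prod = rng.normal(size=(20, 500))
amount_pred = pls.predict(X_prod).ravel()
print(amount_pred[:3])
```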
In conflict with ourselves? An investigation of heuristic and analytic processes in decision making.
Bonner, Carissa; Newell, Ben R
2010-03-01
Many theorists propose two types of processing: heuristic and analytic. In conflict tasks, in which these processing types lead to opposing responses, giving the analytic response may require both detection and resolution of the conflict. The ratio bias task, in which people tend to treat larger numbered ratios (e.g., 20/100) as indicating a higher likelihood of winning than do equivalent smaller numbered ratios (e.g., 2/10), is considered to induce such a conflict. Experiment 1 showed response time differences associated with conflict detection, resolution, and the amount of conflict induced. The conflict detection and resolution effects were replicated in Experiment 2 and were not affected by decreasing the influence of the heuristic response or decreasing the capacity to make the analytic response. The results are consistent with dual-process accounts, but a single-process account in which quantitative, rather than qualitative, differences in processing are assumed fares equally well in explaining the data.
Gutknecht, Mandy; Danner, Marion; Schaarschmidt, Marthe-Lisa; Gross, Christian; Augustin, Matthias
2018-02-15
To define treatment benefit, the Patient Benefit Index contains a weighting of patient-relevant treatment goals using the Patient Needs Questionnaire, which includes a 5-point Likert scale ranging from 0 ("not important at all") to 4 ("very important"). These treatment goals have been assigned to five health dimensions. The importance of each dimension can be derived by averaging the importance ratings on the Likert scales of associated treatment goals. As the use of a Likert scale does not allow for a relative assessment of importance, the objective of this study was to estimate relative importance weights for health dimensions and associated treatment goals in patients with psoriasis by using the analytic hierarchy process and to compare these weights with the weights resulting from the Patient Needs Questionnaire. Furthermore, patients' judgments on the difficulty of the methods were investigated. Dimensions of the Patient Benefit Index and their treatment goals were mapped into a hierarchy of criteria and sub-criteria to develop the analytic hierarchy process questionnaire. Adult patients with psoriasis starting a new anti-psoriatic therapy in the outpatient clinic of the Institute for Health Services Research in Dermatology and Nursing at the University Medical Center Hamburg (Germany) were recruited and completed both methods (analytic hierarchy process, Patient Needs Questionnaire). Ratings of treatment goals on the Likert scales (Patient Needs Questionnaire) were summarized within each dimension to assess the importance of the respective health dimension/criterion. Following the analytic hierarchy process approach, consistency in judgments was assessed using a standardized measurement (consistency ratio). At the analytic hierarchy process level of criteria, 78 of 140 patients achieved the accepted consistency. Using the analytic hierarchy process, the dimension "improvement of physical functioning" was most important, followed by "improvement of social functioning". Concerning the Patient Needs Questionnaire results, these dimensions were ranked in second and fifth position, whereas "strengthening of confidence in the therapy and in a possible healing" was ranked most important, which was least important in the analytic hierarchy process ranking. In both methods, "improvement of psychological well-being" and "reduction of impairments due to therapy" were equally ranked in positions three and four. At the level of sub-criteria, in contrast, the analytic hierarchy process and the Patient Needs Questionnaire produced largely similar rankings of treatment goals. From the patients' point of view, the Likert scales (Patient Needs Questionnaire) were easier to complete than the analytic hierarchy process pairwise comparisons. Patients with psoriasis assign different importance to health dimensions and associated treatment goals. In choosing a method to assess the importance of health dimensions and/or treatment goals, it needs to be considered that the resulting importance weights may differ depending on the method used. However, in this study, observed discrepancies in importance weights of the health dimensions were most likely caused by the different methodological approaches focusing on treatment goals to assess the importance of health dimensions on the one hand (Patient Needs Questionnaire) or directly assessing health dimensions on the other hand (analytic hierarchy process).
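For readers unfamiliar with the analytic hierarchy process, the sketch below derives priority weights and Saaty's consistency ratio from a made-up 3x3 pairwise comparison matrix; it is not study data.

```python
# Minimal AHP sketch: principal-eigenvector weighting and Saaty's consistency ratio.
# The 3x3 pairwise comparison matrix is an invented illustration.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])        # pairwise comparisons of three criteria

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # priority weights

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)         # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's tabulated random index
cr = ci / ri                            # consistency ratio; <= 0.10 is conventionally accepted
print(w, cr)
```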
Enhanced spot preparation for liquid extractive sampling and analysis
Van Berkel, Gary J.; King, Richard C.
2015-09-22
A method for performing surface sampling of an analyte, includes the step of placing the analyte on a stage with a material in molar excess to the analyte, such that analyte-analyte interactions are prevented and the analyte can be solubilized for further analysis. The material can be a matrix material that is mixed with the analyte. The material can be provided on a sample support. The analyte can then be contacted with a solvent to extract the analyte for further processing, such as by electrospray mass spectrometry.
Tarai, Madhumita; Mishra, Ashok Kumar
2016-10-12
The phenomenon of concentration dependent red shift, often observed in synchronous fluorescence spectra (SFS) of monofluorophoric as well as multifluorophoric systems at high chromophore concentrations, is known to have good analytical advantages. This was previously understood in terms of a large inner filter effect (IFE) through the introduction of a derived absorption spectral profile that closely corresponds to the SFS profile. Using representative monofluorophoric and multifluorophoric systems, it is now explained how the SF spectral maximum changes with concentration of the fluorophore. For dilute solutions of monofluorophores the maximum is unchanged as expected. It is shown here that the onset of red shift of the SFS maximum of both mono- and multifluorophoric systems must occur at the derived absorption spectral parameter value of 0.32, which corresponds to the absorbance value of 0.87. This value is unique irrespective of the nature of the fluorophore under study. For monofluorophoric systems, the wavelength of the derived absorption spectral maximum and the wavelength of the synchronous fluorescence spectral maximum closely correspond with each other over the entire concentration range. In contrast, for multifluorophoric systems like diesel and aqueous humic acid, large deviations were noted that could be explained as being due to the presence of non-fluorescing chromophores in the system. This work bridges the entire fluorophore concentration range over which the red shift of the SFS maximum sets in, and in the process it establishes the importance of the derived absorption spectral parameter in understanding the phenomenon of concentration dependent red shift of the SFS maximum. Copyright © 2016 Elsevier B.V. All rights reserved.
Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A
2006-01-01
The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
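The LCMRL construction described above lends itself to a short numerical illustration. The sketch below, with invented spiked-concentration data and ordinary (unweighted) least squares, fits measured versus true concentration, builds 99% prediction intervals, and reports the lowest concentration at which the interval stays within the 50-150% recovery objectives; the EPA procedure adds variance weighting and other details not reproduced here.

```python
# Simplified LCMRL-style sketch (illustrative only, constant-variance case).
import numpy as np
from scipy import stats

true = np.repeat([0.5, 1, 2, 4, 8], 7).astype(float)     # spiked concentrations (arbitrary units)
rng = np.random.default_rng(1)
measured = true * rng.normal(1.0, 0.08, true.size) + rng.normal(0, 0.05, true.size)

slope, intercept = np.polyfit(true, measured, 1)
resid = measured - (slope * true + intercept)
n = true.size
s = np.sqrt((resid**2).sum() / (n - 2))
t99 = stats.t.ppf(0.995, n - 2)                           # two-sided 99%

x0 = np.linspace(true.min(), true.max(), 200)
se_pred = s * np.sqrt(1 + 1/n + (x0 - true.mean())**2 / ((true - true.mean())**2).sum())
lower = slope * x0 + intercept - t99 * se_pred
upper = slope * x0 + intercept + t99 * se_pred

# Lowest concentration where the whole prediction interval lies within 50-150% recovery
ok = (lower >= 0.5 * x0) & (upper <= 1.5 * x0)
lcmrl_estimate = x0[ok].min() if ok.any() else None
print(lcmrl_estimate)
```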
JunoCam Images of Jupiter: A Juno Citizen Science Experiment
NASA Astrophysics Data System (ADS)
Hansen, Candice; Ravine, Michael; Bolton, Scott; Caplinger, Mike; Eichstadt, Gerald; Jensen, Elsa; Momary, Thomas W.; Orton, Glenn S.; Rogers, John
2017-10-01
The Juno mission to Jupiter carries a visible imager on its payload primarily for outreach. The vision of JunoCam’s outreach plan was for the public to participate in, not just observe, a science investigation. Four webpage components were developed for uploading and downloading comments and images, following the steps a traditional imaging team would do: Planning, Discussion, Voting, and Processing, hosted at https://missionjuno.swri.edu/junocam. Lightly processed and raw JunoCam data are posted. JunoCam images through broadband red, green and blue filters and a narrowband methane filter centered at 889 nm mounted directly on the detector. JunoCam is a push-frame imager with a 58 deg wide field of view covering a 1600 pixel width, and builds the second dimension of the image as the spacecraft rotates. This design enables capture of the entire pole of Jupiter in a single image at low emission angle when Juno is ~1 hour from perijove (closest approach). At perijove the wide field of view images are high-resolution while still capturing entire storms, e.g. the Great Red Spot. The public is invited to download JunoCam images, process them, and then upload their products. Over 2000 images have been uploaded to the JunoCam public image gallery. Contributions range from scientific quality to artful whimsy. Artistic works are inspired by Van Gogh and Monet. Works of whimsy include how Jupiter might look through the viewport of the Millennium Falcon, or to an angel perched on a lookout, or through a kaleidoscope. Citizen scientists have also engaged in serious quantitative analysis of the images, mapping images to storms and disruptions of the belts and zones that have been tracked from the earth. They are developing a phase function for Jupiter that allows the images to be flattened from the subsolar point to the terminator, and studying high hazes. Citizen scientists are also developing time-lapse movies, measuring wind flow, tracking circulation patterns in the circumpolar cyclones, and looking for lightning flashes. This effort has engaged the public, with a range of personal interests and considerable artistic and analytic talents. In return, we count our diverse public as partners in this endeavor.
Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology
Brinkworth, Russell S. A.; O'Carroll, David C.
2009-01-01
The extraction of accurate self-motion information from the visual world is a difficult problem that has been solved very efficiently by biological organisms utilizing non-linear processing. Previous bio-inspired models for motion detection based on a correlation mechanism have been dogged by issues that arise from their sensitivity to undesired properties of the image, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear dynamic adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic range conditions we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore the performance of the entire system is more than the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. The successful application of this algorithm has applications in the development of miniature autonomous systems in defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision avoidance sensors. PMID:19893631
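The model builds on correlation-type (Hassenstein-Reichardt) motion detection. As a point of reference only, the following sketch implements the elementary correlator that such models elaborate; it omits the adaptive, non-linear photoreceptor and motion-pathway stages that are the subject of the paper.

```python
# Basic Hassenstein-Reichardt correlator sketch: correlate one input with a
# low-pass (delayed) copy of its neighbour in both mirror-symmetric arms and
# subtract. Purely illustrative; not the paper's adaptive model.
import numpy as np

def lowpass(x, tau):
    y = np.zeros_like(x, dtype=float)
    alpha = 1.0 / tau
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i - 1] - y[i - 1])
    return y

def reichardt(signal_a, signal_b, tau=5):
    return lowpass(signal_a, tau) * signal_b - lowpass(signal_b, tau) * signal_a

# Example: a moving sinusoidal luminance pattern sampled at two neighbouring points
t = np.arange(500)
a = np.sin(2 * np.pi * 0.02 * t)
b = np.sin(2 * np.pi * 0.02 * t - 0.5)      # phase lag between the two samples
print(reichardt(a, b).mean())                # sign of the mean indicates motion direction
```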
Clinical laboratory: bigger is not always better.
Plebani, Mario
2018-06-27
Laboratory services around the world are undergoing substantial consolidation and changes through mechanisms ranging from mergers, acquisitions and outsourcing, primarily based on expectations to improve efficiency, increasing volumes and reducing the cost per test. However, the relationship between volume and costs is not linear and numerous variables influence the end cost per test. In particular, the relationship between volumes and costs does not span the entire platter of clinical laboratories: high costs are associated with low volumes up to a threshold of 1 million test per year. Over this threshold, there is no linear association between volumes and costs, as laboratory organization rather than test volume more significantly affects the final costs. Currently, data on laboratory errors and associated diagnostic errors and risk for patient harm emphasize the need for a paradigmatic shift: from a focus on volumes and efficiency to a patient-centered vision restoring the nature of laboratory services as an integral part of the diagnostic and therapy process. Process and outcome quality indicators are effective tools to measure and improve laboratory services, by stimulating a competition based on intra- and extra-analytical performance specifications, intermediate outcomes and customer satisfaction. Rather than competing with economic value, clinical laboratories should adopt a strategy based on a set of harmonized quality indicators and performance specifications, active laboratory stewardship, and improved patient safety.
Laurens, Lieve M L; Van Wychen, Stefanie; McAllister, Jordan P; Arrowsmith, Sarah; Dempster, Thomas A; McGowen, John; Pienkos, Philip T
2014-05-01
Accurate compositional analysis in biofuel feedstocks is imperative; the yields of individual components can define the economics of an entire process. In the nascent industry of algal biofuels and bioproducts, analytical methods that have been deemed acceptable for decades are suddenly critical for commercialization. We tackled the question of how the strain and biochemical makeup of algal cells affect chemical measurements. We selected a set of six procedures (two each for lipids, protein, and carbohydrates): three rapid fingerprinting methods and three advanced chromatography-based methods. All methods were used to measure the composition of 100 samples from three strains: Scenedesmus sp., Chlorella sp., and Nannochloropsis sp. The data presented point not only to species-specific discrepancies but also to cell biochemistry-related discrepancies. There are cases where two respective methods agree but the differences are often significant with over- or underestimation of up to 90%, likely due to chemical interferences with the rapid spectrophotometric measurements. We provide background on the chemistry of interfering reactions for the fingerprinting methods and conclude that for accurate compositional analysis of algae and process and mass balance closure, emphasis should be placed on unambiguous characterization using methods where individual components are measured independently. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Sapan; Quach, Tu -Thach; Parekh, Ojas
2016-01-06
In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
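The two crossbar kernels mentioned above map directly onto familiar linear-algebra operations. The NumPy sketch below is only a functional stand-in: a "parallel read" corresponds to a vector-matrix multiply and a "parallel write" to a rank-1 update of the conductance matrix; array sizes and values are arbitrary.

```python
# Functional stand-in for the two crossbar kernels (not a device model).
import numpy as np

N = 8
G = np.random.rand(N, N)            # crossbar conductances
v = np.random.rand(N)               # input voltages

read_currents = v @ G               # parallel read: vector-matrix multiply done in one analog step
delta = 0.01 * np.outer(np.random.rand(N), np.random.rand(N))
G += delta                          # parallel write: rank-1 update applied across the whole array
```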
Weng, Naidong
2012-11-01
In the pharmaceutical industry, bioanalysis is very dynamic and is probably one of the few fields of research covering the entire drug discovery, development and post-marketing process. Important decisions on drug safety can partially rely on bioanalytical data, which therefore can be subject to regulatory scrutiny. Bioanalytical scientists have historically contributed significant numbers of scientific manuscripts in many peer-reviewed analytical journals. All of these journals provide some high-level instructions, but they also leave sufficient flexibility for reviewers to perform independent critique and offer recommendations for each submitted manuscript. Reviewers play a pivotal role in the process of bioanalytical publication to ensure the publication of high-quality manuscripts in a timely fashion. Their efforts usually lead to improved manuscripts. However, it has to be a joint effort among authors, reviewers and editors to promote scientifically sound and ethically fair bioanalytical publications. Most of the submitted manuscripts were well written with only minor or moderate revisions required for further improvement. Nevertheless, there were small numbers of submitted manuscripts that did not meet the requirements for publications because of scientific or ethical deficiencies, which are discussed in this Letter to the Editor. Copyright © 2012 John Wiley & Sons, Ltd.
New analytical techniques for mycotoxins in complex organic matrices. [Aflatoxins B1, B2, G1, and G2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bicking, M.K.L.
1982-07-01
Air samples are collected for analysis from the Ames Solid Waste Recovery System. The high level of airborne fungi within the processing area is of concern due to the possible presence of toxic mycotoxins, and carcinogenic fungal metabolites. An analytical method has been developed to determine the concentration of aflatoxins B1, B2, G1, and G2 in the air of the plant which produces Refuse Derived Fuel (RDF). After extraction with methanol, some components in the matrix are precipitated by dissolving the sample in 30% acetonitrile/chloroform. An aliquot of this solution is injected onto a Styragel column where the sample components undergo simultaneous size exclusion and reverse phase partitioning. Additional studies have provided a more thorough understanding of solvent related non-exclusion effects on size exclusion gels. The Styragel column appears to have a useable lifetime of more than six months. After elution from Styragel, the sample is diverted to a second column containing Florisil which has been modified with oxalic acid and deactivated with water. Aflatoxins are eluted with 5% water/acetone. After removal of this solvent, the sample is dissolved in 150 μL of a spotting solvent and the entire sample applied to a thin layer chromatography (TLC) plate using a unique sample applicator developed here. The aflatoxins on the TLC plate are analyzed by laser fluorescence. A detection limit of 10 pg is possible for aflatoxin standards using a nitrogen laser as the excitation source. Sample concentrations are determined by comparing with an internal standard, a specially synthesized aflatoxin derivative. In two separate RDF samples, aflatoxin B1 was found at levels of 6.5 and 17.0 ppb. The analytical method has also proven useful in the analysis of contaminated corn and peanut meal samples. 42 figures, 8 tables.
High-Throughput Incubation and Quantification of Agglutination Assays in a Microfluidic System.
Castro, David; Conchouso, David; Kodzius, Rimantas; Arevalo, Arpys; Foulds, Ian G
2018-06-04
In this paper, we present a two-phase microfluidic system capable of incubating and quantifying microbead-based agglutination assays. The microfluidic system is based on a simple fabrication solution, which requires only laboratory tubing filled with carrier oil, driven by negative pressure using a syringe pump. We provide a user-friendly interface, in which a pipette is used to insert single droplets of a 1.25-µL volume into a system that is continuously running and therefore works entirely on demand without the need for stopping, resetting or washing the system. These assays are incubated by highly efficient passive mixing with a sample-to-answer time of 2.5 min, a 5–10-fold improvement over traditional agglutination assays. We study system parameters such as channel length, incubation time and flow speed to select optimal assay conditions, using the streptavidin-biotin interaction as a model analyte quantified using optical image processing. We then investigate the effect of changing both analyte and microbead concentrations, with a minimum detection limit of 100 ng/mL. The system can be both low- and high-throughput, depending on the rate at which assays are inserted. In our experiments, we were able to easily produce throughputs of 360 assays per hour by simple manual pipetting, which could be increased even further by automation and parallelization. Agglutination assays are a versatile tool, capable of detecting an ever-growing catalog of infectious diseases, proteins and metabolites. A system such as this one is a step towards being able to produce high-throughput microfluidic diagnostic solutions with widespread adoption. The development of analytical techniques in the microfluidic format, such as the one presented in this work, is an important step in being able to continuously monitor the performance and microfluidic outputs of organ-on-chip devices.
Simple analytical model for low-frequency frequency-modulation noise of monolithic tunable lasers.
Huynh, Tam N; Ó Dúill, Seán P; Nguyen, Lim; Rusch, Leslie A; Barry, Liam P
2014-02-10
We employ simple analytical models to construct the entire frequency-modulation (FM)-noise spectrum of tunable semiconductor lasers. Many contributions to the laser FM noise can be clearly identified from the FM-noise spectrum, such as standard Wiener FM noise incorporating laser relaxation oscillation, excess FM noise due to thermal fluctuations, and carrier-induced refractive index fluctuations from stochastic carrier generation in the passive tuning sections. The contribution of the latter effect is identified by noting a correlation between part of the FM-noise spectrum and the FM-modulation response of the passive sections. We pay particular attention to the case of widely tunable lasers with three independent tuning sections, mainly the sampled-grating distributed Bragg reflector laser, and compare with that of a distributed feedback laser. The theoretical model is confirmed with experimental measurements, with the calculations of the important phase-error variance demonstrating excellent agreement.
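As a rough illustration of how such an FM-noise spectrum can be assembled from simple analytic terms, the sketch below sums a white (Wiener) FM floor set by an assumed intrinsic linewidth, shaped by a relaxation-oscillation resonance, plus a 1/f-like thermal term. All parameter values are invented and the functional forms are generic, not the fitted model of the paper.

```python
# Generic, illustrative composition of a laser FM-noise spectrum (Hz^2/Hz).
import numpy as np

f = np.logspace(3, 10, 1000)             # Fourier frequency (Hz)
linewidth = 1e6                          # assumed intrinsic linewidth (Hz)
f_ro, gamma = 5e9, 2e9                   # assumed relaxation-oscillation frequency and damping (Hz)

# White (Wiener) FM floor, Delta_nu / pi, shaped by the relaxation-oscillation resonance
S_intrinsic = (linewidth / np.pi) * f_ro**4 / ((f_ro**2 - f**2)**2 + (gamma * f)**2)
S_thermal = 1e13 / f                     # assumed 1/f-like excess noise from thermal fluctuations
S_fm = S_intrinsic + S_thermal           # total single-sided FM-noise spectrum
```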
Collective effects and dynamics of non-adiabatic flame balls
NASA Astrophysics Data System (ADS)
D'Angelo, Yves; Joulin, Guy
2001-03-01
The dynamics of a homogeneous, polydisperse collection of non-adiabatic flame balls (FBs) is investigated by analytical/numerical means. A strongly temperature-dependent Arrhenius reaction rate is assumed, along with a light enough reactant characterized by a markedly less than unity Lewis number (Le). Combining activation-energy asymptotics with a mean-field type of treatment, the analysis yields a nonlinear integro-differential evolution equation (EE) for the FB population. The EE accounts for heat losses inside each FB and unsteadiness around it, as well as for its interactions with the entire FB population, namely mutual heating and faster (Le<1) consumption of the reactant pool. The initial FB number density and size distribution enter the EE explicitly. The latter is studied analytically at early times, then for small total FB number densities; it is subsequently solved numerically, yielding the whole population evolution and its lifetime. Generalizations and open questions relating to `spotty' turbulent combustion are finally evoked.
Robustness and fragility in coupled oscillator networks under targeted attacks.
Yuan, Tianyu; Aihara, Kazuyuki; Tanaka, Gouhei
2017-01-01
The dynamical tolerance of coupled oscillator networks against local failures is studied. As the fraction of failed oscillator nodes gradually increases, the mean oscillation amplitude in the entire network decreases and then suddenly vanishes at a critical fraction as a phase transition. This critical fraction, widely used as a measure of the network robustness, was analytically derived for random failures but not for targeted attacks so far. Here we derive the general formula for the critical fraction, which can be applied to both random failures and targeted attacks. We consider the effects of targeting oscillator nodes based on their degrees. First we deal with coupled identical oscillators with homogeneous edge weights. Then our theory is applied to networks with heterogeneous edge weights and to those with nonidentical oscillators. The analytical results are validated by numerical experiments. Our results reveal the key factors governing the robustness and fragility of oscillator networks.
Doherty, Brenda; Csáki, Andrea; Thiele, Matthias; Zeisberger, Matthias; Schwuchow, Anka; Kobelke, Jens; Fritzsche, Wolfgang; Schmidt, Markus A
2017-02-01
Detecting small quantities of specific target molecules is of major importance within bioanalytics for efficient disease diagnostics. One promising sensing approach relies on combining plasmonically-active waveguides with microfluidics yielding an easy-to-use sensing platform. Here we introduce suspended-core fibres containing immobilised plasmonic nanoparticles surrounding the guiding core as a concept for an entirely integrated optofluidic platform for efficient refractive index sensing. Due to the extremely small optical core and the large adjacent microfluidic channels, over two orders of magnitude of nanoparticle coverage densities have been accessed with millimetre-long sample lengths showing refractive index sensitivities of 170 nm/RIU for aqueous analytes where the fibre interior is functionalised by gold nanospheres. Our concept represents a fully integrated optofluidic sensing system demanding small sample volumes and allowing for real-time analyte monitoring, both of which are highly relevant within invasive bioanalytics, particularly within molecular disease diagnostics and environmental science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warnecke, Sascha; Toennies, J. Peter, E-mail: jtoenni@gwdg.de; Tang, K. T.
The Tang-Toennies potential for the weakly interacting systems H₂ b³Σᵤ⁺, H–He ²Σ⁺, and He₂ ¹Σg⁺ is extended down to the united atom limit of vanishing internuclear distance. A simple analytic expression connects the united atom limiting potential with the Tang-Toennies potential in the well region. The new potential model is compared with the most recent ab initio calculations for all three systems. The agreement is better than 20% (H₂ and He₂) or comparable with the differences in the available ab initio calculations (H–He) over six orders of magnitude corresponding to the entire range of internuclear distances.
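For reference, the standard Tang-Toennies form used in the well region is reproduced below; this is the textbook expression, not the paper's new united-atom extension.

```latex
% Standard Tang--Toennies model: Born--Mayer repulsion plus damped dispersion terms.
\[
  V(R) = A\,e^{-bR} \;-\; \sum_{n\ge 3} f_{2n}(bR)\,\frac{C_{2n}}{R^{2n}},
  \qquad
  f_{2n}(x) = 1 - e^{-x}\sum_{k=0}^{2n}\frac{x^{k}}{k!}.
\]
```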
An accurate analytic description of neutrino oscillations in matter
NASA Astrophysics Data System (ADS)
Akhmedov, E. Kh.; Niro, Viviana
2008-12-01
A simple closed-form analytic expression for the probability of two-flavour neutrino oscillations in matter with an arbitrary density profile is derived. Our formula is based on a perturbative expansion and allows an easy calculation of higher order corrections. The expansion parameter is small when the density changes relatively slowly along the neutrino path and/or neutrino energy is not very close to the Mikheyev-Smirnov-Wolfenstein (MSW) resonance energy. Our approximation is not equivalent to the adiabatic approximation and actually goes beyond it. We demonstrate the validity of our results using a few model density profiles, including the PREM density profile of the Earth. It is shown that by combining the results obtained from the expansions valid below and above the MSW resonance one can obtain a very good description of neutrino oscillations in matter in the entire energy range, including the resonance region.
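For orientation, the textbook constant-density two-flavour result is given below; it is the baseline that the perturbative expansion for arbitrary profiles generalizes, with the matter potential term A = 2√2 G_F N_e E.

```latex
% Constant-density two-flavour oscillation probability (textbook baseline, not the
% paper's perturbative formula), with A = 2\sqrt{2}\,G_F N_e E:
\[
  P_{\nu_e\to\nu_\mu} = \sin^2 2\theta_m \,\sin^2\!\frac{\Delta m_m^2 L}{4E},
  \qquad
  \sin^2 2\theta_m = \frac{\sin^2 2\theta}{\sin^2 2\theta + \left(\cos 2\theta - A/\Delta m^2\right)^2},
\]
\[
  \Delta m_m^2 = \Delta m^2 \sqrt{\sin^2 2\theta + \left(\cos 2\theta - A/\Delta m^2\right)^2}.
\]
```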
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
Analytic Steering: Inserting Context into the Information Dialog
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bohn, Shawn J.; Calapristi, Augustin J.; Brown, Shyretha D.
2011-10-23
An analyst’s intrinsic domain knowledge is a primary asset in almost any analysis task. Unstructured text analysis systems that apply unsupervised content analysis approaches can be more effective if they can leverage this domain knowledge in a manner that augments the information discovery process without obfuscating new or unexpected content. Current unsupervised approaches rely upon the prowess of the analyst to submit the right queries or observe generalized document and term relationships from ranked or visual results. We propose a new approach which allows the user to control or steer the analytic view within the unsupervised space. This process is controlled through the data characterization process via user supplied context in the form of a collection of key terms. We show that steering with an appropriate choice of key terms can provide better relevance to the analytic domain and still enable the analyst to uncover unexpected relationships; this paper discusses cases where various analytic steering approaches can provide enhanced analysis results and cases where analytic steering can have a negative impact on the analysis process.
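A minimal sketch of the steering idea, under the assumption that the data characterization is represented by a TF-IDF term matrix: analyst-supplied key terms are up-weighted before any downstream clustering or projection. The documents, terms and boost factors are hypothetical, and this is not the tool's actual implementation.

```python
# Hypothetical illustration of steering an unsupervised text view with analyst-supplied key terms.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["pipeline corrosion report", "corrosion of steel pipelines in soil",
        "quarterly financial summary", "budget and financial outlook"]
key_terms = {"corrosion": 3.0, "pipeline": 2.0}        # analyst-supplied context (assumed weights)

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()
for term, boost in key_terms.items():
    if term in vec.vocabulary_:
        X[:, vec.vocabulary_[term]] *= boost           # steer the characterization toward the analyst's domain
```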
Clustering in analytical chemistry.
Drab, Klaudia; Daszykowski, Michal
2014-01-01
Data clustering plays an important role in the exploratory analysis of analytical data, and the use of clustering methods has been acknowledged in different fields of science. In this paper, principles of data clustering are presented with a direct focus on clustering of analytical data. The role of the clustering process in the analytical workflow is underlined, and its potential impact on the analytical workflow is emphasized.
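As a minimal example of clustering applied to analytical data, the sketch below runs k-means on a small synthetic two-feature data set (two measured properties per sample); it is purely illustrative.

```python
# k-means on synthetic two-feature analytical data (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
samples = np.vstack([rng.normal([1.0, 2.0], 0.1, size=(20, 2)),
                     rng.normal([3.0, 0.5], 0.1, size=(20, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(samples)
print(labels)
```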
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data are essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. And service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
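A toy illustration of the MapReduce pattern the framework relies on, reduced to computing a per-grid-cell mean from (cell, value) records; it stands in for, and does not reproduce, the actual Hadoop/HBase implementation.

```python
# Toy MapReduce-style aggregation over gridded geoscience records.
from collections import defaultdict

records = [((40, -105), 271.3), ((40, -105), 272.1), ((41, -104), 268.9)]  # ((lat, lon), value)

def mapper(record):
    cell, value = record
    return cell, (value, 1)

def reducer(pairs):
    sums = defaultdict(lambda: [0.0, 0])
    for cell, (v, n) in pairs:
        sums[cell][0] += v
        sums[cell][1] += n
    return {cell: s / n for cell, (s, n) in sums.items()}

print(reducer(map(mapper, records)))    # mean value per grid cell
```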
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Ziv, Yair
2015-01-01
We first demonstrated analytic processing in ASD under conditions in which integral processing seems mandatory in TD observers, a pattern that is often taken to indicate a local default processing in ASD. However, this processing bias does not inevitably come at the price of impaired integration skills. Indeed, examining the same group of…
Equivalent reduced model technique development for nonlinear system dynamic response
NASA Astrophysics Data System (ADS)
Thibault, Louis; Avitabile, Peter; Foley, Jason; Wolfson, Janet
2013-04-01
The dynamic response of structural systems commonly involves nonlinear effects. Often times, structural systems are made up of several components, whose individual behavior is essentially linear compared to the total assembled system. However, the assembly of linear components using highly nonlinear connection elements or contact regions causes the entire system to become nonlinear. Conventional transient nonlinear integration of the equations of motion can be extremely computationally intensive, especially when the finite element models describing the components are very large and detailed. In this work, the equivalent reduced model technique (ERMT) is developed to address complicated nonlinear contact problems. ERMT utilizes a highly accurate model reduction scheme, the System equivalent reduction expansion process (SEREP). Extremely reduced order models that provide dynamic characteristics of linear components, which are interconnected with highly nonlinear connection elements, are formulated with SEREP for the dynamic response evaluation using direct integration techniques. The full-space solution will be compared to the response obtained using drastically reduced models to make evident the usefulness of the technique for a variety of analytical cases.
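A minimal sketch of the SEREP reduction underlying the ERMT, with random placeholder matrices rather than a real component model: the retained (active) degrees of freedom are mapped to the full space through the modal matrix, and the mass and stiffness matrices are reduced accordingly.

```python
# SEREP sketch: x_full ~= T x_active with T = Phi * pinv(Phi_active).
# Matrices are random placeholders, not a real finite element model.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_full, n_active, n_modes = 20, 6, 4
M = np.diag(rng.uniform(1, 2, n_full))                 # placeholder mass matrix
K = rng.uniform(0, 1, (n_full, n_full)); K = K @ K.T   # placeholder symmetric stiffness matrix

eigvals, Phi = eigh(K, M)                              # generalized eigenproblem for the full model
Phi = Phi[:, :n_modes]                                 # retain a few target modes
active = np.arange(n_active)                           # indices of retained (active) DOF

T = Phi @ np.linalg.pinv(Phi[active, :])               # SEREP transformation matrix
M_r = T.T @ M @ T                                      # reduced mass matrix
K_r = T.T @ K @ T                                      # reduced stiffness matrix
```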
The diminishing role of hubs in dynamical processes on complex networks.
Quax, Rick; Apolloni, Andrea; Sloot, Peter M A
2013-11-06
It is notoriously difficult to predict the behaviour of a complex self-organizing system, where the interactions among dynamical units form a heterogeneous topology. Even if the dynamics of each microscopic unit is known, a real understanding of their contributions to the macroscopic system behaviour is still lacking. Here, we develop information-theoretical methods to distinguish the contribution of each individual unit to the collective out-of-equilibrium dynamics. We show that for a system of units connected by a network of interaction potentials with an arbitrary degree distribution, highly connected units have less impact on the system dynamics when compared with intermediately connected units. In an equilibrium setting, the hubs are often found to dictate the long-term behaviour. However, we find both analytically and experimentally that the instantaneous states of these units have a short-lasting effect on the state trajectory of the entire system. We present qualitative evidence of this phenomenon from empirical findings about a social network of product recommendations, a protein-protein interaction network and a neural network, suggesting that it might indeed be a widespread property in nature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharkov, B. B.; Chizhik, V. I.; Dvinskikh, S. V., E-mail: sergeid@kth.se
2016-01-21
Dipolar recoupling is an essential part of current solid-state NMR methodology for probing atomic-resolution structure and dynamics in solids and soft matter. The recently described magic-echo amplitude- and phase-modulated cross-polarization heteronuclear recoupling strategy aims at efficient and robust recoupling in the entire range of coupling constants both in rigid and highly dynamic molecules. In the present study, the properties of this recoupling technique are investigated by theoretical analysis, spin-dynamics simulation, and experimentally. The resonance conditions and the efficiency of suppressing the rf field errors are examined and compared to those for other recoupling sequences based on similar principles. The experimental data obtained in a variety of rigid and soft solids illustrate the scope of the method and corroborate the results of analytical and numerical calculations. The technique benefits from the dipolar resolution over a wider range of coupling constants compared to that in other state-of-the-art methods and thus is advantageous in studies of complex solids with a broad range of dynamic processes and molecular mobility degrees.
Quantitative determinations using portable Raman spectroscopy.
Navin, Chelliah V; Tondepu, Chaitanya; Toth, Roxana; Lawson, Latevi S; Rodriguez, Jason D
2017-03-20
A portable Raman spectrometer was used to develop chemometric models to determine percent (%) drug release and potency for 500 mg ciprofloxacin HCl tablets. Parallel dissolution and chromatographic experiments were conducted alongside Raman experiments to assess and compare the performance and capabilities of portable Raman instruments in determining critical drug attributes. All batches tested passed the 30 min dissolution specification and the Raman model for drug release was able to essentially reproduce the dissolution profiles obtained by ultraviolet spectroscopy at 276 nm for all five batches of the 500 mg ciprofloxacin tablets. The five batches of 500 mg ciprofloxacin tablets also passed the potency (assay) specification, and the % label claim values for the entire set of tablets run were nearly identical, 99.4±5.1 for the portable Raman method and 99.2±1.2 for the chromatographic method. The results indicate that portable Raman spectrometers can be used to perform quantitative analysis of critical product attributes of finished drug products. The findings of this study indicate that portable Raman may have applications in the areas of process analytical technology and rapid pharmaceutical surveillance. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Nešić, K.; Stojanović, D.; Baltić, Ž. M.
2017-09-01
Authenticity of food is an issue that is growing in awareness and concern. Although food adulteration has been present since antiquity, it has broadened to include entire global populations as modern food supply chains have expanded, enriched and become more complex. Different forms of adulteration influence not only the quality of food products, but also may cause harmful health effects. Meat and meat products are often subjected to counterfeiting, mislabelling and similar fraudulent activities, while substitutions of meat ingredients with other animal species is one among many forms of food fraud. Feed is also subject to testing for the presence of different animal species, but as part of the eradication process of transmissible spongiform encephalopathies (TSE). In both food and feed cases, the final goal is consumer protection, which should be provided by quick, precise and specific tools. Several analytical tests have been employed for such needs. This paper provides an overview of authentication of meat and meat products compared with species identification in feed control, highlighting the most prevalent laboratory methods.
Photoacoustic detection of CO2 based on LABVIEW at 10.303 μm.
Zhao, Junjuan; Zhao, Zhan; Du, Lidong; Geng, Daoqu; Wu, Shaohua
2011-04-01
A detailed study of a sound-card-based, virtual-instrument photoacoustic carbon dioxide detection system is presented in this paper. In this system, the CO2 concentration was measured with the non-resonant photoacoustic cell technique by measuring the photoacoustic signal caused by the CO2. In order to recover small photoacoustic signals buried in noise, measurement software was designed in LabVIEW. It provides lock-in amplifier, digital filter and signal generator functions, can also be used for spectrum analysis and signal recovery, and offers powerful data processing and communication with other measuring instruments. The test results show that the entire system has outstanding measuring performance, with a sensitivity of 10 μV between 10-44 kHz. The non-resonant test of the trace gas analyte CO2 conducted at 100 Hz demonstrated large signals (15.89 mV) for CO2 concentrations at 600 ppm and high signal-to-noise values (∼85:1). © 2011 American Institute of Physics.
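The lock-in amplifier function mentioned above can be illustrated in a few lines: multiply the noisy photoacoustic signal by in-phase and quadrature references at the modulation frequency and average the products. The signal parameters below are invented for illustration and do not reproduce the LabVIEW program.

```python
# Minimal digital lock-in demodulation of a noisy photoacoustic signal.
import numpy as np

fs, f_mod, T = 50_000, 100.0, 2.0                  # sample rate (Hz), modulation (Hz), duration (s)
t = np.arange(int(fs * T)) / fs
signal = 15.89e-3 * np.sin(2 * np.pi * f_mod * t + 0.3) + np.random.normal(0, 5e-3, t.size)

ref_i = np.sin(2 * np.pi * f_mod * t)
ref_q = np.cos(2 * np.pi * f_mod * t)
X = 2 * np.mean(signal * ref_i)                    # in-phase component
Y = 2 * np.mean(signal * ref_q)                    # quadrature component
amplitude = np.hypot(X, Y)                         # recovered photoacoustic amplitude (V)
print(amplitude)
```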
Electro-thermal modelling of anode and cathode in micro-EDM
NASA Astrophysics Data System (ADS)
Yeo, S. H.; Kurnia, W.; Tan, P. C.
2007-04-01
Micro-electrical discharge machining is an evolution of conventional EDM used for fabricating three-dimensional complex micro-components and microstructure with high precision capabilities. However, due to the stochastic nature of the process, it has not been fully understood. This paper proposes an analytical model based on electro-thermal theory to estimate the geometrical dimensions of micro-crater. The model incorporates voltage, current and pulse-on-time during material removal to predict the temperature distribution on the workpiece as a result of single discharges in micro-EDM. It is assumed that the entire superheated area is ejected from the workpiece surface while only a small fraction of the molten area is expelled. For verification purposes, single discharge experiments using RC pulse generator are performed with pure tungsten as the electrode and AISI 4140 alloy steel as the workpiece. For the pulse-on-time range up to 1000 ns, the experimental and theoretical results are found to be in close agreement with average volume approximation errors of 2.7% and 6.6% for the anode and cathode, respectively.
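Electro-thermal single-discharge models of this kind are commonly built on classical point-source heat conduction. As a hedged reference, a standard continuous point-source solution for a semi-infinite body is given below (not necessarily the exact formulation of this paper); Q is the fraction F_c of the discharge power UI entering the electrode, k the thermal conductivity and α the thermal diffusivity.

```latex
% Standard continuous point-source conduction solution on a semi-infinite body
% (generic reference form for electro-thermal single-discharge models):
\[
  T(r,t) \;=\; \frac{Q}{2\pi k\, r}\,
  \operatorname{erfc}\!\left(\frac{r}{2\sqrt{\alpha t}}\right),
  \qquad Q = F_c\, U I .
\]
```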
Analytical workflow profiling gene expression in murine macrophages
Nixon, Scott E.; González-Peña, Dianelys; Lawson, Marcus A.; McCusker, Robert H.; Hernandez, Alvaro G.; O’Connor, Jason C.; Dantzer, Robert; Kelley, Keith W.
2015-01-01
Comprehensive and simultaneous analysis of all genes in a biological sample is a capability of RNA-Seq technology. Analysis of the entire transcriptome benefits from summarization of genes at the functional level. As a cellular response of interest not previously explored with RNA-Seq, peritoneal macrophages from mice under two conditions (control and immunologically challenged) were analyzed for gene expression differences. Quantification of individual transcripts modeled RNA-Seq read distribution and uncertainty (using a Beta Negative Binomial distribution), then tested for differential transcript expression (False Discovery Rate-adjusted p-value < 0.05). Enrichment of functional categories utilized the list of differentially expressed genes. A total of 2079 differentially expressed transcripts representing 1884 genes were detected. Enrichment of 92 categories from Gene Ontology Biological Processes and Molecular Functions, and KEGG pathways were grouped into 6 clusters. Clusters included defense and inflammatory response (Enrichment Score = 11.24) and ribosomal activity (Enrichment Score = 17.89). Our work provides a context to the fine detail of individual gene expression differences in murine peritoneal macrophages during immunological challenge with high throughput RNA-Seq. PMID:25708305
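The "FDR-adjusted p-value < 0.05" criterion corresponds to a multiple-testing correction such as Benjamini-Hochberg. The sketch below shows that adjustment on a handful of made-up p-values; the study's pipeline models the read counts (Beta Negative Binomial) before this step.

```python
# Benjamini-Hochberg FDR adjustment of per-transcript p-values (illustrative).
import numpy as np

def bh_adjust(pvals):
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity from the largest p down
    adjusted = np.empty(n)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

pvals = [0.0001, 0.003, 0.04, 0.20, 0.47]
print(bh_adjust(pvals))      # transcripts with adjusted p < 0.05 would be called differentially expressed
```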
A reduced-dimensional model for near-wall transport in cardiovascular flows
Hansen, Kirk B.
2015-01-01
Near-wall mass transport plays an important role in many cardiovascular processes, including the initiation of atherosclerosis, endothelial cell vasoregulation, and thrombogenesis. These problems are characterized by large Péclet and Schmidt numbers as well as a wide range of spatial and temporal scales, all of which impose computational difficulties. In this work, we develop an analytical relationship between the flow field and near-wall mass transport for high-Schmidt-number flows. This allows for the development of a wall-shear-stress-driven transport equation that lies on a codimension-one vessel-wall surface, significantly reducing computational cost in solving the transport problem. Separate versions of this equation are developed for the reaction-rate-limited and transport-limited cases, and numerical results in an idealized abdominal aortic aneurysm are compared to those obtained by solving the full transport equations over the entire domain. The reaction-rate-limited model matches the expected results well. The transport-limited model is accurate in the developed flow regions, but overpredicts wall flux at entry regions and reattachment points in the flow. PMID:26298313
Yang, S A
2002-10-01
This paper presents an effective solution method for predicting acoustic radiation and scattering fields in two dimensions. The difficulty of the fictitious characteristic frequency is overcome by incorporating an auxiliary interior surface that satisfies certain boundary condition into the body surface. This process gives rise to a set of uniquely solvable boundary integral equations. Distributing monopoles with unknown strengths over the body and interior surfaces yields the simple source formulation. The modified boundary integral equations are further transformed to ordinary ones that contain nonsingular kernels only. This implementation allows direct application of standard quadrature formulas over the entire integration domain; that is, the collocation points are exactly the positions at which the integration points are located. Selecting the interior surface is an easy task. Moreover, only a few corresponding interior nodal points are sufficient for the computation. Numerical calculations consist of the acoustic radiation and scattering by acoustically hard elliptic and rectangular cylinders. Comparisons with analytical solutions are made. Numerical results demonstrate the efficiency and accuracy of the current solution method.
Predictive Analytics to Support Real-Time Management in Pathology Facilities.
Lessard, Lysanne; Michalowski, Wojtek; Chen Li, Wei; Amyot, Daniel; Halwani, Fawaz; Banerjee, Diponkar
2016-01-01
Predictive analytics can provide valuable support to the effective management of pathology facilities. The introduction of new tests and technologies in anatomical pathology will increase the volume of specimens to be processed, as well as the complexity of pathology processes. In order for predictive analytics to address managerial challenges associated with the volume and complexity increases, it is important to pinpoint the areas where pathology managers would most benefit from predictive capabilities. We illustrate common issues in managing pathology facilities with an analysis of the surgical specimen process at the Department of Pathology and Laboratory Medicine (DPLM) at The Ottawa Hospital, which processes all surgical specimens for the Eastern Ontario Regional Laboratory Association. We then show how predictive analytics could be used to support management. Our proposed approach can be generalized beyond the DPLM, contributing to a more effective management of pathology facilities and in turn to quicker clinical diagnoses.
A Fuzzy-Based Decision Support Model for Selecting the Best Dialyser Flux in Haemodialysis.
Oztürk, Necla; Tozan, Hakan
2015-01-01
Decision making is an important procedure for every organization. The procedure is particularly challenging for complicated multi-criteria problems. Selection of dialyser flux is one of the decisions routinely made for haemodialysis treatment provided for chronic kidney failure patients. This study provides a decision support model for selecting the best dialyser flux between high-flux and low-flux dialyser alternatives. The preferences of decision makers were collected via a questionnaire. A total of 45 questionnaires completed by dialysis physicians and nephrologists were assessed. A hybrid fuzzy-based decision support software that enables the use of Analytic Hierarchy Process (AHP), Fuzzy Analytic Hierarchy Process (FAHP), Analytic Network Process (ANP), and Fuzzy Analytic Network Process (FANP) was used to evaluate the flux selection model. In conclusion, the results showed that a high-flux dialyser is the best option for haemodialysis treatment.
Cima, Robert R; Brown, Michael J; Hebl, James R; Moore, Robin; Rogers, James C; Kollengode, Anantha; Amstutz, Gwendolyn J; Weisbrod, Cheryl A; Narr, Bradly J; Deschamps, Claude
2011-07-01
Operating rooms (ORs) are resource-intense and costly hospital units. Maximizing OR efficiency is essential to maintaining an economically viable institution. OR efficiency projects often focus on a limited number of ORs or cases. Efforts across an entire OR suite have not been reported. Lean and Six Sigma methodologies were developed in the manufacturing industry to increase efficiency by eliminating non-value-added steps. We applied Lean and Six Sigma methodologies across an entire surgical suite to improve efficiency. A multidisciplinary surgical process improvement team constructed a value stream map of the entire surgical process from the decision for surgery to discharge. Each process step was analyzed in 3 domains, ie, personnel, information processed, and time. Multidisciplinary teams addressed 5 work streams to increase value at each step: minimizing volume variation; streamlining the preoperative process; reducing nonoperative time; eliminating redundant information; and promoting employee engagement. Process improvements were implemented sequentially in surgical specialties. Key performance metrics were collected before and after implementation. Across 3 surgical specialties, process redesign resulted in substantial improvements in on-time starts and reduction in number of cases past 5 pm. Substantial gains were achieved in nonoperative time, staff overtime, and ORs saved. These changes resulted in substantial increases in margin/OR/day. Use of Lean and Six Sigma methodologies increased OR efficiency and financial performance across an entire operating suite. Process mapping, leadership support, staff engagement, and sharing performance metrics are keys to enhancing OR efficiency. The performance gains were substantial, sustainable, positive financially, and transferrable to other specialties. Copyright © 2011 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
GeoMelt® ICV™ Treatment of Sellafield Pond Solids Waste - 13414
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witwer, Keith; Woosley, Steve; Campbell, Brett
2013-07-01
Kurion, Inc., in partnership with AMEC Ltd., is demonstrating its GeoMelt® In-Container Vitrification (ICV)™ Technology to Sellafield Ltd. (SL). SL is evaluating the proposition of directly converting a container (skip/box/drum) of raw solid ILW into an immobilized waste form using thermal treatment, such that the resulting product is suitable for interim storage at Sellafield and subsequent disposal at a future Geological Disposal Facility. Potential SL feed streams include sludges, ion-exchange media, sand, plutonium contaminated material, concrete, uranium, fuel cladding, soils, metals, and decommissioning wastes. The solid wastes have significant proportions of metallic constituents in the form of containers, plant equipment, structural material and swarf arising from the nuclear operations at Sellafield. GeoMelt's proprietary ICV process was selected for demonstration, with the focus being high and reactive metal wastes arising from solid ILW material. A composite surrogate recipe was used to demonstrate the technology towards treating waste forms of diverse types and shapes, as well as those considered difficult to process; all the while requiring few (if any) pre-treatment activities. Key strategic objectives, along with their success criterion, were established by SL for this testing, namely: 1. Passivate and stabilize the raw waste simulant, as demonstrated by the entire quantity of material being vitrified, 2. Immobilize the radiological and chemo-toxic species, as demonstrated via indicative mass balance using elemental analyses from an array of samples, 3. Production of an inert and durable product as evidenced by transformation of reactive metals to their inert oxide forms and satisfactory leachability results using PCT testing. Two tests were performed using the GeoMelt Demonstration Unit located at AMEC's Birchwood Park Facilities in the UK. Post-melt examination of the first test indicated some of the waste simulant had not fully processed, due to insufficient processing time and melt temperature. A second test, incorporating operational experience from the first test, was performed and resulted in all of the 138 kg of feed material being treated. The waste simulant portion, at 41 kg, constituted 30 wt% of the total feed mass, with over 90% of this being made up of various reactive and non-reactive metals. The 95 liters of staged material was volume reduced to 41 liters, providing a 57% overall feed to product volume reduction in a fully passivated two-phase glass/metal product. The GeoMelt equipment operated as designed, vitrifying the entire batch of waste simulant. Post-melt analytical testing verified that 91-99+% of the radiological tracer metals were uniformly distributed within the glass/cast refractory/metal product, and the remaining fraction was captured in the offgas filtration systems. PCT testing of the glass and inner refractory liner showed leachability results that outperform the DOE regulatory limit of 2 g/m² for the radiological species of interest (Sr, Ru, Cs, Eu, Re), and by more than an order of magnitude better for standard reference analytes (B, Na, Si). (authors)
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
The MSCA Program: Developing Analytic Unicorns
ERIC Educational Resources Information Center
Houghton, David M.; Schertzer, Clint; Beck, Scott
2018-01-01
Marketing analytics students who can communicate effectively with decision makers are in high demand. These "analytic unicorns" are hard to find. The Master of Science in Customer Analytics (MSCA) degree program at Xavier University seeks to fill that need. In this paper, we discuss the process of creating the MSCA program. We outline…
Process monitoring and visualization solutions for hot-melt extrusion: a review.
Saerens, Lien; Vervaet, Chris; Remon, Jean Paul; De Beer, Thomas
2014-02-01
Hot-melt extrusion (HME) is applied as a continuous pharmaceutical manufacturing process for the production of a variety of dosage forms and formulations. To ensure the continuity of this process, the quality of the extrudates must be assessed continuously during manufacturing. The objective of this review is to provide an overview and evaluation of the available process analytical techniques which can be applied in hot-melt extrusion. Pharmaceutical extruders are equipped with traditional (univariate) process monitoring tools, observing barrel and die temperatures, throughput, screw speed, torque, drive amperage, melt pressure and melt temperature. The relevance of several spectroscopic process analytical techniques for monitoring and control of pharmaceutical HME has been explored recently. Nevertheless, many other sensors visualizing HME and measuring diverse critical product and process parameters with potential use in pharmaceutical extrusion are available, and were thoroughly studied in polymer extrusion. The implementation of process analytical tools in HME serves two purposes: (1) improving process understanding by monitoring and visualizing the material behaviour and (2) monitoring and analysing critical product and process parameters for process control, making it possible to maintain a desired process state and guarantee the quality of the end product. This review is the first to provide an evaluation of the process analytical tools applied for pharmaceutical HME monitoring and control, and discusses techniques that have been used in polymer extrusion and have potential for monitoring and control of pharmaceutical HME. © 2013 Royal Pharmaceutical Society.
AERIAL SHOWING COMPLETED REMOTE ANALYTICAL FACILITY (CPP627) ADJOINING FUEL PROCESSING ...
AERIAL SHOWING COMPLETED REMOTE ANALYTICAL FACILITY (CPP-627) ADJOINING FUEL PROCESSING BUILDING AND EXCAVATION FOR HOT PILOT PLANT TO RIGHT (CPP-640). INL PHOTO NUMBER NRTS-60-1221. J. Anderson, Photographer, 3/22/1960 - Idaho National Engineering Laboratory, Idaho Chemical Processing Plant, Fuel Reprocessing Complex, Scoville, Butte County, ID
40 CFR 63.145 - Process wastewater provisions-test methods and procedures to determine compliance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Process wastewater provisions-test... Operations, and Wastewater § 63.145 Process wastewater provisions—test methods and procedures to determine... analytical method for wastewater which has that compound as a target analyte. (7) Treatment using a series of...
40 CFR 63.145 - Process wastewater provisions-test methods and procedures to determine compliance.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 10 2013-07-01 2013-07-01 false Process wastewater provisions-test... Operations, and Wastewater § 63.145 Process wastewater provisions—test methods and procedures to determine... analytical method for wastewater which has that compound as a target analyte. (7) Treatment using a series of...
Sigma Metrics Across the Total Testing Process.
Charuruks, Navapun
2017-03-01
Laboratory quality control has been developed for several decades to ensure patients' safety, from a statistical quality control focus on the analytical phase to total laboratory processes. The sigma concept provides a convenient way to quantify the number of errors in the extra-analytical and analytical phases through the defect-per-million rate and the sigma metric equation. Participation in a sigma verification program can be a convenient way to monitor analytical performance for continuous quality improvement. Improvement of sigma-scale performance has been demonstrated in our data. New tools and techniques for integration are needed. Copyright © 2016 Elsevier Inc. All rights reserved.
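As a hedged illustration of the sigma concept mentioned above, the sketch below shows two common calculations: the analytical-phase sigma from total allowable error, bias and imprecision, and the conversion of a defect-per-million rate into a sigma level. The 1.5-sigma long-term shift is the usual industrial convention, assumed here rather than taken from the article.

```python
# Sketch of two common sigma-metric calculations (conventions assumed,
# not taken from the article).
from statistics import NormalDist

def analytical_sigma(tea_pct, bias_pct, cv_pct):
    """Analytical-phase sigma: (TEa - |bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def sigma_from_dpm(dpm, shift=1.5):
    """Convert defects-per-million to a sigma level.
    The 1.5-sigma long-term shift is the usual industrial convention."""
    return NormalDist().inv_cdf(1 - dpm / 1e6) + shift

print(analytical_sigma(tea_pct=10.0, bias_pct=1.5, cv_pct=2.0))  # -> 4.25
print(round(sigma_from_dpm(3.4), 2))                             # ~ 6.0
```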
Analytical Ultrasonics in Materials Research and Testing
NASA Technical Reports Server (NTRS)
Vary, A.
1986-01-01
Research results in analytical ultrasonics for characterizing structural materials from metals and ceramics to composites are presented. General topics covered by the conference included: status and advances in analytical ultrasonics for characterizing material microstructures and mechanical properties; status and prospects for ultrasonic measurements of microdamage, degradation, and underlying morphological factors; status and problems in precision measurements of frequency-dependent velocity and attenuation for materials analysis; procedures and requirements for automated, digital signal acquisition, processing, analysis, and interpretation; incentives for analytical ultrasonics in materials research and materials processing, testing, and inspection; and examples of progress in ultrasonics for interrelating microstructure, mechanical properties, and dynamic response.
Converting customer expectations into achievable results.
Landis, G A
1999-11-01
It is not enough in today's environment to just meet customers' expectations--we must exceed them. Therefore, one must learn what constitutes those expectations. These needs have expanded during the past few years beyond just manufacturing the product and looking at the outcome from a provincial standpoint. Now we must understand and satisfy the entire supply chain. Managing this process and satisfying the customer now involves the supplier, the manufacturer, and the entire distribution system.
What makes us think? A three-stage dual-process model of analytic engagement.
Pennycook, Gordon; Fugelsang, Jonathan A; Koehler, Derek J
2015-08-01
The distinction between intuitive and analytic thinking is common in psychology. However, while often being quite clear on the characteristics of the two processes ('Type 1' processes are fast, autonomous, intuitive, etc. and 'Type 2' processes are slow, deliberative, analytic, etc.), dual-process theorists have been heavily criticized for being unclear on the factors that determine when an individual will think analytically or rely on their intuition. We address this issue by introducing a three-stage model that elucidates the bottom-up factors that cause individuals to engage Type 2 processing. According to the model, multiple Type 1 processes may be cued by a stimulus (Stage 1), leading to the potential for conflict detection (Stage 2). If successful, conflict detection leads to Type 2 processing (Stage 3), which may take the form of rationalization (i.e., the Type 1 output is verified post hoc) or decoupling (i.e., the Type 1 output is falsified). We tested key aspects of the model using a novel base-rate task where stereotypes and base-rate probabilities cued the same (non-conflict problems) or different (conflict problems) responses about group membership. Our results support two key predictions derived from the model: (1) conflict detection and decoupling are dissociable sources of Type 2 processing and (2) conflict detection sometimes fails. We argue that considering the potential stages of reasoning allows us to distinguish early (conflict detection) and late (decoupling) sources of analytic thought. Errors may occur at both stages and, as a consequence, bias arises from both conflict monitoring and decoupling failures. Copyright © 2015 Elsevier Inc. All rights reserved.
Multi-analyte profiling of inflammatory mediators in COPD sputum--the effects of processing.
Pedersen, Frauke; Holz, Olaf; Lauer, Gereon; Quintini, Gianluca; Kiwull-Schöne, Heidrun; Kirsten, Anne-Marie; Magnussen, Helgo; Rabe, Klaus F; Goldmann, Torsten; Watz, Henrik
2015-02-01
Prior to using a new multi-analyte platform for the detection of markers in sputum it is advisable to assess whether sputum processing, especially mucus homogenization by dithiothreitol (DTT), affects the analysis. In this study we tested a novel Human Inflammation Multi Analyte Profiling® Kit (v1.0 Luminex platform; xMAP®). Induced sputum samples of 20 patients with stable COPD (mean FEV1, 59.2% pred.) were processed in parallel using standard processing (with DTT) and a more time consuming sputum dispersion method with phosphate buffered saline (PBS) only. A panel of 47 markers was analyzed in these sputum supernatants by the xMAP®. Twenty-five of 47 analytes have been detected in COPD sputum. Interestingly, 7 markers have been detected in sputum processed with DTT only, or significantly higher levels were observed following DTT treatment (VDBP, α-2-Macroglobulin, haptoglobin, α-1-antitrypsin, VCAM-1, and fibrinogen). However, standard DTT-processing resulted in lower detectable concentrations of ferritin, TIMP-1, MCP-1, MIP-1β, ICAM-1, and complement C3. The correlation between processing methods for the different markers indicates that DTT processing does not introduce a bias by affecting individual sputum samples differently. In conclusion, our data demonstrates that the Luminex-based xMAP® panel can be used for multi-analyte profiling of COPD sputum using the routinely applied method of sputum processing with DTT. However, researchers need to be aware that the absolute concentration of selected inflammatory markers can be affected by DTT. Copyright © 2014 Elsevier Ltd. All rights reserved.
Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.
Corcoran, Timothy C
2018-03-01
In the chemometric context in which spectral loadings of the analytes are already known, spectral filter functions may be constructed which allow the scores of mixtures of analytes to be determined in on-the-fly fashion directly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function may be applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions using binary fourfold linear combinations of Walsh functions is detailed using mathematics borrowed from genetic algorithm work, as a means of optimizing said functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, proving the linearity of the method.
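The following sketch illustrates the compressive-detection idea in simplified form: binary filters built from Walsh-Hadamard rows measure a mixture spectrum, and the mixture scores are recovered by least squares from the known loadings. The synthetic Gaussian loadings and the use of plain Walsh rows (rather than the GA-optimized fourfold combinations described in the paper) are assumptions for illustration only.

```python
# Compressive-detection sketch: recover mixture scores from a few binary
# filtered measurements instead of the full spectrum. Synthetic Gaussian
# loadings stand in for the paper's highly overlapped Raman loadings.
import numpy as np
from scipy.linalg import hadamard

n = 64                                    # spectral channels
x = np.arange(n)
gauss = lambda c, w: np.exp(-0.5 * ((x - c) / w) ** 2)
L = np.stack([gauss(20, 6), gauss(26, 6), gauss(34, 8), gauss(40, 6)])  # 4 overlapped loadings

true_scores = np.array([1.0, 0.5, 2.0, 0.8])
spectrum = true_scores @ L + 0.01 * np.random.randn(n)

# Binary filter functions: 0/1 versions of a few Walsh-Hadamard rows.
H = hadamard(n)
F = (H[1:9] + 1) // 2                     # 8 filters, values in {0, 1}

m = F @ spectrum                          # measurements recorded "in the spectrometer"
# With the loadings known, the scores follow by least squares on the reduced system.
scores, *_ = np.linalg.lstsq(F @ L.T, m, rcond=None)
print(true_scores, scores.round(2))
```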
Integrated Results from Analysis of the Rocknest Aeolian Deposit by the Curiosity Rover
NASA Technical Reports Server (NTRS)
Leshin, L. A.; Grotzinger, J. P.; Blake, D. F.; Edgett, K. S.; Gellert, R.; Mahaffy, P. R.; Malin, M. C.; Wiens, R. C.; Treiman, A. H.; Ming, D. W.;
2013-01-01
The Mars Science Laboratory Curiosity rover spent 45 sols (from sol 56-101) at an area called Rocknest (Fig. 1), characterizing local geology and ingesting its aeolian fines into the analytical instruments CheMin and SAM for mineralogical and chemical analysis. Many abstracts at this meeting present the contextual information and detailed data on these first solid samples analyzed in detail by Curiosity at Rocknest. Here, we present an integrated view of the results from Rocknest - the general agreement from discussions among the entire MSL Science Team.
Explicit robust schemes for implementation of general principal value-based constitutive models
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.
1993-01-01
The issue of developing effective and robust schemes to implement general hyperelastic constitutive models is addressed. To this end, special purpose functions are used to symbolically derive, evaluate, and automatically generate the associated FORTRAN code for the explicit forms of the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid for the entire deformation range. The analytical form of these explicit expressions is given here for the case in which the strain-energy potential is taken as a nonseparable polynomial function of the principal stretches.
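A minimal sketch of the symbolic workflow described above, assuming a Mooney-Rivlin-type polynomial potential and using SymPy rather than the authors' special-purpose functions: differentiate the strain-energy potential with respect to the principal stretches and emit Fortran code for the resulting stress expressions (compressibility and pressure terms omitted).

```python
# Sketch: symbolic differentiation of a polynomial strain-energy potential
# of the principal stretches, followed by Fortran code generation.
# Illustrates the idea only; not the authors' special-purpose tools.
import sympy as sp

l1, l2, l3 = sp.symbols('lambda1 lambda2 lambda3', positive=True)
c10, c01 = sp.symbols('c10 c01')

I1 = l1**2 + l2**2 + l3**2
I2 = l1**2*l2**2 + l2**2*l3**2 + l3**2*l1**2
W = c10*(I1 - 3) + c01*(I2 - 3)          # Mooney-Rivlin-type polynomial potential

# Principal Kirchhoff-type stresses from lambda_i * dW/dlambda_i
# (incompressibility/pressure terms omitted in this sketch).
tau = [sp.simplify(li * sp.diff(W, li)) for li in (l1, l2, l3)]

# Automatic Fortran code generation for one explicit stress expression.
print(sp.fcode(tau[0], assign_to='tau1'))
```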
Architecture for Business Intelligence in the Healthcare Sector
NASA Astrophysics Data System (ADS)
Lee, Sang Young
2018-03-01
The healthcare environment is growing to include not only traditional information systems but also business intelligence platforms. For executive leaders, consultants, and analysts, there is no longer a need to spend hours designing and developing typical reports or charts; the entire solution can be completed using business intelligence software. The current paper highlights the advantages of big data analytics and business intelligence in the healthcare industry, focusing on intelligent techniques and methodologies recently used for business intelligence in healthcare.
Vela X-1 pulse timing. II - Variations in pulse frequency
NASA Technical Reports Server (NTRS)
Deeter, J. E.; Boynton, P. E.; Lamb, F. K.; Zylstra, G.
1989-01-01
The pulsed X-ray emission of Vela X-1 during May 1978 and December-January 1978-1979 is investigated analytically on the basis of published satellite observations. The data are compiled in tables and graphs and discussed in detail, with reference to data for the entire 1975-1982 period. Variations in pulse frequency are identified on time scales from 2 to 2600 days; the lower nine octaves are characterized as white noise (or random walk in pulse frequency), while the longer-period variations are attributed to changes in neutron-star rotation rates.
Study on finned pipe performance as a ground heat exchanger
NASA Astrophysics Data System (ADS)
Lin, Qinglong; Ma, Jinghui; Shi, Lei
2017-08-01
The GHE (ground heat exchanger) is an important element that determines the thermal efficiency of the entire ground-source heat-pump system. The aim of the present study is to clarify the thermal performance of a new type of GHE pipe, which consists of straight fins of uniform cross-sectional area. In this paper, a GHE model is introduced and an analytical model of the new pipe is developed. Based on simulation analyses, the heat exchange rate of BHEs (borehole heat exchangers) utilizing finned pipes is 40.42 W/m, which is 16.3% higher than that of normal BHEs.
ERIC Educational Resources Information Center
Wise, Alyssa Friend; Vytasek, Jovita Maria; Hausknecht, Simone; Zhao, Yuting
2016-01-01
This paper addresses a relatively unexplored area in the field of learning analytics: how analytics are taken up and used as part of teaching and learning processes. Initial steps are taken towards developing design knowledge for this "middle space," with a focus on students as analytics users. First, a core set of challenges for…
Dissociable meta-analytic brain networks contribute to coordinated emotional processing.
Riedel, Michael C; Yanes, Julio A; Ray, Kimberly L; Eickhoff, Simon B; Fox, Peter T; Sutherland, Matthew T; Laird, Angela R
2018-06-01
Meta-analytic techniques for mining the neuroimaging literature continue to exert an impact on our conceptualization of functional brain networks contributing to human emotion and cognition. Traditional theories regarding the neurobiological substrates contributing to affective processing are shifting from regional- towards more network-based heuristic frameworks. To elucidate differential brain network involvement linked to distinct aspects of emotion processing, we applied an emergent meta-analytic clustering approach to the extensive body of affective neuroimaging results archived in the BrainMap database. Specifically, we performed hierarchical clustering on the modeled activation maps from 1,747 experiments in the affective processing domain, resulting in five meta-analytic groupings of experiments demonstrating whole-brain recruitment. Behavioral inference analyses conducted for each of these groupings suggested dissociable networks supporting: (1) visual perception within primary and associative visual cortices, (2) auditory perception within primary auditory cortices, (3) attention to emotionally salient information within insular, anterior cingulate, and subcortical regions, (4) appraisal and prediction of emotional events within medial prefrontal and posterior cingulate cortices, and (5) induction of emotional responses within amygdala and fusiform gyri. These meta-analytic outcomes are consistent with a contemporary psychological model of affective processing in which emotionally salient information from perceived stimuli are integrated with previous experiences to engender a subjective affective response. This study highlights the utility of using emergent meta-analytic methods to inform and extend psychological theories and suggests that emotions are manifest as the eventual consequence of interactions between large-scale brain networks. © 2018 Wiley Periodicals, Inc.
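A minimal sketch of the meta-analytic clustering step, assuming synthetic stand-in vectors in place of the BrainMap modeled-activation maps and average-linkage agglomeration rather than the study's exact settings.

```python
# Hierarchical-clustering sketch on synthetic "modeled activation" vectors,
# standing in for the 1,747 BrainMap experiment maps used in the study.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_experiments, n_voxels = 60, 500
maps = rng.random((n_experiments, n_voxels))      # stand-in activation maps

# Correlation distance between experiments, average-linkage agglomeration.
d = pdist(maps, metric='correlation')
Z = linkage(d, method='average')
labels = fcluster(Z, t=5, criterion='maxclust')   # cut the tree into 5 groupings
print(np.bincount(labels)[1:])                    # experiments per grouping
```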
NASA Astrophysics Data System (ADS)
Stevens, Adam R. H.; Brown, Toby
2017-10-01
We combine the latest spectrally stacked data of 21-cm emission from the Arecibo Legacy Fast ALFA survey with an updated version of the Dark Sage semi-analytic model to investigate the relative contributions of secular and environmental astrophysical processes on shaping the H I fractions and quiescence of galaxies in the local Universe. We calibrate the model to match the observed mean H I fraction of all galaxies as a function of stellar mass. Without consideration of stellar feedback, disc instabilities and active galactic nuclei, we show how the slope and normalization of this relation would change significantly. We find Dark Sage can reproduce the relative impact that halo mass is observed to have on satellites' H I fractions and quiescent fraction. However, the model satellites are systematically gas-poor. We discuss how this could be affected by satellite-central cross-contamination from the group-finding algorithm applied to the observed galaxies, but that it is not the full story. From our results, we suggest the anticorrelation between satellites' H I fractions and host halo mass, seen at fixed stellar mass and fixed specific star formation rate, can be attributed almost entirely to ram-pressure stripping of cold gas. Meanwhile, stripping of hot gas from around the satellites drives the correlation of quiescent fraction with halo mass at fixed stellar mass. Further detail in the modelling of galaxy discs' centres is required to solidify this result, however. We contextualize our results with those from other semi-analytic models and hydrodynamic simulations.
Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas
2016-06-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.
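As a hedged example of the model-fitting application mentioned above, the sketch below compares candidate distributions fitted to a sample by AIC using scipy.stats; it is a stand-in for, not part of, the Distributome webapps.

```python
# Sketch of a model-fitting task of the kind the Distributome resources
# target, done here with scipy.stats rather than the Distributome tools.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.gamma(shape=2.5, scale=1.8, size=1000)     # "observed" sample

candidates = {'gamma': stats.gamma,
              'lognorm': stats.lognorm,
              'weibull_min': stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(data)                           # maximum-likelihood fit
    ll = np.sum(dist.logpdf(data, *params))
    aic = 2 * len(params) - 2 * ll                    # lower AIC = better fit
    print(f"{name:12s} AIC = {aic:8.1f}")
```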
NASA Astrophysics Data System (ADS)
Zolnai, Z.; Toporkov, M.; Volk, J.; Demchenko, D. O.; Okur, S.; Szabó, Z.; Özgür, Ü.; Morkoç, H.; Avrutin, V.; Kótai, E.
2015-02-01
The atomic composition with less than 1-2 atom% uncertainty was measured in ternary BeZnO and quaternary BeMgZnO alloys using a combination of nondestructive Rutherford backscattering spectrometry with a 1 MeV He+ analyzing ion beam and non-Rutherford elastic backscattering experiments with 2.53 MeV protons. An enhancement factor of 60 in the cross-section of Be for protons has been achieved to monitor Be atomic concentrations. Usually the quantitative analysis of BeZnO and BeMgZnO systems is challenging due to difficulties in finding appropriate experimental tools for the detection of the light Be element with satisfactory accuracy. As shown, the applied ion beam technique, supported by detailed simulation of the ion stopping, backscattering, and detection processes, allows quantitative depth profiling and compositional analysis of wurtzite BeZnO/ZnO/sapphire and BeMgZnO/ZnO/sapphire layer structures with low uncertainty for both Be and Mg. In addition, the excitonic bandgaps of the layers were deduced from optical transmittance measurements. To augment the measured compositions and bandgaps of BeO and MgO co-alloyed ZnO layers, hybrid density functional bandgap calculations were performed with varying Be and Mg contents. The theoretical vs. experimental bandgaps show a linear correlation over the entire bandgap range studied, from 3.26 eV to 4.62 eV. The analytical method employed should help facilitate bandgap engineering for potential applications, such as solar-blind UV photodetectors and heterostructures for UV emitters and intersubband devices.
Shneidman, Vitaly A
2009-10-28
A typical nucleation-growth process is considered: a system is quenched into a supersaturated state with a small critical radius r*(-) and is allowed to nucleate during a finite time interval t_n, after which the supersaturation is abruptly reduced to a fixed value with a larger critical radius r*(+). The size distribution of nucleated particles f(r,t) further evolves due to their deterministic growth and decay for r larger or smaller than r*(+), respectively. A general analytic expression for f(r,t) is obtained, and it is shown that after a large growth time t this distribution approaches an asymptotic shape determined by two dimensionless parameters, λ related to t_n, and Λ = r*(+)/r*(-). This shape is strongly asymmetric, with exponential and double-exponential cutoffs at small and large sizes, respectively, and with a broad near-flat top in the case of a long pulse. Conversely, for a short pulse the distribution acquires a distinct maximum at r = r_max(t) and approaches a universal shape exp[ζ − e^ζ], with ζ proportional to r − r_max, independent of the pulse duration. General asymptotic predictions are examined in terms of the Zeldovich-Frenkel nucleation model, where the entire transient behavior can be described in terms of the Lambert W function. Modifications for the Turnbull-Fisher model are also considered, and the analytics is compared with exact numerics. Results are expected to have direct application in the analysis of two-step annealing crystallization experiments, although other applications might be anticipated due to the universality of the nucleation pulse technique.
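A small numeric sketch of two elements of the result: the universal short-pulse shape exp[ζ − e^ζ] and the Lambert W function used for the Zeldovich-Frenkel transient. It only evaluates the stated expressions and is not the paper's derivation.

```python
# Numeric sketch of the universal short-pulse distribution shape
# f ~ exp(zeta - e^zeta), with zeta proportional to r - r_max, plus a call
# to the Lambert W function used for the Zeldovich-Frenkel transient.
import numpy as np
from scipy.special import lambertw

zeta = np.linspace(-6, 3, 200)
shape = np.exp(zeta - np.exp(zeta))            # strongly asymmetric universal shape
print("peak near zeta =", zeta[shape.argmax()])  # maximum close to zeta = 0

# Principal branch of the Lambert W function, defined by W(x) * exp(W(x)) = x.
x = 2.0
w = lambertw(x).real
print(w, w * np.exp(w))                        # second value reproduces x
```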
On-Chip Pressure Generation for Driving Liquid Phase Separations in Nanochannels.
Xia, Ling; Choi, Chiwoong; Kothekar, Shrinivas C; Dutta, Debashis
2016-01-05
In this Article, we describe the generation of pressure gradients on-chip for driving liquid phase separations in submicrometer deep channels. The reported pressure-generation capability was realized by applying an electrical voltage across the interface of two glass channel segments with different depths. A mismatch in the electroosmotic flow rate at this junction led to the generation of pressure-driven flow in our device, a fraction of which was then directed to an analysis channel to carry out the desired separation. Experiments showed the reported strategy to be particularly conducive for miniaturization of pressure-driven separations yielding flow velocities in the separation channel that were nearly unaffected upon scaling down the depth of the entire fluidic network. Moreover, the small dead volume in our system allowed for high dynamic control over this pressure gradient, which otherwise was challenging to accomplish during the sample injection process using external pumps. Pressure-driven velocities up to 3.1 mm/s were realized in separation ducts as shallow as 300 nm using our current design for a maximum applied voltage of 3 kV. The functionality of this integrated device was demonstrated by implementing a pressure-driven ion chromatographic analysis that relied on analyte interaction with the nanochannel surface charges to yield a nonuniform solute concentration across the channel depth. Upon coupling such analyte distribution to the parabolic pressure-driven flow profile in the separation duct, a mixture of amino acids could be resolved. The reported assay yielded a higher separation resolution compared to its electrically driven counterpart in which sample migration was realized using electroosmosis/electrophoresis.
Ozarda, Yesim; Ichihara, Kiyoshi; Aslan, Diler; Aybek, Hulya; Ari, Zeki; Taneli, Fatma; Coker, Canan; Akan, Pinar; Sisman, Ali Riza; Bahceci, Onur; Sezgin, Nurzen; Demir, Meltem; Yucel, Gultekin; Akbas, Halide; Ozdem, Sebahat; Polat, Gurbuz; Erbagci, Ayse Binnur; Orkmez, Mustafa; Mete, Nuriye; Evliyaoglu, Osman; Kiyici, Aysel; Vatansev, Husamettin; Ozturk, Bahadir; Yucel, Dogan; Kayaalp, Damla; Dogan, Kubra; Pinar, Asli; Gurbilek, Mehmet; Cetinkaya, Cigdem Damla; Akin, Okhan; Serdar, Muhittin; Kurt, Ismail; Erdinc, Selda; Kadicesme, Ozgur; Ilhan, Necip; Atali, Dilek Sadak; Bakan, Ebubekir; Polat, Harun; Noyan, Tevfik; Can, Murat; Bedir, Abdulkerim; Okuyucu, Ali; Deger, Orhan; Agac, Suret; Ademoglu, Evin; Kaya, Ayşem; Nogay, Turkan; Eren, Nezaket; Dirican, Melahat; Tuncer, GulOzlem; Aykus, Mehmet; Gunes, Yeliz; Ozmen, Sevda Unalli; Kawano, Reo; Tezcan, Sehavet; Demirpence, Ozlem; Degirmen, Elif
2014-12-01
A nationwide multicenter study was organized to establish reference intervals (RIs) in the Turkish population for 25 commonly tested biochemical analytes and to explore sources of variation in reference values, including regionality. Blood samples were collected nationwide in 28 laboratories from the seven regions (≥400 samples/region, 3066 in all). The sera were collectively analyzed in Uludag University in Bursa using Abbott reagents and analyzer. Reference materials were used for standardization of test results. After secondary exclusion using the latent abnormal values exclusion method, RIs were derived by a parametric method employing the modified Box-Cox formula and compared with the RIs by the non-parametric method. Three-level nested ANOVA was used to evaluate variations among sexes, ages and regions. Associations between test results and age, body mass index (BMI) and region were determined by multiple regression analysis (MRA). By ANOVA, differences of reference values among seven regions were significant in none of the 25 analytes. Significant sex-related and age-related differences were observed for 10 and seven analytes, respectively. MRA revealed BMI-related changes in results for uric acid, glucose, triglycerides, high-density lipoprotein (HDL)-cholesterol, alanine aminotransferase, and γ-glutamyltransferase. Their RIs were thus derived by applying stricter criteria excluding individuals with BMI >28 kg/m2. Ranges of RIs by non-parametric method were wider than those by parametric method especially for those analytes affected by BMI. With the lack of regional differences and the well-standardized status of test results, the RIs derived from this nationwide study can be used for the entire Turkish population.
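A simplified sketch of the parametric versus non-parametric reference-interval derivation, assuming synthetic log-normal data, the standard Box-Cox transform rather than the study's modified formula, and no latent abnormal values exclusion step.

```python
# Sketch of parametric vs non-parametric reference-interval (RI) derivation.
# Synthetic skewed data; the study's modified Box-Cox formula and latent
# abnormal values exclusion (LAVE) step are simplified away here.
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(7)
values = rng.lognormal(mean=1.2, sigma=0.35, size=3000)   # stand-in analyte results

# Non-parametric RI: central 95% of the observed values.
ri_np = np.percentile(values, [2.5, 97.5])

# Parametric RI: Box-Cox transform, mean +/- 1.96 SD, back-transform.
y, lam = stats.boxcox(values)
lo, hi = y.mean() - 1.96 * y.std(ddof=1), y.mean() + 1.96 * y.std(ddof=1)
ri_p = inv_boxcox(np.array([lo, hi]), lam)

print("non-parametric:", ri_np.round(2), "parametric:", ri_p.round(2))
```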
Preijers, Frank W M B; van der Velden, Vincent H J; Preijers, Tim; Brooimans, Rik A; Marijt, Erik; Homburg, Christa; van Montfort, Kees; Gratama, Jan W
2016-05-01
In 1985, external quality assurance was initiated in the Netherlands to reduce the between-laboratory variability of leukemia/lymphoma immunophenotyping and to improve diagnostic conclusions. This program consisted of regular distributions of test samples followed by biannual plenary participant meetings in which results were presented and discussed. A scoring system was developed in which the quality of results was rated by systematically reviewing the pre-analytical, analytical, and post-analytical assay stages using three scores, i.e., correct (A), minor fault (B), and major fault (C). Here, we report on 90 consecutive samples distributed to 40-61 participating laboratories between 1998 and 2012. Most samples contained >20% aberrant cells, mainly selected from mature lymphoid malignancies (B or T cell) and acute leukemias (myeloid or lymphoblastic). In 2002, minimally required monoclonal antibody (mAb) panels were introduced, whilst methodological guidelines for all three assay stages were implemented. Retrospectively, we divided the study into subsequent periods of 4 ("initial"), 4 ("learning"), and 7 years ("consolidation") to detect "learning effects." Uni- and multivariate models showed that analytical performance declined since 2002, but that post-analytical performance improved during the entire period. These results emphasized the need to improve technical aspects of the assay, and reflected improved interpretational skills of the participants. A strong effect of participant affiliation in all three assay stages was observed: laboratories in academic and large peripheral hospitals performed significantly better than those in small hospitals. © 2015 International Clinical Cytometry Society.
Cardaci, Maurizio; Misuraca, Raffaella
2005-08-01
This paper raises some methodological problems in the dual-process explanation provided by Wada and Nittono for their 2004 results using the Wason selection task. We maintain that the Nittono rethinking approach is weak and should be refined to better capture the evidence of analytic processes.
Kramberger, Petra; Urbas, Lidija; Štrancar, Aleš
2015-01-01
Downstream processing of nanoplexes (viruses, virus-like particles, bacteriophages) is characterized by complexity of the starting material, number of purification methods to choose from, regulations that are setting the frame for the final product and analytical methods for upstream and downstream monitoring. This review gives an overview on the nanoplex downstream challenges and chromatography based analytical methods for efficient monitoring of the nanoplex production.
Scandurra, Isabella; Hägglund, Maria
2009-01-01
Introduction Integrated care involves different professionals, belonging to different care provider organizations and requires immediate and ubiquitous access to patient-oriented information, supporting an integrated view on the care process [1]. Purpose To present a method for development of usable and work process-oriented information and communication technology (ICT) systems for integrated care. Theory and method Based on Human-computer Interaction Science and in particular Participatory Design [2], we present a new collaborative design method in the context of health information systems (HIS) development [3]. This method implies a thorough analysis of the entire interdisciplinary cooperative work and a transformation of the results into technical specifications, via user validated scenarios, prototypes and use cases, ultimately leading to the development of appropriate ICT for the variety of occurring work situations for different user groups, or professions, in integrated care. Results and conclusions Application of the method in homecare of the elderly resulted in an HIS that was well adapted to the intended user groups. Conducted in multi-disciplinary seminars, the method captured and validated user needs and system requirements for different professionals, work situations, and environments not only for current work; it also aimed to improve collaboration in future (ICT supported) work processes. A holistic view of the entire care process was obtained and supported through different views of the HIS for different user groups, resulting in improved work in the entire care process as well as for each collaborating profession [4].
Buonfiglio, Marzia; Toscano, M; Puledda, F; Avanzini, G; Di Clemente, L; Di Sabato, F; Di Piero, V
2015-03-01
Habituation is considered one of the most basic mechanisms of learning. A habituation deficit to several sensory stimulations has been described as a trait of the migraine brain and has also been observed in other disorders. On the other hand, an analytic information processing style is characterized by the habit of continually evaluating stimuli, and it has been associated with migraine. We investigated a possible correlation between lack of habituation of visual evoked potentials and analytic cognitive style in healthy subjects. According to the Sternberg-Wagner self-assessment inventory, 15 healthy volunteers (HV) with a high analytic score and 15 HV with a high global score were recruited. Both groups underwent visual evoked potential recordings after psychological evaluation. We observed a significant lack of habituation in analytic individuals compared with the global group. In conclusion, reduced habituation of visual evoked potentials was observed in analytic subjects. Our results suggest that further research should be undertaken on the relationship between analytic cognitive style and lack of habituation in both physiological and pathophysiological conditions.
How Do Gut Feelings Feature in Tutorial Dialogues on Diagnostic Reasoning in GP Traineeship?
ERIC Educational Resources Information Center
Stolper, C. F.; Van de Wiel, M. W. J.; Hendriks, R. H. M.; Van Royen, P.; Van Bokhoven, M. A.; Van der Weijden, T.; Dinant, G. J.
2015-01-01
Diagnostic reasoning is considered to be based on the interaction between analytical and non-analytical cognitive processes. Gut feelings, a specific form of non-analytical reasoning, play a substantial role in diagnostic reasoning by general practitioners (GPs) and may activate analytical reasoning. In GP traineeships in the Netherlands, trainees…
UV-Vis as quantification tool for solubilized lignin following a single-shot steam process.
Lee, Roland A; Bédard, Charles; Berberi, Véronique; Beauchet, Romain; Lavoie, Jean-Michel
2013-09-01
In this short communication, UV-Vis was used as an analytical tool for the quantification of lignin concentrations in aqueous media. A significant correlation was determined between absorbance and the concentration of lignin in solution. For this study, lignin was produced from different types of biomass (willow, aspen, softwood, canary grass and hemp) using steam processes. Quantification was performed at 212, 225, 237, 270, 280 and 287 nm. UV-Vis quantification of lignin was found suitable for different types of biomass, making it a time-saving analytical system that could be used as a Process Analytical Tool (PAT) in biorefineries employing steam processes or comparable approaches. Copyright © 2013 Elsevier Ltd. All rights reserved.
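A minimal calibration sketch of the absorbance-concentration relationship the study relies on; the standards and absorbance values are illustrative, not the reported data.

```python
# Linear calibration sketch (absorbance vs lignin concentration) of the kind
# implied by the reported correlation; the numbers are illustrative only.
import numpy as np

conc = np.array([0.0, 0.05, 0.10, 0.20, 0.40])     # g/L standards (hypothetical)
a280 = np.array([0.002, 0.11, 0.21, 0.43, 0.85])   # absorbance at 280 nm (hypothetical)

slope, intercept = np.polyfit(conc, a280, 1)
r = np.corrcoef(conc, a280)[0, 1]

def lignin_conc(absorbance):
    """Predict lignin concentration (g/L) from a measured absorbance."""
    return (absorbance - intercept) / slope

print(f"slope={slope:.3f}, R^2={r**2:.4f}, unknown at A=0.30 -> {lignin_conc(0.30):.3f} g/L")
```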
Analytic information processing style in epilepsy patients.
Buonfiglio, Marzia; Di Sabato, Francesco; Mandillo, Silvia; Albini, Mariarita; Di Bonaventura, Carlo; Giallonardo, Annateresa; Avanzini, Giuliano
2017-08-01
Relevant to the study of epileptogenesis is learning processing, given the pivotal role that neuroplasticity assumes in both mechanisms. Recently, evoked potential analyses showed a link between analytic cognitive style and altered neural excitability in both migraine and healthy subjects, regardless of cognitive impairment or psychological disorders. In this study we evaluated analytic/global and visual/auditory perceptual dimensions of cognitive style in patients with epilepsy. Twenty-five cryptogenic temporal lobe epilepsy (TLE) patients matched with 25 idiopathic generalized epilepsy (IGE) sufferers and 25 healthy volunteers were recruited and participated in three cognitive style tests: "Sternberg-Wagner Self-Assessment Inventory", the C. Cornoldi test series called AMOS, and the Mariani Learning style Questionnaire. Our results demonstrate a significant association between analytic cognitive style and both IGE and TLE and respectively a predominant auditory and visual analytic style (ANOVA: p values <0.0001). These findings should encourage further research to investigate information processing style and its neurophysiological correlates in epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.
Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin
2018-05-04
The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of inputs into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) perfectly fulfill the aforementioned requirements. This study deals with the implementation of CPPS in a real factory to predict the quality of metal casting and operation control. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor that affects casting quality, and thus, temperature sensors and IoT communication devices were attached to casting machines. The well-known NoSQL database, HBase and the high-speed processing/analysis tool, Spark, are used for IoT repository and data pre-processing, respectively. Many machine learning algorithms such as decision tree, random forest, artificial neural network, and support vector machine were used for quality prediction and compared with R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry.
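A hedged sketch of the quality-prediction step, using one of the algorithms compared in the study (random forest, via scikit-learn); the temperature features, defect rule and data are synthetic stand-ins for the plant's IoT data.

```python
# Quality-prediction sketch for the CPPS pipeline: temperature-derived
# features -> internal-defect label. Data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.normal(700, 15, n),     # pouring temperature (C), hypothetical
    rng.normal(250, 10, n),     # mold temperature (C), hypothetical
    rng.normal(35, 5, n),       # cooling time (s), hypothetical
])
# Synthetic rule: hot pours with short cooling are more defect-prone.
y = ((X[:, 0] > 715) & (X[:, 2] < 32)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```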
Rajabi, Mohamadreza; Mansourian, Ali; Bazmani, Ahad
2012-11-01
Visceral leishmaniasis (VL) is a vector-borne disease, highly influenced by environmental factors, which is an increasing public health problem in Iran, especially in the north-western part of the country. A geographical information system was used to extract data and map environmental variables for all villages in the districts of Kalaybar and Ahar in the province of East Azerbaijan. An attempt to predict VL prevalence based on an analytical hierarchy process (AHP) module combined with ordered weighted averaging (OWA) with fuzzy quantifiers indicated that the south-eastern part of Ahar is particularly prone to high VL prevalence. With the main objective of locating the villages most at risk, the opinions of experts and specialists were generalised into a group decision-making process by means of fuzzy weighting methods and induced OWA. The prediction model was applied throughout the entire study area (even where the disease is prevalent and where data already exist). The predicted data were compared with registered VL incidence records in each area. The results suggest that linguistic fuzzy quantifiers, guided by an AHP-OWA model, are capable of predicting susceptible locations for VL prevalence with an accuracy exceeding 80%. The group decision-making process demonstrated that people in 15 villages live under particularly high risk for VL contagion, i.e. villages where the disease is highly prevalent. The findings of this study are relevant for the planning of effective control strategies for VL in northwest Iran.
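A minimal sketch of OWA aggregation with a fuzzy linguistic quantifier, the weighting idea behind the AHP-OWA model; the quantifier exponent and the standardized risk factors for the example village are hypothetical.

```python
# OWA aggregation sketch with a "most"-type fuzzy linguistic quantifier;
# the criterion scores for one village are illustrative, not the study's data.
import numpy as np

def owa_weights(n, alpha=2.0):
    """Weights from a regular increasing monotone quantifier Q(r) = r**alpha."""
    r = np.arange(n + 1) / n
    return np.diff(r ** alpha)

def owa(scores, alpha=2.0):
    """Ordered weighted average: sort scores descending, apply quantifier weights."""
    s = np.sort(scores)[::-1]
    return float(s @ owa_weights(len(scores), alpha))

# Hypothetical standardized risk factors for one village (0 = low, 1 = high),
# e.g. vegetation, altitude, proximity to known cases, land cover.
village = [0.9, 0.4, 0.7, 0.6]
print(round(owa(village, alpha=2.0), 3))
```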
Trapping in scale-free networks with hierarchical organization of modularity.
Zhang, Zhongzhi; Lin, Yuan; Gao, Shuyang; Zhou, Shuigeng; Guan, Jihong; Li, Mo
2009-11-01
A wide variety of real-life networks share two remarkable generic topological properties: scale-free behavior and modular organization, and it is natural and important to study how these two features affect the dynamical processes taking place on such networks. In this paper, we investigate a simple stochastic process--trapping problem, a random walk with a perfect trap fixed at a given location, performed on a family of hierarchical networks that exhibit simultaneously striking scale-free and modular structure. We focus on a particular case with the immobile trap positioned at the hub node having the largest degree. Using a method based on generating functions, we determine explicitly the mean first-passage time (MFPT) for the trapping problem, which is the mean of the node-to-trap first-passage time over the entire network. The exact expression for the MFPT is calculated through the recurrence relations derived from the special construction of the hierarchical networks. The obtained rigorous formula corroborated by extensive direct numerical calculations exhibits that the MFPT grows algebraically with the network order. Concretely, the MFPT increases as a power-law function of the number of nodes with the exponent much less than 1. We demonstrate that the hierarchical networks under consideration have more efficient structure for transport by diffusion in contrast with other analytically soluble media including some previously studied scale-free networks. We argue that the scale-free and modular topologies are responsible for the high efficiency of the trapping process on the hierarchical networks.
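A small sketch of the trapping quantity studied above: the mean first-passage time (MFPT) to a trap fixed at the largest hub, computed here by solving the linear first-passage equations on a stand-in scale-free graph (not the paper's hierarchical family, which is treated analytically via generating functions).

```python
# MFPT sketch: mean node-to-trap first-passage time for an unbiased random
# walk, from the linear system T_v = 1 + (1/deg(v)) * sum of T over the
# neighbours of v, with T_trap = 0. Toy graph, not the hierarchical family.
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(200, 2, seed=1)         # stand-in scale-free graph
trap = max(G.degree, key=lambda kv: kv[1])[0]        # trap at the largest hub

nodes = [v for v in G.nodes if v != trap]
idx = {v: i for i, v in enumerate(nodes)}
A = np.eye(len(nodes))
b = np.ones(len(nodes))
for v in nodes:
    for u in G.neighbors(v):
        if u != trap:
            A[idx[v], idx[u]] -= 1.0 / G.degree(v)

T = np.linalg.solve(A, b)                            # first-passage times
print("MFPT averaged over the network:", round(T.mean(), 2))
```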
BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark
Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung
2016-01-01
Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today’s data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG’s simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact. PMID:27390389
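A plain-PySpark sketch of the crash-culprit idea, using only ordinary RDD operations; it does not reproduce BigDebug's actual primitives or API.

```python
# Plain-PySpark sketch of a watchpoint / crash-culprit search: guard the
# failing transformation and collect only the offending records, instead
# of letting one bad record crash the whole job.
from pyspark import SparkContext

sc = SparkContext("local[2]", "watchpoint-sketch")
records = sc.parallelize(["12", "7", "oops", "30", "", "5"])

def safe_parse(rec):
    try:
        return [("ok", int(rec))]
    except ValueError:
        return [("bad", rec)]                # tag instead of crashing the job

parsed = records.flatMap(safe_parse).cache()
culprits = parsed.filter(lambda kv: kv[0] == "bad").map(lambda kv: kv[1]).collect()
total = parsed.filter(lambda kv: kv[0] == "ok").map(lambda kv: kv[1]).sum()

print("crash-inducing records:", culprits)   # -> ['oops', '']
print("sum over valid records:", total)      # -> 54
sc.stop()
```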
Entire Photodamaged Chloroplasts Are Transported to the Central Vacuole by Autophagy
2017-01-01
Turnover of dysfunctional organelles is vital to maintain homeostasis in eukaryotic cells. As photosynthetic organelles, plant chloroplasts can suffer sunlight-induced damage. However, the process for turnover of entire damaged chloroplasts remains unclear. Here, we demonstrate that autophagy is responsible for the elimination of sunlight-damaged, collapsed chloroplasts in Arabidopsis thaliana. We found that vacuolar transport of entire chloroplasts, termed chlorophagy, was induced by UV-B damage to the chloroplast apparatus. This transport did not occur in autophagy-defective atg mutants, which exhibited UV-B-sensitive phenotypes and accumulated collapsed chloroplasts. Use of a fluorescent protein marker of the autophagosomal membrane allowed us to image autophagosome-mediated transport of entire chloroplasts to the central vacuole. In contrast to sugar starvation, which preferentially induced distinct type of chloroplast-targeted autophagy that transports a part of stroma via the Rubisco-containing body (RCB) pathway, photooxidative damage induced chlorophagy without prior activation of RCB production. We further showed that chlorophagy is induced by chloroplast damage caused by either artificial visible light or natural sunlight. Thus, this report establishes that an autophagic process eliminates entire chloroplasts in response to light-induced damage. PMID:28123106
Montoro, Paola; Maldini, Mariateresa; Luciani, Leonilda; Tuberoso, Carlo I G; Congiu, Francesca; Pizza, Cosimo
2012-08-01
The radical scavenging activities of Crocus sativus petals, stamens and entire flowers, which are waste products of saffron spice production, were determined using the ABTS radical scavenging method. At the same time, the metabolic profiles of the different extracts (obtained from petals, stamens and flowers) were acquired by LC-ESI-IT MS (liquid chromatography coupled with electrospray mass spectrometry equipped with an ion trap analyser). LC-ESI-MS is a technique widely used for the qualitative fingerprinting of herbal extracts, particularly for phenolic compounds. To compare the different extracts from an analytical point of view, a specific method for qualitative LC-MS analysis was developed. The wide variety of glycosylated flavonoids found in the metabolic profiles could add value to C. sativus petals, stamens and entire flowers. Waste products obtained during saffron production could represent an interesting source of phenolic compounds, given the wide variety of compounds and their free radical scavenging activity. © 2012 Institute of Food Technologists®
Batch Immunostaining for Large-Scale Protein Detection in the Whole Monkey Brain
Zangenehpour, Shahin; Burke, Mark W.; Chaudhuri, Avi; Ptito, Maurice
2009-01-01
Immunohistochemistry (IHC) is one of the most widely used laboratory techniques for the detection of target proteins in situ. Questions concerning the expression pattern of a target protein across the entire brain are relatively easy to answer when using IHC in small brains, such as those of rodents. However, answering the same questions in large and convoluted brains, such as those of primates, presents a number of challenges. Here we present a systematic approach for immunodetection of target proteins in an adult monkey brain. This approach relies on the tissue embedding and sectioning methodology of NeuroScience Associates (NSA) as well as tools developed specifically for batch-staining of free-floating sections. It results in uniform staining of a set of sections which, at a particular interval, represents the entire brain. The resulting stained sections can be subjected to a wide variety of analytical procedures, such as measuring protein levels or the population of neurons expressing a certain protein. PMID:19636291
The problem of self-disclosure in psychoanalysis.
Meissner, W W
2002-01-01
The problem of self-disclosure is explored in relation to currently shifting paradigms of the nature of the analytic relation and analytic interaction. Relational and intersubjective perspectives emphasize the role of self-disclosure as not merely allowable, but as an essential facilitating aspect of the analytic dialogue, in keeping with the role of the analyst as a contributing partner in the process. At the opposite extreme, advocates of classical anonymity stress the importance of neutrality and abstinence. The paper seeks to chart a course between unconstrained self-disclosure and absolute anonymity, both of which foster misalliances. Self-disclosure is seen as at times contributory to the analytic process, and at times deleterious. The decision whether to self-disclose, what to disclose, and when and how, should be guided by the analyst's perspective on neutrality, conceived as a mental stance in which the analyst assesses and decides what, at any given point, seems to contribute to the analytic process and the patient's therapeutic benefit. The major risk in self-disclosure is the tendency to draw the analytic interaction into the real relation between analyst and patient, thus diminishing or distorting the therapeutic alliance, mitigating transference expression, and compromising therapeutic effectiveness.
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
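A minimal sketch of the partition-train-compare workflow outlined above, written with scikit-learn on synthetic data; the features and target are hypothetical stand-ins for livestock operation records.

    # Partition the data, train competing classifiers, compare accuracy on naive data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                  # hypothetical herd-level features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)                 # build and refine on the training partition
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"{name}: accuracy on held-out (naive) data = {acc:.3f}")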
Kling, Maximilian; Seyring, Nicole; Tzanova, Polia
2016-09-01
Economic instruments provide significant potential for countries with low municipal waste management performance in decreasing landfill rates and increasing recycling rates for municipal waste. In this research, strengths and weaknesses of landfill tax, pay-as-you-throw charging systems, deposit-refund systems and extended producer responsibility schemes are compared, focusing on conditions in countries with low waste management performance. In order to prioritise instruments for implementation in these countries, the analytic hierarchy process is applied using results of a literature review as input for the comparison. The assessment reveals that pay-as-you-throw is the most preferable instrument when utility-related criteria are regarded (wb = 0.35; analytic hierarchy process distributive mode; absolute comparison) mainly owing to its waste prevention effect, closely followed by landfill tax (wb = 0.32). Deposit-refund systems (wb = 0.17) and extended producer responsibility (wb = 0.16) rank third and fourth, with marginal differences owing to their similar nature. When cost-related criteria are additionally included in the comparison, landfill tax seems to provide the highest utility-cost ratio. Data from literature concerning cost (contrary to utility-related criteria) is currently not sufficiently available for a robust ranking according to the utility-cost ratio. In general, the analytic hierarchy process is seen as a suitable method for assessing economic instruments in waste management. Independent from the chosen analytic hierarchy process mode, results provide valuable indications for policy-makers on the application of economic instruments, as well as on their specific strengths and weaknesses. Nevertheless, the instruments need to be put in the country-specific context along with the results of this analytic hierarchy process application before practical decisions are made. © The Author(s) 2016.
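As a small numerical sketch of how AHP priority weights such as the w_b values quoted above are obtained, the snippet below derives the principal-eigenvector weights and the Saaty consistency ratio from a pairwise comparison matrix; the matrix entries are illustrative, not the study's judgments.

    # AHP priority weights and consistency ratio from a pairwise comparison matrix.
    import numpy as np

    # Hypothetical pairwise comparisons of four instruments (Saaty 1-9 scale).
    A = np.array([
        [1.0, 1.0, 2.0, 2.0],
        [1.0, 1.0, 2.0, 2.0],
        [0.5, 0.5, 1.0, 1.0],
        [0.5, 0.5, 1.0, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                          # priority weights, normalized to sum to 1

    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)  # consistency index
    ri = 0.90                             # Saaty random index for n = 4
    print("weights:", np.round(w, 3), "consistency ratio:", round(ci / ri, 3))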
Insight solutions are correct more often than analytic solutions
Salvi, Carola; Bricolo, Emanuela; Kounios, John; Bowden, Edward; Beeman, Mark
2016-01-01
How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants’ solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants’ self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants’ analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. PMID:27667960
Sirichai, S; de Mello, A J
2001-01-01
The separation and detection of both print and film developing agents (CD-3 and CD-4) in photographic processing solutions using chip-based capillary electrophoresis is presented. For simultaneous detection of both analytes under identical experimental conditions a buffer pH of 11.9 is used to partially ionise the analytes. Detection is made possible by indirect fluorescence, where the ions of the analytes displace the anionic fluorescing buffer ion to create negative peaks. Under optimal conditions, both analytes can be analyzed within 30 s. The limits of detection for CD-3 and CD-4 are 0.17 mM and 0.39 mM, respectively. The applicability of the method for the analysis of seasoned photographic processing developer solutions is also examined.
The heuristic-analytic theory of reasoning: extension and evaluation.
Evans, Jonathan St B T
2006-06-01
An extensively revised heuristic-analytic theory of reasoning is presented incorporating three principles of hypothetical thinking. The theory assumes that reasoning and judgment are facilitated by the formation of epistemic mental models that are generated one at a time (singularity principle) by preconscious heuristic processes that contextualize problems in such a way as to maximize relevance to current goals (relevance principle). Analytic processes evaluate these models but tend to accept them unless there is good reason to reject them (satisficing principle). At a minimum, analytic processing of models is required so as to generate inferences or judgments relevant to the task instructions, but more active intervention may result in modification or replacement of default models generated by the heuristic system. Evidence for this theory is provided by a review of a wide range of literature on thinking and reasoning.
Esmonde-White, Karen A; Cuellar, Maryann; Uerpmann, Carsten; Lenain, Bruno; Lewis, Ian R
2017-01-01
Adoption of Quality by Design (QbD) principles, regulatory support of QbD, process analytical technology (PAT), and continuous manufacturing are major factors effecting new approaches to pharmaceutical manufacturing and bioprocessing. In this review, we highlight new technology developments, data analysis models, and applications of Raman spectroscopy, which have expanded the scope of Raman spectroscopy as a process analytical technology. Emerging technologies such as transmission and enhanced reflection Raman, and new approaches to using available technologies, expand the scope of Raman spectroscopy in pharmaceutical manufacturing, and now Raman spectroscopy is successfully integrated into real-time release testing, continuous manufacturing, and statistical process control. Since the last major review of Raman as a pharmaceutical PAT in 2010, many new Raman applications in bioprocessing have emerged. Exciting reports of in situ Raman spectroscopy in bioprocesses complement a growing scientific field of biological and biomedical Raman spectroscopy. Raman spectroscopy has made a positive impact as a process analytical and control tool for pharmaceutical manufacturing and bioprocessing, with demonstrated scientific and financial benefits throughout a product's lifecycle.
INTEGRATED ENVIRONMENTAL ASSESSMENT OF THE MID-ATLANTIC REGION WITH ANALYTICAL NETWORK PROCESS
A decision analysis method for integrating environmental indicators was developed. This was a combination of Principal Component Analysis (PCA) and the Analytic Network Process (ANP). Being able to take into account interdependency among variables, the method was capable of ran...
Alejo, Luz; Atkinson, John; Guzmán-Fierro, Víctor; Roeckel, Marlene
2018-05-16
Computational self-adapting methods (Support Vector Machines, SVM) are compared with an analytical method in effluent composition prediction of a two-stage anaerobic digestion (AD) process. Experimental data for the AD of poultry manure were used. The analytical method considers the protein as the only source of ammonia production in AD after degradation. Total ammonia nitrogen (TAN), total solids (TS), chemical oxygen demand (COD), and total volatile solids (TVS) were measured in the influent and effluent of the process. The TAN concentration in the effluent was predicted, this being the most inhibiting and polluting compound in AD. Despite the limited data available, the SVM-based model outperformed the analytical method for the TAN prediction, achieving a relative average error of 15.2% against 43% for the analytical method. Moreover, SVM showed higher prediction accuracy in comparison with Artificial Neural Networks. This result reveals the future promise of SVM for prediction in non-linear and dynamic AD processes.
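A minimal sketch of the SVM regression step described above, using scikit-learn on synthetic influent/effluent data; the real study fitted measured poultry-manure AD data, so the values below are placeholders.

    # SVM regression predicting effluent TAN from influent measurements (synthetic data).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    X = rng.uniform(0.1, 1.0, size=(120, 4))        # influent TAN, TS, COD, TVS (scaled)
    y = 0.6 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(scale=0.02, size=120)  # effluent TAN

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
    model.fit(X_tr, y_tr)

    rel_err = 100 * np.mean(np.abs(model.predict(X_te) - y_te) / np.abs(y_te))
    print(f"relative average error: {rel_err:.1f}%")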
7 CFR 51.3416 - Classification of defects.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Maximum allowed for U.S. No. 2 processing Occurring outside of or not entirely confined to the vascular ring Internal Black Spot, Internal Discoloration, Vascular Browning, Fusarium Wilt, Net Necrosis, Other Necrosis, Stem End Browning 5% waste 10% waste. Occurring entirely within the vascular ring Hollow Heart or...
1991-09-01
Table-of-contents fragment from the report: Chapter III, The Analytic Hierarchy Process (introduction; the AHP process); a later chapter on the implementation of CERTS using AHP (consistency; user interface). The excerpt also notes that Expert Choice implements the Analytic Hierarchy Process (AHP), an approach to multi-criteria…
ERIC Educational Resources Information Center
Fisher, James E.; Sealey, Ronald W.
The study describes the analytical pragmatic structure of concepts and applies this structure to the legal concept of procedural due process. This structure consists of form, purpose, content, and function. The study conclusions indicate that the structure of the concept of procedural due process, or any legal concept, is not the same as the…
Heuristic and analytic processing in online sports betting.
d'Astous, Alain; Di Gaspero, Marc
2015-06-01
This article presents the results of two studies that examine the occurrence of heuristic (i.e., intuitive and fast) and analytic (i.e., deliberate and slow) processes among people who engage in online sports betting on a regular basis. The first study was qualitative and was conducted with a convenience sample of 12 regular online sports gamblers who described the processes by which they arrive at a sports betting decision. The results of this study showed that betting online on sports events involves a mix of heuristic and analytic processes. The second study consisted of a survey of 161 online sports gamblers in which performance in terms of monetary gains, experience in online sports betting, propensity to collect and analyze relevant information prior to betting, and use of bookmaker odds were measured. This study showed that heuristic and analytic processes act as mediators of the relationship between experience and performance. The findings stemming from these two studies give some insights into gamblers' modes of thinking and behaviors in an online sports betting context and show the value of the dual mediation process model for research that looks at gambling activities from a judgment and decision making perspective.
NASA Astrophysics Data System (ADS)
Beckmann, R. S.; Slyz, A.; Devriendt, J.
2018-07-01
Whilst in galaxy-size simulations, supermassive black holes (SMBHs) are entirely handled by sub-grid algorithms, computational power now allows the accretion radius of such objects to be resolved in smaller scale simulations. In this paper, we investigate the impact of resolution on two commonly used SMBH sub-grid algorithms; the Bondi-Hoyle-Lyttleton (BHL) formula for accretion on to a point mass, and the related estimate of the drag force exerted on to a point mass by a gaseous medium. We find that when the accretion region around the black hole scales with resolution, and the BHL formula is evaluated using local mass-averaged quantities, the accretion algorithm smoothly transitions from the analytic BHL formula (at low resolution) to a supply-limited accretion scheme (at high resolution). However, when a similar procedure is employed to estimate the drag force, it can lead to significant errors in its magnitude, and/or apply this force in the wrong direction in highly resolved simulations. At high Mach numbers and for small accretors, we also find evidence of the advective-acoustic instability operating in the adiabatic case, and of an instability developing around the wake's stagnation point in the quasi-isothermal case. Moreover, at very high resolution, and Mach numbers above M_∞ ≥ 3, the flow behind the accretion bow shock becomes entirely dominated by these instabilities. As a result, accretion rates on to the black hole drop by about an order of magnitude in the adiabatic case, compared to the analytic BHL formula.
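For reference, the Bondi-Hoyle-Lyttleton rate that the sub-grid algorithm evaluates is usually written as

    \dot{M}_{\rm BHL} = \frac{4\pi G^{2} M_{\rm BH}^{2}\, \rho_{\infty}}{\left(c_{s,\infty}^{2} + v_{\infty}^{2}\right)^{3/2}},

where ρ_∞, c_{s,∞} and v_∞ are the density, sound speed and relative velocity of the gas far from the black hole; in the resolved runs discussed above these far-field quantities are replaced by local mass-averaged estimates.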
Contains basic information on the role and origins of the Selected Analytical Methods including the formation of the Homeland Security Laboratory Capacity Work Group and the Environmental Evaluation Analytical Process Roadmap for Homeland Security Events
Theoretical study of optical pump process in solid gain medium based on four-energy-level model
NASA Astrophysics Data System (ADS)
Ma, Yongjun; Fan, Zhongwei; Zhang, Bin; Yu, Jin; Zhang, Hongbo
2018-04-01
A semiclassical algorithm is applied to a four-energy-level model, aiming to identify the factors that affect the dynamic behavior during the pump process. The impacts of the pump intensity Ω_p, the non-radiative transition rate γ_43 and the decay rate of the electric dipole δ_14 are discussed in detail. The calculation results show that a large γ_43, a small δ_14, and strong pumping Ω_p are beneficial to establishing population inversion. Under strong pumping conditions, the entire pump process can be divided into four different phases, tentatively named the far-from-equilibrium process, the Rabi oscillation process, the quasi-dynamic equilibrium process and the 'equilibrium' process. The Rabi oscillation can slow the pumping process and cause some instability. Moreover, the duration of the entire process is negatively related to Ω_p and γ_43 and positively related to δ_14.
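As a greatly simplified companion to the semiclassical model above (plain rate equations omit the coherences responsible for the Rabi-oscillation phase), a four-level rate-equation sketch can be integrated as follows; all rate constants are arbitrary placeholders.

    # Simplified four-level rate-equation sketch (not the paper's semiclassical model).
    from scipy.integrate import solve_ivp

    W_p  = 5.0    # pump rate 1 -> 4 (stand-in for the role of Omega_p)
    g_43 = 50.0   # non-radiative transition rate 4 -> 3 (gamma_43)
    g_32 = 0.5    # decay rate 3 -> 2 (upper-level lifetime)
    g_21 = 50.0   # fast decay 2 -> 1

    def rhs(t, n):
        n1, n2, n3, n4 = n
        return [-W_p * n1 + g_21 * n2,
                -g_21 * n2 + g_32 * n3,
                -g_32 * n3 + g_43 * n4,
                 W_p * n1 - g_43 * n4]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0])
    n1, n2, n3, n4 = sol.y[:, -1]
    print("population inversion n3 - n2 at t = 10:", round(n3 - n2, 4))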
NASA Technical Reports Server (NTRS)
Sauer, Richard L. (Inventor); Akse, James R. (Inventor); Thompson, John O. (Inventor); Atwater, James E. (Inventor)
1999-01-01
An ammonia monitor and method of use are disclosed. A continuous, real-time determination of the concentration of ammonia in an aqueous process stream is possible over a wide dynamic range of concentrations. No reagents are required because pH is controlled by an in-line solid-phase base. Ammonia is selectively transported across a membrane from the process stream to an analytical stream under pH control. The specific electrical conductance of the analytical stream is measured and used to determine the concentration of ammonia.
De Beer, T R M; Vercruysse, P; Burggraeve, A; Quinten, T; Ouyang, J; Zhang, X; Vervaet, C; Remon, J P; Baeyens, W R G
2009-09-01
The aim of the present study was to examine the complementary properties of Raman and near infrared (NIR) spectroscopy as PAT tools for the fast, noninvasive, nondestructive and in-line monitoring of a freeze drying process. Therefore, Raman and NIR probes were built into the freeze-dryer chamber, allowing simultaneous process monitoring. A 5% (w/v) mannitol solution was used as a model for freeze drying. Raman and NIR spectra were continuously collected during freeze drying (one Raman and one NIR spectrum per minute) and the spectra were analyzed using principal component analysis (PCA) and multivariate curve resolution (MCR). Raman spectroscopy was able to supply information about (i) the mannitol solid state throughout the entire process, (ii) the endpoint of freezing (endpoint of mannitol crystallization), and (iii) several physical and chemical phenomena occurring during the process (onset of ice nucleation, onset of mannitol crystallization). NIR spectroscopy proved to be a more sensitive tool to monitor the critical aspects during drying: (i) the endpoint of ice sublimation and (ii) the release of hydrate water during storage. Furthermore, NIR spectroscopy confirmed several of the Raman observations: the start of ice nucleation, the end of mannitol crystallization and the solid state characteristics of the end product. When Raman and NIR monitoring were performed on the same vial, the Raman signal was saturated during the freezing step because reflected NIR light reached the Raman detector; therefore, the NIR and Raman measurements were made on different vials. The importance of the position of the probes (Raman probe above the vial and NIR probe at the bottom of the sidewall of the vial) for obtaining all required critical information is also outlined. Combining Raman and NIR spectroscopy for the simultaneous monitoring of freeze drying allows nearly all critical freeze-drying process aspects to be monitored. The two techniques not only complement each other but also provide mutual confirmation of specific conclusions.
NASA Astrophysics Data System (ADS)
Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng
2017-08-01
Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
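One common way to write the decomposition described above (the paper derives it from the frequency-dependent effective specific heat; the expression below is the standard sum-of-single-relaxation form, stated here only for orientation) is

    \alpha_{r}(\omega)\,\lambda = \sum_{i=1}^{N} A_{i}\, \frac{\omega \tau_{i}}{1 + (\omega \tau_{i})^{2}},

so that determining the N relaxation strengths A_i and relaxation times τ_i from absorption and sound-speed measurements at 2N frequencies reconstructs the entire multimode relaxation spectrum.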
Evaluation of analytical errors in a clinical chemistry laboratory: a 3 year experience.
Sakyi, As; Laing, Ef; Ephraim, Rk; Asibey, Of; Sadique, Ok
2015-01-01
Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been an increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases, and strategies pertinent to our settings to minimize their occurrence are discussed. We described the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). A total of 589,510 tests was performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests decreased significantly over the 3-year period, but this did not correspond to a reduction in the overall error rate across the years (P = 0.90). Errors are embedded within our total testing process, especially in the pre-analytical and post-analytical phases. Strategic measures, including quality assessment programs for staff involved in pre-analytical processes, should be intensified.
The Analytic Hierarchy Process and Participatory Decisionmaking
Daniel L. Schmoldt; Daniel L. Peterson; Robert L. Smith
1995-01-01
Managing natural resource lands requires social, as well as biophysical, considerations. Unfortunately, it is extremely difficult to accurately assess and quantify changing social preferences, and to aggregate conflicting opinions held by diverse social groups. The Analytic Hierarchy Process (AHP) provides a systematic, explicit, rigorous, and robust mechanism for...
Using Analytic Hierarchy Process in Textbook Evaluation
ERIC Educational Resources Information Center
Kato, Shigeo
2014-01-01
This study demonstrates the application of the analytic hierarchy process (AHP) in English language teaching materials evaluation, focusing in particular on its potential for systematically integrating different components of evaluation criteria in a variety of teaching contexts. AHP is a measurement procedure wherein pairwise comparisons are made…
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method that uses Bayesian analysis to classify time series data from an agent-based simulation of the international emissions trading market, and compares it with a method based on the discrete Fourier transform. The purpose is to demonstrate analytical methods for mapping time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express the time series data as distances between mappings, which makes understanding and inference easier than working with the raw time series; (2) the methods can analyze uncertain time series data generated via agent-based simulation, including both stationary and non-stationary processes; and (3) the Bayesian analytical method can distinguish a 1% difference in the agents' emission reduction targets.
On integrating Jungian and other theories.
Sedgwick, David
2015-09-01
This paper consists of reflections on some of the processes, subtleties, and 'eros' involved in attempting to integrate Jungian and other analytic perspectives. Assimilation of other theoretical viewpoints has a long history in analytical psychology, beginning when Jung met Freud. Since its inception, the Journal of Analytical Psychology has provided a forum for theoretical syntheses and comparative psychoanalysis. Such attempts at synthesizing other theories represent analytical psychology itself trying to individuate. © 2015, The Society of Analytical Psychology.
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
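A toy numerical sketch of the iterative idea described above: the image estimate is repeatedly corrected by the projection residual (a Landweber/SIRT-style update on a small, hypothetical system matrix).

    # Iterative reconstruction in miniature: correct the estimate with the residual.
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.uniform(size=(60, 30))      # hypothetical system matrix (rays x pixels)
    x_true = rng.uniform(size=30)       # "true" image
    b = A @ x_true                      # simulated projection data

    x = np.zeros(30)                    # initial image estimate
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(500):                # iteratively improve the estimate
        x = x + step * A.T @ (b - A @ x)

    print("projection residual norm:", np.linalg.norm(A @ x - b))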
Andra, Syam S; Austin, Christine; Yang, Juan; Patel, Dhavalkumar; Arora, Manish
2016-12-01
Human exposure to bisphenol A (BPA) has attracted considerable global health attention; BPA is one of the leading environmental contaminants with potential adverse health effects, including endocrine disruption. Current practice for measuring exposure to BPA includes the measurement of unconjugated BPA (aglycone) and total (both conjugated and unconjugated) BPA; the difference between the two measurements gives an estimate of the conjugated forms. However, measuring BPA as the end analyte leads to inaccurate estimates owing to potential interference from background sources during sample collection and analysis. BPA glucuronides (BPAG) and sulfates (BPAS) represent better candidate biomarkers of BPA exposure, since they require in vivo metabolism and are not prone to external contamination. In this work, the primary focus was to review the current state of the art in analytical methods available to quantitate BPA conjugates. The entire analytical procedure for the simultaneous extraction and detection of aglycone BPA and its conjugates is covered, from sample pre-treatment, extraction, separation, and ionization to detection. Solid-phase extraction coupled with liquid chromatography-tandem mass spectrometry provides the most sensitive detection and quantification of BPA conjugates. Applications of BPA conjugate analysis in human exposure assessment studies are discussed. Measuring these potential biomarkers of BPA exposure has only recently become analytically feasible, and there are limitations and challenges to overcome in biomonitoring studies. Copyright © 2016 Elsevier B.V. All rights reserved.
Charge Transport in Spiro-OMeTAD Investigated through Space-Charge-Limited Current Measurements
NASA Astrophysics Data System (ADS)
Röhr, Jason A.; Shi, Xingyuan; Haque, Saif A.; Kirchartz, Thomas; Nelson, Jenny
2018-04-01
Extracting charge-carrier mobilities for organic semiconductors from space-charge-limited conduction measurements is complicated in practice by nonideal factors such as trapping in defects and injection barriers. Here, we show that by allowing the bandlike charge-carrier mobility, trap characteristics, injection barrier heights, and the shunt resistance to vary in a multiple-trapping drift-diffusion model, a numerical fit can be obtained to the entire current density-voltage curve from experimental space-charge-limited current measurements on both symmetric and asymmetric 2,2',7,7'-tetrakis(N,N-di-4-methoxyphenylamine)-9,9'-spirobifluorene (spiro-OMeTAD) single-carrier devices. This approach yields a bandlike mobility that is more than an order of magnitude higher than the effective mobility obtained using analytical approximations, such as the Mott-Gurney law and the moving-electrode equation. It is also shown that where these analytical approximations require a temperature-dependent effective mobility to achieve fits, the numerical model can yield a temperature-, electric-field-, and charge-carrier-density-independent mobility. Finally, we present an analytical model describing trap-limited current flow through a semiconductor in a symmetric single-carrier device. We compare the obtained charge-carrier mobility and trap characteristics from this analytical model to the results from the numerical model, showing excellent agreement. This work shows the importance of accounting for traps and injection barriers explicitly when analyzing current density-voltage curves from space-charge-limited current measurements.
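For context, the Mott-Gurney law mentioned above gives the trap-free space-charge-limited current density for a single-carrier device of thickness L,

    J = \frac{9}{8}\, \varepsilon_{0}\varepsilon_{r}\, \mu\, \frac{V^{2}}{L^{3}},

and fitting it to measured J-V curves yields the effective mobility μ that, per the abstract, can underestimate the bandlike mobility by more than an order of magnitude when traps and injection barriers are not modeled explicitly.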
Modeling the chemistry of complex petroleum mixtures.
Quann, R J
1998-01-01
Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
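A minimal sketch of the core data structure in structure-oriented lumping: each molecule (or set of closely related isomers) is encoded as a vector of counts of repeating structural groups. The group names below are illustrative placeholders, not the published increment set.

    # Molecules as vectors of structural-group counts (illustrative group set).
    GROUPS = ["aromatic_ring", "naphthenic_ring", "CH2_chain", "S", "N"]

    def as_vector(counts):
        """Map a {group: count} dict onto a fixed-order increment vector."""
        return [counts.get(g, 0) for g in GROUPS]

    toluene_like   = as_vector({"aromatic_ring": 1, "CH2_chain": 1})
    decalin_like   = as_vector({"naphthenic_ring": 2})
    thiophene_like = as_vector({"aromatic_ring": 1, "S": 1})

    # A complex mixture becomes a list of (vector, mole fraction) pairs, over which
    # reaction rules and property correlations can be applied automatically.
    mixture = [(toluene_like, 0.40), (decalin_like, 0.35), (thiophene_like, 0.25)]
    print(mixture)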
Schorling, Stefan; Schalasta, Gunnar; Enders, Gisela; Zauke, Michael
2004-01-01
The COBAS AmpliPrep instrument (Roche Diagnostics GmbH, D-68305 Mannheim, Germany) automates the entire sample preparation process of nucleic acid isolation from serum or plasma for polymerase chain reaction analysis. We report the analytical performance of the LightCycler Parvovirus B19 Quantification Kit (Roche Diagnostics) using nucleic acids isolated with the COBAS AmpliPrep instrument. Nucleic acids were extracted using the Total Nucleic Acid Isolation Kit (Roche Diagnostics) and amplified with the LightCycler Parvovirus B19 Quantification Kit. The kit combination processes 72 samples per 8-hour shift. The lower detection limit is 234 IU/ml at a 95% hit-rate, the linear range approximately 10^4-10^10 IU/ml, and the overall precision 16 to 40%. Relative sensitivity and specificity in routine samples from pregnant women are 100% and 93%, respectively. Identification of a persistent parvovirus B19-infected individual by the polymerase chain reaction among 51 anti-parvovirus B19 IgM-negative samples underlines the importance of additional nucleic acid testing in pregnancy and its superiority to serology in identifying the risk of parvovirus B19 transmission via blood or blood products. Combination of the Total Nucleic Acid Isolation Kit on the COBAS AmpliPrep instrument with the LightCycler Parvovirus B19 Quantification Kit provides a reliable and time-saving tool for sensitive and accurate detection of parvovirus B19 DNA. PMID:14736825
A Thermally Powered ISFET Array for On-Body pH Measurement.
Douthwaite, Matthew; Koutsos, Ermis; Yates, David C; Mitcheson, Paul D; Georgiou, Pantelis
2017-12-01
Recent advances in electronics and electrochemical sensors have led to an emerging class of next generation wearables, detecting analytes in biofluids such as perspiration. Most of these devices utilize ion-selective electrodes (ISEs) as a detection method; however, ion-sensitive field-effect transistors (ISFETs) offer a solution with improved integration and a low power consumption. This work presents a wearable, thermoelectrically powered system composed of an application-specific integrated circuit (ASIC), two commercial power management integrated circuits and a network of commercial thermoelectric generators (TEGs). The ASIC is fabricated in 0.35 μm CMOS and contains an ISFET array designed to read pH as a current, a processing module which averages the signal to reduce noise and encodes it into a frequency, and a transmitter. The output frequency has a measured sensitivity of 6 to 8 kHz/pH for a pH range of 7-5. It is shown that the sensing array and processing module have a power consumption of 6 μW and, therefore, can be entirely powered by body heat using a TEG. Array averaging is shown to reduce noise at these low power levels to 104 μV (input-referred integrated noise), reducing the minimum detectable limit of the ASIC to 0.008 pH units. The work forms the foundation and proves the feasibility of battery-less, on-body electrochemical sensing for perspiration analysis in sports science and healthcare applications.
Analytic and Heuristic Processing Influences on Adolescent Reasoning and Decision-Making.
ERIC Educational Resources Information Center
Klaczynski, Paul A.
2001-01-01
Examined the relationship between age and the normative/descriptive gap--the discrepancy between actual reasoning and traditional standards for reasoning. Found that middle adolescents performed closer to normative ideals than early adolescents. Factor analyses suggested that performance was based on two processing systems, analytic and heuristic…
Functional Analytic Psychotherapy for Interpersonal Process Groups: A Behavioral Application
ERIC Educational Resources Information Center
Hoekstra, Renee
2008-01-01
This paper is an adaptation of Kohlenberg and Tsai's work, Functional Analytic Psychotherapy (1991), or FAP, to group psychotherapy. The author applied a behavioral rationale to interpersonal process groups by illustrating key points with a hypothetical client. Suggestions are also provided for starting groups, identifying goals, educating…
Developmental changes in analytic and holistic processes in face perception.
Joseph, Jane E; DiBartolo, Michelle D; Bhatt, Ramesh S
2015-01-01
Although infants demonstrate sensitivity to some kinds of perceptual information in faces, many face capacities continue to develop throughout childhood. One debate is the degree to which children perceive faces analytically versus holistically and how these processes undergo developmental change. In the present study, school-aged children and adults performed a perceptual matching task with upright and inverted face and house pairs that varied in similarity of featural or 2nd-order configural information. Holistic processing was operationalized as the degree of serial processing when discriminating faces and houses [i.e., increased reaction time (RT), as more features or spacing relations were shared between stimuli]. Analytical processing was operationalized as the degree of parallel processing (or no change in RT as a function of greater similarity of features or spatial relations). Adults showed the most evidence for holistic processing (most strongly for 2nd-order faces) and holistic processing was weaker for inverted faces and houses. Younger children (6-8 years), in contrast, showed analytical processing across all experimental manipulations. Older children (9-11 years) showed an intermediate pattern with a trend toward holistic processing of 2nd-order faces like adults, but parallel processing in other experimental conditions like younger children. These findings indicate that holistic face representations emerge around 10 years of age. In adults both 2nd-order and featural information are incorporated into holistic representations, whereas older children only incorporate 2nd-order information. Holistic processing was not evident in younger children. Hence, the development of holistic face representations relies on 2nd-order processing initially then incorporates featural information by adulthood.
A device for automatic photoelectric control of the analytical gap for emission spectrographs
Dietrich, John A.; Cooley, Elmo F.; Curry, Kenneth J.
1977-01-01
A photoelectric device has been built that automatically controls the analytical gap between electrodes during excitation period. The control device allows for precise control of the analytical gap during the arcing process of samples, resulting in better precision of analysis.
42 CFR 493.17 - Test categorization.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...
42 CFR 493.17 - Test categorization.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...
Bringing Business Intelligence to Health Information Technology Curriculum
ERIC Educational Resources Information Center
Zheng, Guangzhi; Zhang, Chi; Li, Lei
2015-01-01
Business intelligence (BI) and healthcare analytics are the emerging technologies that provide analytical capability to help healthcare industry improve service quality, reduce cost, and manage risks. However, such component on analytical healthcare data processing is largely missed from current healthcare information technology (HIT) or health…
Reading Multimodal Texts: Perceptual, Structural and Ideological Perspectives
ERIC Educational Resources Information Center
Serafini, Frank
2010-01-01
This article presents a tripartite framework for analyzing multimodal texts. The three analytical perspectives presented include: (1) perceptual, (2) structural, and (3) ideological analytical processes. Using Anthony Browne's picturebook "Piggybook" as an example, assertions are made regarding what each analytical perspective brings to the…
An Analysis of Machine- and Human-Analytics in Classification.
Tam, Gary K L; Kothari, Vivek; Chen, Min
2017-01-01
In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the "bag of features" approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.
Microfluidic-Based sample chips for radioactive solutions
Tripp, J. L.; Law, J. D.; Smith, T. E.; ...
2015-01-01
Historical nuclear fuel cycle process sampling techniques required sample volumes ranging in the tens of milliliters. The radiation levels experienced by analytical personnel and equipment, in addition to the waste volumes generated from analysis of these samples, have been significant. These sample volumes also impacted accountability inventories of required analytes during process operations. To mitigate radiation dose and other issues associated with the historically larger sample volumes, a microcapillary sample chip was chosen for further investigation. The ability to obtain microliter volume samples coupled with a remote automated means of sample loading, tracking, and transporting to the analytical instrument would greatly improve analytical efficiency while reducing both personnel exposure and radioactive waste volumes. Sample chip testing was completed to determine the accuracy, repeatability, and issues associated with the use of microfluidic sample chips used to supply µL sample volumes of lanthanide analytes dissolved in nitric acid for introduction to an analytical instrument for elemental analysis.
Long-Term Stability of Volatile Nitrosamines in Human Urine.
Hodgson, James A; Seyler, Tiffany H; Wang, Lanqing
2016-07-01
Volatile nitrosamines (VNAs) are established teratogens and carcinogens in animals and classified as probable (group 2A) and possible (group 2B) carcinogens in humans by the IARC. High levels of VNAs have been detected in tobacco products and in both mainstream and sidestream smoke. VNA exposure may lead to lipid peroxidation and oxidative stress (e.g., inflammation), chronic diseases (e.g., diabetes) and neurodegenerative diseases (e.g., Alzheimer's disease). To conduct epidemiological studies on the effects of VNA exposure, short-term and long-term stabilities of VNAs in the urine matrix are needed. In this report, the stability of six VNAs (N-nitrosodimethylamine, N-nitrosomethylethylamine, N-nitrosodiethylamine, N-nitrosopiperidine, N-nitrosopyrrolidine and N-nitrosomorpholine) in human urine is analyzed for the first time using in vitro blank urine pools fortified with a standard mixture of all six VNAs. Over a 24-day period, analytes were monitored in samples stored at ∼20°C (collection temperature), 4-10°C (transit temperature) and -20 and -70°C (long-term storage temperatures). All six analytes were stable for 24 days at all temperatures (n = 15). The analytes were then analyzed over a longer time period at -70°C; all analytes were stable for up to 1 year (n = 62). A subset of 44 samples was prepared as a single batch and stored at -20°C, the temperature at which prepared samples are stored. These prepared samples were run in duplicate weekly over 10 weeks, and all six analytes were stable over the entire period (n = 22). Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
An integrated model of clinical reasoning: dual-process theory of cognition and metacognition.
Marcum, James A
2012-10-01
Clinical reasoning is an important component for providing quality medical care. The aim of the present paper is to develop a model of clinical reasoning that integrates both the non-analytic and analytic processes of cognition, along with metacognition. The dual-process theory of cognition (system 1 non-analytic and system 2 analytic processes) and the metacognition theory are used to develop an integrated model of clinical reasoning. In the proposed model, clinical reasoning begins with system 1 processes in which the clinician assesses a patient's presenting symptoms, as well as other clinical evidence, to arrive at a differential diagnosis. Additional clinical evidence, if necessary, is acquired and analysed utilizing system 2 processes to assess the differential diagnosis, until a clinical decision is made diagnosing the patient's illness and then how best to proceed therapeutically. Importantly, the outcome of these processes feeds back, in terms of metacognition's monitoring function, either to reinforce or to alter cognitive processes, which, in turn, enhances synergistically the clinician's ability to reason quickly and accurately in future consultations. The proposed integrated model has distinct advantages over other models proposed in the literature for explicating clinical reasoning. Moreover, it has important implications for addressing the paradoxical relationship between experience and expertise, as well as for designing a curriculum to teach clinical reasoning skills. © 2012 Blackwell Publishing Ltd.
Exhaled breath condensate – from an analytical point of view
Dodig, Slavica; Čepelak, Ivana
2013-01-01
Over the past three decades, the goal of many researchers has been the analysis of exhaled breath condensate (EBC) as a noninvasively obtained sample. Total quality in the laboratory diagnostic process for EBC analysis was investigated across the pre-analytical (formation, collection and storage of EBC), analytical (sensitivity of applied methods, standardization) and post-analytical (interpretation of results) phases. EBC analysis is still used as a research tool. Limitations related to the pre-analytical, analytical, and post-analytical phases of EBC analysis are numerous: concentrations of EBC constituents are low, single-analyte methods lack sensitivity, multi-analyte approaches have not been fully explored, and reference values are not established. When all pre-analytical, analytical and post-analytical requirements are met, EBC biomarkers as well as biomarker patterns can be selected and EBC analysis can hopefully be used in clinical practice, both in diagnosis and in the longitudinal follow-up of patients, resulting in better disease outcomes. PMID:24266297
Kumar, B Vinodh; Mohan, Thuthi
2018-01-01
Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a sigma scale by calculating the sigma metrics for individual parameters, and to follow the Westgard guidelines for the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are the IQC coefficient of variation percentage and the External Quality Assurance Scheme (EQAS) bias percentage for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes as level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes at <6 sigma level, the quality goal index (QGI) was <0.8, indicating that the area requiring improvement was imprecision, except for cholesterol, whose QGI >1.2 indicated inaccuracy. This study shows that sigma metrics is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
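A small worked sketch of the two quantities used above, with the commonly quoted formulas sigma = (TEa − |bias|)/CV and QGI = bias/(1.5 × CV); the numbers are illustrative, not the study's data.

    # Sigma metric and quality goal index (QGI) from QC data (illustrative values).
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        return (tea_pct - abs(bias_pct)) / cv_pct

    def quality_goal_index(bias_pct, cv_pct):
        return bias_pct / (1.5 * cv_pct)

    analytes = {                       # analyte: (TEa %, bias %, CV %) - hypothetical
        "alkaline phosphatase": (30.0, 3.0, 4.0),
        "cholesterol":          (9.0, 6.0, 3.0),
    }
    for name, (tea, bias, cv) in analytes.items():
        s = sigma_metric(tea, bias, cv)
        q = quality_goal_index(bias, cv)
        cause = "imprecision" if q < 0.8 else ("inaccuracy" if q > 1.2 else "both")
        print(f"{name}: sigma = {s:.1f}, QGI = {q:.2f}, improvement target: {cause}")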
Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason
2016-01-01
With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM3 ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs. PMID:27983713
Numerical and analytical simulation of the production process of ZrO2 hollow particles
NASA Astrophysics Data System (ADS)
Safaei, Hadi; Emami, Mohsen Davazdah
2017-12-01
In this paper, the production process of hollow particles from agglomerated particles is addressed analytically and numerically. The important parameters affecting this process, in particular the initial porosity level of the particles and the plasma gun type, are investigated. The analytical model adopts a combination of quasi-steady thermal equilibrium and mechanical balance. In the analytical model, the possibility of a solid core existing in agglomerated particles is examined. In this model, a range of particle diameters (50 μm ≤ D_{p0} ≤ 160 μm) and various initial porosities (0.2 ≤ p ≤ 0.7) are considered. The numerical model employs the VOF technique for two-phase compressible flows. The production process of hollow particles from agglomerated particles is simulated, considering an initial diameter of D_{p0} = 60 μm and initial porosities of p = 0.3, p = 0.5, and p = 0.7. Simulation results of the analytical model indicate that the solid core diameter is independent of the initial porosity, whereas the thickness of the particle shell strongly depends on the initial porosity. In both models, a hollow particle may hardly develop at small initial porosity values (p < 0.3), while the particle disintegrates at high initial porosity values (p > 0.6).
Flexible Description and Adaptive Processing of Earth Observation Data through the BigEarth Platform
NASA Astrophysics Data System (ADS)
Gorgan, Dorian; Bacu, Victor; Stefanut, Teodor; Nandra, Cosmin; Mihon, Danut
2016-04-01
Earth Observation data repositories, which expand periodically by several terabytes, are becoming a critical issue for organizations. Managing the storage capacity of such big datasets, access policy, data protection, searching, and complex processing requires high costs that impose efficient solutions to balance the cost and value of data. Data can create value only when it is used, and data protection has to be oriented toward allowing innovation that sometimes depends on creative people, who achieve unexpected valuable results in a flexible and adaptive manner. Users need to describe and experiment with different complex algorithms through analytics in order to extract value from data. The analytics uses descriptive and predictive models to gain valuable knowledge and information from data analysis. Possible solutions for advanced processing of big Earth Observation data are given by HPC platforms such as the cloud. With platforms becoming more complex and heterogeneous, developing applications is even harder, and efficiently mapping these applications to a suitable and optimum platform, working on huge distributed data repositories, is challenging and complex as well, even when using specialized software services. From the user's point of view, an optimum environment gives acceptable execution times, offers a high level of usability by hiding the complexity of the computing infrastructure, and supports open accessibility and control of application entities and functionality. The BigEarth platform [1] supports the entire flow of flexible description of processing by basic operators and adaptive execution over cloud infrastructure [2]. The basic modules of the pipeline, such as the KEOPS [3] set of basic operators, the WorDeL language [4], the Planner for sequential and parallel processing, and the Executor through virtual machines, are detailed as the main components of the BigEarth platform [5]. The presentation exemplifies the development of some Earth Observation oriented applications based on flexible description of processing, and adaptive and portable execution over cloud infrastructure. Main references for further information: [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth [2] Gorgan, D., "Flexible and Adaptive Processing of Earth Observation Data over High Performance Computation Architectures", International Conference and Exhibition Satellite 2015, August 17-19, Houston, Texas, USA. [3] Mihon, D., Bacu, V., Colceriu, V., Gorgan, D., "Modeling of Earth Observation Use Cases through the KEOPS System", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 455-460, (2015). [4] Nandra, C., Gorgan, D., "Workflow Description Language for Defining Big Earth Data Processing Tasks", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 461-468, (2015). [5] Bacu, V., Stefan, T., Gorgan, D., "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 444-454, (2015).
Importance of implementing an analytical quality control system in a core laboratory.
Marques-Garcia, F; Garcia-Codesal, M F; Caro-Narros, M R; Contreras-SanFeliciano, T
2015-01-01
The aim of the clinical laboratory is to provide useful information for screening, diagnosis and monitoring of disease. The laboratory should ensure the quality of the extra-analytical and analytical processes, based on set criteria. To do this, it develops and implements a system of internal quality control designed to detect errors, and compares its data with those of other laboratories through external quality control. In this way it has a tool to verify that the objectives set are being met and, in case of errors, to take corrective actions and ensure the reliability of the results. This article sets out to describe the design and implementation of an internal quality control protocol, as well as its periodic assessment at 6-month intervals to determine compliance with pre-determined specifications (Stockholm Consensus (1)). A total of 40 biochemical and 15 immunochemical methods were evaluated using three different control materials. Next, a standard operating procedure was planned to develop a system of internal quality control that included calculating the error of the analytical process, setting quality specifications, and verifying compliance. The quality control data were then statistically depicted as means, standard deviations, and coefficients of variation, as well as systematic, random, and total errors. The quality specifications were then fixed and the operational rules to apply in the analytical process were calculated. Finally, our data were compared with those of other laboratories through an external quality assurance program. The development of an analytical quality control system is a highly structured process. This should be designed to detect errors that compromise the stability of the analytical process. The laboratory should review its quality indicators, systematic, random and total error at regular intervals, in order to ensure that they are meeting pre-determined specifications and, if not, apply the appropriate corrective actions. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.
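A minimal sketch of the error calculations such a protocol typically performs is shown below, assuming a Westgard-style total error estimate; the control data, target value and allowable total error are placeholders.

import statistics

def qc_summary(results, target, te_allowed):
    """Summarize one internal QC level: imprecision, bias and total error
    versus an allowable total error specification (illustrative only)."""
    mean = statistics.mean(results)
    sd = statistics.stdev(results)
    cv = 100 * sd / mean                      # random error, %
    bias = 100 * (mean - target) / target     # systematic error, %
    total_error = abs(bias) + 1.65 * cv       # common total error estimate
    return {
        "mean": round(mean, 2),
        "cv_%": round(cv, 2),
        "bias_%": round(bias, 2),
        "total_error_%": round(total_error, 2),
        "meets_spec": total_error <= te_allowed,
    }

# Hypothetical glucose control data (mg/dL) against an assumed 6.9% allowable total error
print(qc_summary([101, 99, 103, 98, 100, 102], target=100, te_allowed=6.9))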
Ishibashi, Midori
2015-01-01
The cost, speed, and quality are the three important factors recently indicated by the Ministry of Health, Labour and Welfare (MHLW) for the purpose of accelerating clinical studies. Based on this background, the importance of laboratory tests is increasing, especially in the evaluation of clinical study participants' entry and safety, and drug efficacy. Providing high-quality laboratory tests is therefore mandatory. For adequate quality assurance in laboratory tests, quality control in the three fields of the pre-analytical, analytical, and post-analytical processes is extremely important. There are, however, no detailed written requirements concerning specimen collection, handling, preparation, storage, and shipping. Most laboratory tests for clinical studies are performed onsite in a local laboratory; however, some laboratory tests are performed in offsite central laboratories after specimen shipping. Individual and inter-individual variations are well-known factors affecting laboratory tests. Besides these factors, standardizing specimen collection, handling, preparation, storage, and shipping may improve and maintain the high quality of clinical studies in general. Furthermore, the analytical method, units, and reference interval are also important factors. It is concluded that, to overcome the problems derived from pre-analytical processes, it is necessary to standardize specimen handling in a broad sense.
The electrical properties of zero-gravity processed immiscibles
NASA Technical Reports Server (NTRS)
Lacy, L. L.; Otto, G. H.
1974-01-01
When dispersed or mixed immiscibles are solidified on earth, a large amount of separation of the constituents takes place due to differences in densities. However, when the immiscibles are dispersed and solidified in zero-gravity, density separation does not occur, and unique composite solids can be formed with many new and promising electrical properties. By measuring the electrical resistivity and superconducting critical temperature, Tc, of zero-g processed Ga-Bi samples, it has been found that the electrical properties of such materials are entirely different from the basic constituents and the ground control samples. Our results indicate that space processed immiscible materials may form an entirely new class of electronic materials.
The effect of analytic and experiential modes of thought on moral judgment.
Kvaran, Trevor; Nichols, Shaun; Sanfey, Alan
2013-01-01
According to dual-process theories, moral judgments are the result of two competing processes: a fast, automatic, affect-driven process and a slow, deliberative, reason-based process. Accordingly, these models make clear and testable predictions about the influence of each system. Although a small number of studies have attempted to examine each process independently in the context of moral judgment, no study has yet tried to experimentally manipulate both processes within a single study. In this chapter, a well-established "mode-of-thought" priming technique was used to place participants in either an experiential/emotional or analytic mode while completing a task in which participants provide judgments about a series of moral dilemmas. We predicted that individuals primed analytically would make more utilitarian responses than control participants, while emotional priming would lead to less utilitarian responses. Support was found for both of these predictions. Implications of these findings for dual-process theories of moral judgment will be discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
Accurate mass measurements and their appropriate use for reliable analyte identification.
Godfrey, A Ruth; Brenton, A Gareth
2012-09-01
Accurate mass instrumentation is becoming increasingly available to non-expert users, and the data it produces can be misused, particularly for analyte identification. Current best practice in assigning potential elemental formulae for reliable analyte identification is described, together with modern informatic approaches to analyte elucidation, including chemometric characterisation, data processing and searching using facilities such as the Chemical Abstracts Service (CAS) Registry and Chemspider.
Gravitational Waveforms in the Early Inspiral of Binary Black Hole Systems
NASA Astrophysics Data System (ADS)
Barkett, Kevin; Kumar, Prayush; Bhagwat, Swetha; Brown, Duncan; Scheel, Mark; Szilagyi, Bela; Simulating eXtreme Spacetimes Collaboration
2015-04-01
The inspiral, merger and ringdown of compact object binaries are important targets for gravitational wave detection by aLIGO. Detection and parameter estimation will require long, accurate waveforms for comparison. There are a number of analytical models for generating gravitational waveforms for these systems, but the only way to ensure their consistency and correctness is by comparing with numerical relativity simulations that cover many inspiral orbits. We've simulated a number of binary black hole systems with mass ratio 7 and a moderate, aligned spin on the larger black hole. We have attached these numerical waveforms to analytical waveform models to generate long hybrid gravitational waveforms that span the entire aLIGO frequency band. We analyze the robustness of these hybrid waveforms and measure the faithfulness of different hybrids with each other to obtain an estimate on how long future numerical simulations need to be in order to ensure that waveforms are accurate enough for use by aLIGO.
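For context, the faithfulness (match) between two waveforms is conventionally defined as the noise-weighted overlap maximized over time and phase shifts; the standard form, using common notation rather than anything specific to this work, is:

\[
\mathcal{F}(h_1,h_2) \;=\; \max_{t_c,\phi_c}\;
\frac{\langle h_1 \mid h_2\rangle}{\sqrt{\langle h_1\mid h_1\rangle\,\langle h_2\mid h_2\rangle}},
\qquad
\langle a\mid b\rangle \;=\; 4\,\mathrm{Re}\!\int_{f_{\min}}^{f_{\max}}
\frac{\tilde a(f)\,\tilde b^{*}(f)}{S_n(f)}\,df,
\]

where \(S_n(f)\) is the detector noise power spectral density over the aLIGO frequency band.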
A study of the temporal robustness of the growing global container-shipping network
Wang, Nuo; Wu, Nuan; Dong, Ling-ling; Yan, Hua-kun; Wu, Di
2016-01-01
Whether they thrive as they grow must be determined for all constantly expanding networks. However, few studies have focused on this important network feature or on the development of quantitative analytical methods. Given the formation and growth of the global container-shipping network, we proposed the concept of network temporal robustness and a quantitative method for measuring it. As an example, we collected container liner companies' data at two time points (2004 and 2014) and built a shipping network with ports as nodes and routes as links. We thus obtained a quantitative value of the temporal robustness. The temporal robustness is a significant network property because, for the first time, we can clearly recognize that the shipping network has become more vulnerable to damage over the last decade: when the node failure scale reached 50% of the entire network, the temporal robustness was approximately −0.51% for random errors and −12.63% for intentional attacks. The proposed concept and analytical method described in this paper are significant for other network studies. PMID:27713549
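The abstract does not give the exact definition of temporal robustness, so the sketch below only illustrates the generic ingredients: a robustness measure under random versus targeted node removal on two network snapshots, computed with networkx; the graphs and the metric are stand-ins, not the authors' data or formula.

import random
import networkx as nx

def robustness(G, targeted=False, fraction=0.5, seed=0):
    """Relative size of the largest connected component after removing a
    fraction of nodes, at random or in decreasing-degree order
    (an illustrative robustness measure, not the paper's definition)."""
    rng = random.Random(seed)
    nodes = sorted(G.degree, key=lambda kv: -kv[1]) if targeted else list(G.degree)
    if not targeted:
        rng.shuffle(nodes)
    k = int(fraction * G.number_of_nodes())
    H = G.copy()
    H.remove_nodes_from([n for n, _ in nodes[:k]])
    if H.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

# Toy snapshots standing in for the 2004 and 2014 shipping networks
G2004 = nx.barabasi_albert_graph(300, 2, seed=1)
G2014 = nx.barabasi_albert_graph(600, 2, seed=2)
for label, G in (("2004", G2004), ("2014", G2014)):
    print(label, "random:", round(robustness(G), 3),
          "targeted:", round(robustness(G, targeted=True), 3))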
NASA Technical Reports Server (NTRS)
Mule, Peter; Hill, Michael D.; Sampler, Henry P.
2000-01-01
The Microwave Anisotropy Probe (MAP) Observatory, scheduled for a fall 2000 launch, is designed to measure temperature fluctuations (anisotropy) and produce a high sensitivity and high spatial resolution (better than 0.3 deg.) map of the cosmic microwave background (CMB) radiation over the entire sky between 22 and 90 GHz. MAP utilizes back-to-back composite Gregorian telescopes supported on a composite truss structure to focus the microwave signals into 10 differential microwave receivers. Proper position and shape of the telescope reflectors at the operating temperature of approximately 90 K is a critical element to ensuring mission success. We describe the methods and analysis used to validate the in-flight position and shape predictions for the reflectors based on photogrammetric (PG) metrology data taken under vacuum with the reflectors at approximately 90 K. Contour maps showing reflector distortion analytical extrapolations were generated. The resulting reflector distortion data are shown to be crucial to the analytical assessment of the MAP instrument's microwave system in-flight performance.
Science-Driven Computing: NERSC's Plan for 2006-2010
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simon, Horst D.; Kramer, William T.C.; Bailey, David H.
NERSC has developed a five-year strategic plan focusing on three components: Science-Driven Systems, Science-Driven Services, and Science-Driven Analytics. (1) Science-Driven Systems: Balanced introduction of the best new technologies for complete computational systems--computing, storage, networking, visualization and analysis--coupled with the activities necessary to engage vendors in addressing the DOE computational science requirements in their future roadmaps. (2) Science-Driven Services: The entire range of support activities, from high-quality operations and user services to direct scientific support, that enable a broad range of scientists to effectively use NERSC systems in their research. NERSC will concentrate on resources needed to realize the promise of the new highly scalable architectures for scientific discovery in multidisciplinary computational science projects. (3) Science-Driven Analytics: The architectural and systems enhancements and services required to integrate NERSC's powerful computational and storage resources to provide scientists with new tools to effectively manipulate, visualize, and analyze the huge data sets derived from simulations and experiments.
Multi-scale simulations of droplets in generic time-dependent flows
NASA Astrophysics Data System (ADS)
Milan, Felix; Biferale, Luca; Sbragaglia, Mauro; Toschi, Federico
2017-11-01
We study the deformation and dynamics of droplets in time-dependent flows using a diffuse interface model for two immiscible fluids. The numerical simulations are at first benchmarked against analytical results of steady droplet deformation, and further extended to the more interesting case of time-dependent flows. The results of these time-dependent numerical simulations are compared against analytical models available in the literature, which assume the droplet shape to be an ellipsoid at all times, with time-dependent major and minor axis. In particular we investigate the time-dependent deformation of a confined droplet in an oscillating Couette flow for the entire capillary range until droplet break-up. In this way these multi component simulations prove to be a useful tool to establish from ``first principles'' the dynamics of droplets in complex flows involving multiple scales. European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No 642069. & European Research Council under the European Community's Seventh Framework Program, ERC Grant Agreement No 339032.
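For the steady benchmark, droplet deformation is commonly characterized by the Taylor parameter and the capillary number; the standard definitions are given below for context, using conventional notation rather than necessarily the authors'.

\[
D \;=\; \frac{L-B}{L+B},
\qquad
\mathrm{Ca} \;=\; \frac{\mu_c\,\dot\gamma\,R}{\sigma},
\]

where \(L\) and \(B\) are the major and minor axes of the deformed (assumed ellipsoidal) droplet, \(\mu_c\) the matrix viscosity, \(\dot\gamma\) the shear rate, \(R\) the undeformed droplet radius and \(\sigma\) the interfacial tension; in the small-deformation limit \(D \propto \mathrm{Ca}\).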
Hydrodynamic effects in a misaligned radial face seal
NASA Technical Reports Server (NTRS)
Etsion, I.
1978-01-01
Hydrodynamic effects in a flat seal having an angular misalignment are analyzed, taking into account the radial variation in seal clearance. An analytical solution for axial force, restoring moment, and transverse moment is presented that covers the whole range from zero to full angular misalignment. Both low pressure seals with cavitating flow and high pressure seals with full fluid film are considered. Strong coupling is demonstrated between angular misalignment and transverse moment which leads the misalignment vector by 90 degrees. This transverse moment, which is entirely due to hydrodynamic effects, may be a significant factor in seal operating mechanism.
Hydrodynamic effects in a misaligned radial face seal
NASA Technical Reports Server (NTRS)
Etsion, I.
1977-01-01
Hydrodynamic effects in a flat seal having an angular misalignment are analyzed, taking into account the radial variation in seal clearance. An analytical solution for axial force, restoring moment, and transverse moment is presented that covers the whole range from zero to full angular misalignment. Both low pressure seals with cavitating flow and high pressure seals with full fluid film are considered. Strong coupling is demonstrated between angular misalignment and transverse moment which leads the misalignment vector by 90 degrees. This transverse moment, which is entirely due to hydrodynamic effects, is a significant factor in the seal operating mechanism.
NASA/FAA Tailplane Icing Program Overview
NASA Technical Reports Server (NTRS)
Ratvasky, Thomas P.; VanZante, Judith Foss; Riley, James T.
1999-01-01
The effects of tailplane icing were investigated in a four-year NASA/FAA Tailplane Icing Program (TIP). This research program was developed to improve the understanding of iced tailplane aeroperformance and aircraft aerodynamics, and to develop design and training aids to help reduce the number of incidents and accidents caused by tailplane icing. To do this, the TIP was constructed with elements that included icing wind tunnel testing, dry-air aerodynamic wind tunnel testing, flight tests, and analytical code development. This paper provides an overview of the entire program, demonstrating the interconnectivity of the program elements, and reports on current accomplishments.
Vibration and damping of laminated, composite-material plates including thickness-shear effects
NASA Technical Reports Server (NTRS)
Bert, C. W.; Siu, C. C.
1972-01-01
An analytical investigation of sinusoidally forced vibration of laminated, anisotropic plates including bending-stretching coupling, thickness-shear flexibility, all three types of inertia effects, and material damping is presented. In the analysis the effects of thickness-shear deformation are considered by the use of a shear correction factor K, analogous to that used by Mindlin for homogeneous plates. Two entirely different approaches for calculating the thickness-shear factor for a laminate are presented. Numerical examples indicate that the value of K depends on the layer properties and the stacking sequence of the laminate.
Optimization of magnet end-winding geometry
NASA Astrophysics Data System (ADS)
Reusch, Michael F.; Weissenburger, Donald W.; Nearing, James C.
1994-03-01
A simple, almost entirely analytic, method for the optimization of stress-reduced magnet-end winding paths for ribbon-like superconducting cable is presented. This technique is based on characterization of these paths as developable surfaces, i.e., surfaces whose intrinsic geometry is flat. The method is applicable to winding mandrels of arbitrary geometry. Computational searches for optimal winding paths are easily implemented via the technique. Its application to the end configuration of cylindrical Superconducting Super Collider (SSC)-type magnets is discussed. The method may be useful for other engineering problems involving the placement of thin sheets of material.
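The defining property of a developable surface, which underlies this characterization of winding paths, is that its Gaussian curvature vanishes everywhere; this standard condition is recorded below for context.

\[
K \;=\; \frac{LN - M^{2}}{EG - F^{2}} \;=\; 0,
\]

where \(E, F, G\) and \(L, M, N\) are the coefficients of the first and second fundamental forms of the surface. \(K = 0\) means the surface can be flattened onto a plane without stretching, which is why a flat, ribbon-like cable can follow such a winding path without in-plane strain.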
Patterned gallium surfaces as molecular mirrors.
Bossi, Alessandra; Rivetti, Claudio; Mangiarotti, Laura; Whitcombe, Michael J; Turner, Anthony P F; Piletsky, Sergey A
2007-09-30
An entirely new means of printing molecular information on a planar film, involving casting nanoscale impressions of the template protein molecules in molten gallium, is presented here for the first time. The metallic imprints not only replicate the shape and size of the proteins used as templates but also show specific binding for the template species. Such a simple approach to the creation of antibody-like properties in metallic mirrors can lead to applications in separations, microfluidic devices, and the development of new optical and electronic sensors, and will be of interest to chemists, materials scientists, analytical specialists, and electronic engineers.
Errors in clinical laboratories or errors in laboratory medicine?
Plebani, Mario
2006-01-01
Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
Analytical Applications of NMR: Summer Symposium on Analytical Chemistry.
ERIC Educational Resources Information Center
Borman, Stuart A.
1982-01-01
Highlights a symposium on analytical applications of nuclear magnetic resonance spectroscopy (NMR), discussing pulse Fourier transformation technique, two-dimensional NMR, solid state NMR, and multinuclear NMR. Includes description of ORACLE, an NMR data processing system at Syracuse University using real-time color graphics, and algorithms for…
Features Students Really Expect from Learning Analytics
ERIC Educational Resources Information Center
Schumacher, Clara; Ifenthaler, Dirk
2016-01-01
In higher education settings more and more learning is facilitated through online learning environments. To support and understand students' learning processes better, learning analytics offers a promising approach. The purpose of this study was to investigate students' expectations toward features of learning analytics systems. In a first…
Evaluation of Analytical Errors in a Clinical Chemistry Laboratory: A 3 Year Experience
Sakyi, AS; Laing, EF; Ephraim, RK; Asibey, OF; Sadique, OK
2015-01-01
Background: Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there is an increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. Aim: We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases and discussed strategies pertinent to our settings to minimize their occurrence. Materials and Methods: We described the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with Graph Pad Prism 5(GraphPad Software Inc. CA USA). Results: A total of 589,510 tests was performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests reduced significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90) along with the years. Conclusion: Analytical errors are embedded within our total process setup especially pre-analytical and post-analytical phases. Strategic measures including quality assessment programs for staff involved in pre-analytical processes should be intensified. PMID:25745569
Text-based Analytics for Biosurveillance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles, Lauren E.; Smith, William P.; Rounds, Jeremiah
The ability to prevent, mitigate, or control a biological threat depends on how quickly the threat is identified and characterized. Ensuring the timely delivery of data and analytics is an essential aspect of providing adequate situational awareness in the face of a disease outbreak. This chapter outlines an analytic pipeline for supporting an advanced early warning system that can integrate multiple data sources and provide situational awareness of potential and occurring disease situations. The pipeline includes real-time automated data analysis founded on natural language processing (NLP), semantic concept matching, and machine learning techniques, to enrich content with metadata related to biosurveillance. Online news articles are presented as an example use case for the pipeline, but the processes can be generalized to any textual data. In this chapter, the mechanics of a streaming pipeline are briefly discussed as well as the major steps required to provide targeted situational awareness. The text-based analytic pipeline includes various processing steps as well as identifying article relevance to biosurveillance (e.g., relevance algorithm) and article feature extraction (who, what, where, why, how, and when).
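A minimal sketch of the relevance-classification step is shown below, using scikit-learn as a stand-in for whatever models the pipeline actually employs; the articles, labels and threshold are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: article snippets labeled relevant (1) or
# not relevant (0) to biosurveillance.
docs = [
    "Officials report a cluster of avian influenza cases near poultry farms",
    "Local team wins the regional football championship",
    "Hospital admissions rise after suspected norovirus outbreak on cruise ship",
    "New smartphone model released with larger display",
]
labels = [1, 0, 1, 0]

relevance_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                                LogisticRegression(max_iter=1000))
relevance_model.fit(docs, labels)

incoming = "Health ministry investigates unusual respiratory illness in region"
score = relevance_model.predict_proba([incoming])[0, 1]
print(f"relevance score: {score:.2f}")  # route onward for feature extraction if above a chosen threshold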
Wada, Kazushige; Nittono, Hiroshi
2004-06-01
The reasoning process in the Wason selection task was examined by measuring card inspection times in the letter-number and drinking-age problems. 24 students were asked to solve the problems presented on a computer screen. Only the card touched with a mouse pointer was visible, and the total exposure time of each card was measured. Participants were allowed to cancel their previous selections at any time. Although rethinking was encouraged, the cards once selected were rarely cancelled (10% of the total selections). Moreover, most of the cancelled cards were reselected (89% of the total cancellations). Consistent with previous findings, inspection times were longer for selected cards than for nonselected cards. These results suggest that card selections are determined largely by initial heuristic processes and rarely reversed by subsequent analytic processes. The present study gives further support for the heuristic-analytic dual process theory.
STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.
2010-09-02
Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
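The abstract states only that the UCL95% depends on the number of samples, their mean and their standard deviation; a plausible reading is the usual one-sided Student-t construction shown below, which is not necessarily the exact formula in the report. The same construction would apply to the Tank 19F analysis that follows.

\[
\mathrm{UCL}_{95\%} \;=\; \bar{x} \;+\; t_{0.95,\,n-1}\,\frac{s}{\sqrt{n}},
\]

with \(\bar{x}\) the mean of the \(n = 6\) scrape-sample results (each averaged over three analytical determinations), \(s\) their standard deviation, and \(t_{0.95,\,n-1}\) the 95th percentile of Student's \(t\) distribution with \(n-1\) degrees of freedom.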
STATISTICAL ANALYSIS OF TANK 19F FLOOR SAMPLE RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.
2010-09-02
Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
Analytical and Clinical Performance Evaluation of the Abbott Architect PIVKA Assay.
Ko, Dae-Hyun; Hyun, Jungwon; Kim, Hyun Soo; Park, Min-Jeong; Kim, Jae-Seok; Park, Ji-Young; Shin, Dong Hoon; Cho, Hyoun Chan
2018-01-01
Protein induced by vitamin K absence (PIVKA) is measured using various assays and is used to help diagnose hepatocellular carcinoma. The present study evaluated the analytical and clinical performances of the recently released Abbott Architect PIVKA assay. Precision, linearity, and correlation tests were performed in accordance with the Clinical Laboratory Standardization Institute guidelines. Sample type suitability was assessed using serum and plasma samples from the same patients, and the reference interval was established using sera from 204 healthy individuals. The assay had coefficients of variation of 3.2-3.5% and intra-laboratory variation of 3.6-5.5%. Linearity was confirmed across the entire measurable range. The Architect PIVKA assay was comparable to the Lumipulse PIVKA assay, and the plasma and serum samples provided similar results. The lower reference limit was 13.0 mAU/mL and the upper reference limit was 37.4 mAU/mL. The ability of the Architect PIVKA assay to detect hepatocellular carcinoma was comparable to that of the alpha-fetoprotein test and the Lumipulse PIVKA assay. The Architect PIVKA assay provides excellent analytical and clinical performance, is simple for clinical laboratories to adopt, and has improved sample type suitability that could broaden the assay's utility. © 2018 by the Association of Clinical Scientists, Inc.
Analytical-HZETRN Model for Rapid Assessment of Active Magnetic Radiation Shielding
NASA Technical Reports Server (NTRS)
Washburn, S. A.; Blattnig, S. R.; Singleterry, R. C.; Westover, S. C.
2014-01-01
The use of active radiation shielding designs has the potential to reduce the radiation exposure received by astronauts on deep-space missions at a significantly lower mass penalty than designs utilizing only passive shielding. Unfortunately, the determination of the radiation exposure inside these shielded environments often involves lengthy and computationally intensive Monte Carlo analysis. In order to evaluate the large trade space of design parameters associated with a magnetic radiation shield design, an analytical model was developed for the determination of flux inside a solenoid magnetic field due to the Galactic Cosmic Radiation (GCR) radiation environment. This analytical model was then coupled with NASA's radiation transport code, HZETRN, to account for the effects of passive/structural shielding mass. The resulting model can rapidly obtain results for a given configuration and can therefore be used to analyze an entire trade space of potential variables in less time than is required for even a single Monte Carlo run. Analyzing this trade space for a solenoid magnetic shield design indicates that active shield bending powers greater than 15 Tm and passive/structural shielding thicknesses greater than 40 g/cm2 have a limited impact on reducing dose equivalent values. Also, it is shown that higher magnetic field strengths are more effective than thicker magnetic fields at reducing dose equivalent.
An Analytical Model for the Evolution of the Protoplanetary Disks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khajenabi, Fazeleh; Kazrani, Kimia; Shadmehri, Mohsen, E-mail: f.khajenabi@gu.ac.ir
We obtain a new set of analytical solutions for the evolution of a self-gravitating accretion disk by holding the Toomre parameter close to its threshold and obtaining the stress parameter from the cooling rate. In agreement with the previous numerical solutions, furthermore, the accretion rate is assumed to be independent of the disk radius. Extreme situations where the entire disk is either optically thick or optically thin are studied independently, and the obtained solutions can be used for exploring the early or the final phases of a protoplanetary disk evolution. Our solutions exhibit decay of the accretion rate as a power-law function of the age of the system, with exponents −0.75 and −1.04 for optically thick and thin cases, respectively. Our calculations permit us to explore the evolution of the snow line analytically. The location of the snow line in the optically thick regime evolves as a power-law function of time with the exponent −0.16; however, when the disk is optically thin, the snow line location evolves as a power-law function of time with the exponent −0.7, a stronger dependence on time. This means that in an optically thin disk inward migration of the snow line is faster than in an optically thick disk.
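The decay laws quoted above can be written compactly (normalizations omitted):

\[
\dot{M}(t) \;\propto\;
\begin{cases}
t^{-0.75}, & \text{optically thick disk},\\
t^{-1.04}, & \text{optically thin disk},
\end{cases}
\qquad
r_{\mathrm{snow}}(t) \;\propto\;
\begin{cases}
t^{-0.16}, & \text{optically thick},\\
t^{-0.7}, & \text{optically thin}.
\end{cases}
\]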
Big data analytics for the Future Circular Collider reliability and availability studies
NASA Astrophysics Data System (ADS)
Begy, Volodimir; Apollonio, Andrea; Gutleber, Johannes; Martin-Marquez, Manuel; Niemi, Arto; Penttinen, Jussi-Pekka; Rogova, Elena; Romero-Marin, Antonio; Sollander, Peter
2017-10-01
Responding to the European Strategy for Particle Physics update 2013, the Future Circular Collider study explores scenarios of circular frontier colliders for the post-LHC era. One branch of the study assesses industrial approaches to model and simulate the reliability and availability of the entire particle collider complex based on the continuous monitoring of CERN’s accelerator complex operation. The modelling is based on an in-depth study of the CERN injector chain and LHC, and is carried out as a cooperative effort with the HL-LHC project. The work so far has revealed that a major challenge is obtaining accelerator monitoring and operational data with sufficient quality, to automate the data quality annotation and calculation of reliability distribution functions for systems, subsystems and components where needed. A flexible data management and analytics environment that permits integrating the heterogeneous data sources, the domain-specific data quality management algorithms and the reliability modelling and simulation suite is a key enabler to complete this accelerator operation study. This paper describes the Big Data infrastructure and analytics ecosystem that has been put in operation at CERN, serving as the foundation on which reliability and availability analysis and simulations can be built. This contribution focuses on data infrastructure and data management aspects and presents case studies chosen for its validation.
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.
2015-10-01
Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
Interactive Management and Updating of Spatial Data Bases
NASA Technical Reports Server (NTRS)
French, P.; Taylor, M.
1982-01-01
The decision making process, whether for power plant siting, load forecasting or energy resource planning, invariably involves a blend of analytical methods and judgement. Management decisions can be improved by the implementation of techniques which permit an increased comprehension of results from analytical models. Even where analytical procedures are not required, decisions can be aided by improving the methods used to examine spatially and temporally variant data. How the use of computer aided planning (CAP) programs and the selection of a predominant data structure can improve the decision making process is discussed.
Ibmdbpy-spatial : An Open-source implementation of in-database geospatial analytics in Python
NASA Astrophysics Data System (ADS)
Roy, Avipsa; Fouché, Edouard; Rodriguez Morales, Rafael; Moehler, Gregor
2017-04-01
As the amount of spatial data acquired from several geodetic sources has grown over the years and as data infrastructure has become more powerful, the need for adoption of in-database analytic technology within geosciences has grown rapidly. In-database analytics on spatial data stored in a traditional enterprise data warehouse enables much faster retrieval and analysis for making better predictions about risks and opportunities, identifying trends and spotting anomalies. Although there are a number of open-source spatial analysis libraries like geopandas and shapely available today, most of them have been restricted to manipulation and analysis of geometric objects, with a dependency on GEOS and similar libraries. We present an open-source software package, written in Python, to fill the gap between spatial analysis and in-database analytics. Ibmdbpy-spatial provides a geospatial extension to the ibmdbpy package, implemented in 2015. It provides an interface for spatial data manipulation and access to in-database algorithms in IBM dashDB, a data warehouse platform with a spatial extender that runs as a service on IBM's cloud platform called Bluemix. Working in-database reduces the network overload, as the complete data need not be replicated into the user's local system and only a subset of the entire dataset needs to be fetched into memory in a single instance. Ibmdbpy-spatial accelerates Python analytics by seamlessly pushing operations written in Python into the underlying database for execution using the dashDB spatial extender, thereby benefiting from in-database performance-enhancing features, such as columnar storage and parallel processing. The package is currently supported on Python versions from 2.7 up to 3.4. The basic architecture of the package consists of three main components: 1) a connection to dashDB represented by the instance IdaDataBase, which uses a middleware API, namely pypyodbc or jaydebeapi, to establish the database connection via ODBC or JDBC, respectively; 2) an instance representing the spatial data stored in the database as a dataframe in Python, called the IdaGeoDataFrame, with a specific geometry attribute which recognises a planar geometry column in dashDB; and 3) Python wrappers for spatial functions like within, distance, area, buffer and more, which dashDB currently supports, to make the querying process from Python much simpler for the users. The spatial functions translate well-known geopandas-like syntax into SQL queries, utilising the database connection to perform spatial operations in-database, and can operate on single geometries as well as on two different geometries from different IdaGeoDataFrames. The in-database queries strictly follow the standards of the OpenGIS Implementation Specification for Geographic information - Simple feature access for SQL. The results of the operations can thereby be accessed dynamically via interactive Jupyter notebooks from any system which supports Python, without any additional dependencies, and can also be combined with other open-source libraries such as matplotlib and folium within Jupyter notebooks for visualization purposes. We built a use case to analyse crime hotspots in New York City to validate our implementation and visualized the results as a choropleth map for each borough.
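A hypothetical usage sketch follows, based on the component names given in the abstract (IdaDataBase, IdaGeoDataFrame, within); the import paths, constructor arguments, table names and filtering syntax are assumptions and may differ from the released package.

# Hypothetical usage sketch based on the component names given above;
# connection strings, table names and exact signatures are assumptions.
from ibmdbpy import IdaDataBase
from ibmdbpy.geo import IdaGeoDataFrame  # assumed import path

# 1) Connect to a dashDB instance (ODBC DSN here; JDBC is also described above).
idadb = IdaDataBase(dsn="DASHDB")

# 2) Wrap in-database tables holding crime incidents and borough polygons.
#    The geometry keyword is assumed; the package may instead require set_geometry().
crimes = IdaGeoDataFrame(idadb, "NYC_CRIMES", geometry="SHAPE")
boroughs = IdaGeoDataFrame(idadb, "NYC_BOROUGHS", geometry="SHAPE")

# 3) Spatial predicates are pushed down to the dashDB spatial extender,
#    so only the (small) result set is pulled into the notebook.
manhattan = boroughs[boroughs["NAME"] == "Manhattan"]   # pandas-like filter, assumed
in_manhattan = crimes.within(manhattan)
print(in_manhattan.head())

idadb.close()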
Durning, Steven J; Dong, Ting; Artino, Anthony R; van der Vleuten, Cees; Holmboe, Eric; Schuwirth, Lambert
2015-08-01
An ongoing debate exists in the medical education literature regarding the potential benefits of pattern recognition (non-analytic reasoning), actively comparing and contrasting diagnostic options (analytic reasoning), or using a combination approach. Studies have not, however, explicitly explored faculty's thought processes while tackling clinical problems through the lens of dual process theory to inform this debate. Further, these thought processes have not been studied in relation to the difficulty of the task or other potential mediating influences such as personal factors and fatigue, which could also be influenced by personal factors such as sleep deprivation. We therefore sought to determine which reasoning process(es) were used when answering clinically oriented multiple-choice questions (MCQs) and whether these processes differed based on the dual process theory characteristics: accuracy, reading time, and answering time, as well as psychometrically determined item difficulty and sleep deprivation. We performed a think-aloud procedure to explore faculty's thought processes while taking these MCQs, coding think-aloud data based on reasoning process (analytic, non-analytic, guessing, or a combination of processes) as well as word count, number of stated concepts, reading time, answering time, and accuracy. We also included questions regarding the amount of work in the recent past. We then conducted statistical analyses to examine the associations between these measures, such as correlations between the frequencies of reasoning processes and item accuracy and difficulty. We also observed the total frequencies of different reasoning processes when answers were correct and when they were incorrect. Regardless of whether the questions were classified as 'hard' or 'easy', non-analytical reasoning led to the correct answer more often than to an incorrect answer. Significant correlations were found between the self-reported recent number of hours worked and both think-aloud word count and the number of concepts used in the reasoning, but not item accuracy. When all MCQs were included, 19% of the variance in correctness could be explained by the frequency of expression of these three think-aloud processes (analytic, non-analytic, or combined). We found evidence to support the notion that the difficulty of an item in a test is not a systematic feature of the item itself but is always a result of the interaction between the item and the candidate. Use of analytic reasoning did not appear to improve accuracy. Our data suggest that individuals do not apply either System 1 or System 2 exclusively, but instead fall along a continuum, with some individuals falling at one end of the spectrum.
Modeling Choice Under Uncertainty in Military Systems Analysis
1991-11-01
operators rather than fuzzy operators. This is suggested for further research. 4.3 ANALYTIC HIERARCHICAL PROCESS ( AHP ) In AHP , objectives, functions and...14 4.1 IMPRECISELY SPECIFIED MULTIPLE A’ITRIBUTE UTILITY THEORY... 14 4.2 FUZZY DECISION ANALYSIS...14 4.3 ANALYTIC HIERARCHICAL PROCESS ( AHP ) ................................... 14 4.4 SUBJECTIVE TRANSFER FUNCTION APPROACH
Optimizing an Immersion ESL Curriculum Using Analytic Hierarchy Process
ERIC Educational Resources Information Center
Tang, Hui-Wen Vivian
2011-01-01
The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative…
Understanding Customer Product Choices: A Case Study Using the Analytical Hierarchy Process
Robert L. Smith; Robert J. Bush; Daniel L. Schmoldt
1996-01-01
The Analytical Hierarchy Process (AHP) was used to characterize the bridge material selection decisions of highway officials across the United States. Understanding product choices by utilizing the AHP allowed us to develop strategies for increasing the use of timber in bridge construction. State Department of Transportation engineers, private consulting engineers, and...
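A minimal sketch of the weight-derivation step at the core of AHP is given below: priority weights from the principal eigenvector of a pairwise comparison matrix, plus Saaty's consistency ratio. The criteria and judgments are invented for illustration and are not taken from the study.

import numpy as np

def ahp_weights(pairwise):
    """Priority weights (principal eigenvector) and consistency ratio
    for a reciprocal pairwise comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)   # Saaty's random indices
    return w, ci / ri

# Hypothetical judgments for bridge-material criteria: cost, durability, maintenance
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(A)
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))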
Literature Review on Processing and Analytical Methods for ...
Report The purpose of this report was to survey the open literature to determine the current state of the science regarding the processing and analytical methods currently available for recovery of F. tularensis from water and soil matrices, and to determine what gaps remain in the collective knowledge concerning F. tularensis identification from environmental samples.
This paper describes a systematic method for comparing options for the long-term management of surplus elemental mercury in the U.S., using the Analytic Hierarchy Process (AHP) as embodied in commercially available Expert Choice software. A limited scope multi-criteria decision-a...
Finley, Anna J.; Tang, David; Schmeichel, Brandon J.
2015-01-01
Prior research has found that persons who favor more analytic modes of thought are less religious. We propose that individual differences in analytic thought are associated with reduced religious beliefs particularly when analytic thought is measured (hence, primed) first. The current study provides a direct replication of prior evidence that individual differences in analytic thinking are negatively related to religious beliefs when analytic thought is measured before religious beliefs. When religious belief is measured before analytic thinking, however, the negative relationship is reduced to non-significance, suggesting that the link between analytic thought and religious belief is more tenuous than previously reported. The current study suggests that whereas inducing analytic processing may reduce religious belief, more analytic thinkers are not necessarily less religious. The potential for measurement order to inflate the inverse correlation between analytic thinking and religious beliefs deserves additional consideration. PMID:26402334
NASA Astrophysics Data System (ADS)
Tahri, Meryem; Maanan, Mohamed; Hakdaoui, Mustapha
2016-04-01
This paper presents a method to assess vulnerability to coastal risks, such as coastal erosion or submersion, by applying the Fuzzy Analytic Hierarchy Process (FAHP) and spatial analysis techniques with a Geographic Information System (GIS). The coast of Mohammedia, located in Morocco, was chosen as the study site to implement and validate the proposed framework by applying a GIS-FAHP based methodology. The coastal risk vulnerability mapping follows multi-parametric causative factors such as sea level rise, significant wave height, tidal range, coastal erosion, elevation, geomorphology and distance to urban areas. The Fuzzy Analytic Hierarchy Process methodology enables the calculation of the corresponding criteria weights. The results show that the coastline of Mohammedia is characterized by moderate, high and very high levels of vulnerability to coastal risk. The high vulnerability areas are situated in the east at Monika and Sablette beaches. This technical approach is based on the efficiency of Geographic Information System tools combined with the Fuzzy Analytic Hierarchy Process to help decision makers find optimal strategies to minimize coastal risks.
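In FAHP, the crisp Saaty judgments are commonly replaced by triangular fuzzy numbers before weights are derived; the standard definition is recorded below for context, though the exact variant used in this study is not specified in the abstract.

\[
\tilde{a} \;=\; (l, m, u),
\qquad
\mu_{\tilde a}(x) \;=\;
\begin{cases}
\dfrac{x-l}{m-l}, & l \le x \le m,\\
\dfrac{u-x}{u-m}, & m \le x \le u,\\
0, & \text{otherwise},
\end{cases}
\]

with \(l \le m \le u\) the lower, modal and upper judgment values, and the reciprocal judgment taken as \(\tilde a^{-1} = (1/u,\,1/m,\,1/l)\).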
Validation of a DICE Simulation Against a Discrete Event Simulation Implemented Entirely in Code.
Möller, Jörgen; Davis, Sarah; Stevenson, Matt; Caro, J Jaime
2017-10-01
Modeling is an essential tool for health technology assessment, and various techniques for conceptualizing and implementing such models have been described. Recently, a new method has been proposed, the discretely integrated condition event (DICE) simulation, which enables frequently employed approaches to be specified using a common, simple structure that can be entirely contained and executed within widely available spreadsheet software. To assess whether a DICE simulation provides results equivalent to an existing discrete event simulation, a comparison was undertaken. A model of osteoporosis and its management, programmed entirely in Visual Basic for Applications and made public by the National Institute for Health and Care Excellence (NICE) Decision Support Unit, was downloaded and used to guide construction of its DICE version in Microsoft Excel®. The DICE model was then run using the same inputs and settings, and the results were compared. The DICE version produced results that are nearly identical to the original ones, with differences that would not affect the decision direction of the incremental cost-effectiveness ratios (<1% discrepancy), despite the stochastic nature of the models. The main limitation of the simple DICE version is its slow execution speed. DICE simulation did not alter the results and thus should provide a valid way to design and implement decision-analytic models without requiring specialized software or custom programming. Additional effort is needed to speed up execution.
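For reference, the incremental cost-effectiveness ratio being compared between the two implementations is simply the extra cost divided by the extra health gain. The minimal sketch below uses made-up costs and QALYs, not the NICE model's outputs.

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Illustrative numbers only; a <1% discrepancy in costs or QALYs between two
# model implementations barely moves the resulting ratio.
print(icer(cost_new=12_000.0, qaly_new=6.10, cost_ref=9_500.0, qaly_ref=5.85))
```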
Design Concepts for an Outage Control Center Information Dashboard
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hugo, Jacques Victor; St Germain, Shawn Walter; Thompson, Cheradan Jo
The nuclear industry, and the business world in general, is facing a rapidly increasing amount of data to be dealt with on a daily basis. In the last two decades, the steady improvement of data storage devices and of the means to create and collect data has influenced the manner in which we deal with information. Most data is still stored without filtering and refinement for later use. Many functions at a nuclear power plant generate vast amounts of data, with scheduled and unscheduled outages being a prime example of a source of some of the most complex data sets at the plant. To make matters worse, modern information and communications technology makes it possible to collect and store data faster than our ability to use it for making decisions. However, in most applications, especially outages, raw data has no value in itself; instead, managers, engineers and other specialists want to extract the information contained in it. The complexity and sheer volume of data can lead to information overload: getting lost in data that may be irrelevant to the task at hand, processed in an inappropriate way, or presented in an ineffective way. To prevent information overload, many data sources are simply ignored, and production opportunities are lost because utilities lack the ability to deal with the enormous data volumes properly. Decision-makers are often confronted with large amounts of disparate, conflicting and dynamic information, available from multiple heterogeneous sources. Information and communication technologies alone will not solve this problem. Utilities need effective methods to exploit the hidden opportunities and knowledge residing in unexplored data resources. Superior performance before, during and after outages depends upon the right information being available at the right time to the right people. Acquisition of raw data is the easy part; the challenge lies in using advanced analytical, data processing and data visualization methods to turn the data into reliable, comprehensible and actionable information. Techniques like data mining, filtering and analysis only work reliably for well-defined and well-understood problems; the path from data to decision is more complex. The ability to communicate knowledge during outages and emergent issues is crucial. This paper presents an approach to turn the unused data into an opportunity: applying principles from semiotics, human factors and visual analytics to transform the traditional way of processing outage data into media that improve the collective situation awareness, knowledge, decisions, actions and overall performance of the entire outage team, and that also support the reliability, quality and overall effectiveness of maintenance work. The application of the proposed visualization methods will become the medium of a semi-automated analytical process in which humans and machines cooperate, using their respective, distinct capabilities for the most effective results.
Understanding Business Analytics Success and Impact: A Qualitative Study
ERIC Educational Resources Information Center
Parks, Rachida F.; Thambusamy, Ravi
2017-01-01
Business analytics is believed to be a huge boon for organizations since it helps offer timely insights over the competition, helps optimize business processes, and helps generate growth and innovation opportunities. As organizations embark on their business analytics initiatives, many strategic questions, such as how to operationalize business…
FIRST FLOOR PLAN OF REMOTE ANALYTICAL FACILITY (CPP-627) SHOWING REMOTE ANALYTICAL LABORATORY, DECONTAMINATION ROOM, AND MULTICURIE CELL ROOM. INL DRAWING NUMBER 200-0627-00-008-105065. ALTERNATE ID NUMBER 4272-14-102. - Idaho National Engineering Laboratory, Idaho Chemical Processing Plant, Fuel Reprocessing Complex, Scoville, Butte County, ID
NASA Astrophysics Data System (ADS)
Bauer, J. R.; Rose, K.; Romeo, L.; Barkhurst, A.; Nelson, J.; Duran-Sesin, R.; Vielma, J.
2016-12-01
Efforts to prepare for and reduce the risk of hazards, from both natural and anthropogenic sources, that threaten our oceans and coasts require an understanding of the dynamics and interactions between the physical, ecological, and socio-economic systems. Understanding these coupled dynamics is essential as offshore oil & gas exploration and production continues to push into harsher, more extreme environments where risks and uncertainty increase. However, working with large, complex data from various sources and scales to assess risks and potential impacts associated with offshore energy exploration and production poses several challenges to researchers. In order to address these challenges, an integrated assessment model (IAM) was developed at the Department of Energy's (DOE) National Energy Technology Laboratory (NETL) that combines a spatial data infrastructure and an online research platform to manage, process, analyze, and share these large, multidimensional datasets, research products, and the tools and models used to evaluate risk and reduce uncertainty for the entire offshore system, from the subsurface, through the water column, to coastal ecosystems and communities. Here, we discuss the spatial data infrastructure and online research platform, NETL's Energy Data eXchange (EDX), that underpin the offshore IAM, describing how the framework combines multidimensional spatial data and spatio-temporal tools to evaluate risks across the complex matrix of potential environmental, social, and economic impacts stemming from modeled offshore hazard scenarios, such as oil spills or hurricanes. In addition, we discuss the online analytics, tools, and visualization methods integrated into this framework that support availability of and access to data, and that allow for the rapid analysis and effective communication of analytical results to aid a range of decision-making needs.
NASA Astrophysics Data System (ADS)
Qian, Zhao Sheng; Shan, Xiao Yue; Chai, Lu Jing; Chen, Jian Rong; Feng, Hui
2014-10-01
Convenient and simultaneous detection of multiple biomarkers such as DNA and proteins with biocompatible materials and good analytical performance remains a challenge. Herein, we report the separate and simultaneous detection of DNA and bovine α-thrombin (thrombin) based entirely on biocompatible carbon materials, through a specially designed fluorescence on-off-on process. Colorful fluorescence, high emission efficiency, good photostability and excellent biocompatibility make graphene quantum dots (GQDs) the best choice of fluorophore for bioprobes; thus two differently colored GQDs were chemically bonded to a specific oligonucleotide sequence and an aptamer to prepare two probes targeting DNA and thrombin, respectively. Each probe assembles spontaneously on the graphene oxide (GO) platform by π-π stacking and electrostatic attraction; as a result, fast electron transfer within the assembly efficiently quenches the fluorescence of the probe. The presence of DNA or thrombin triggers recognition between the capture sequence and its target DNA (through specific hybridization into a duplex DNA structure) or between thrombin and its aptamer (through formation of the aptamer-substrate complex), which is exploited to achieve separate quantitative analysis of DNA and thrombin. A dual-functional biosensor for simultaneous detection of DNA and thrombin was also constructed by self-assembling the two differently colored probes on the GO platform, and was further evaluated in the presence of various concentrations of DNA and thrombin. Both biosensors, serving as a general detection model for multiple species, exhibit outstanding analytical performance and, because of the excellent biocompatibility of the materials used, are expected to be applicable in vivo.
NASA Astrophysics Data System (ADS)
Block, K. A.; Randel, C.; Ismail, A.; Palumbo, R. V.; Cai, Y.; Carter, M.; Lehnert, K.
2016-12-01
Most geologic samples of New York City (NYC) have been collected during city construction projects. Studies of these samples are essential for our understanding of the local geology as well as the tectonic processes that shaped the entire Appalachian region. Among these is a suite of rare high-grade granulite samples, collected during the construction of the Brooklyn-Queens section of NYC Water Tunnel #3, that has been resting dormant in the basement of the City College of New York (CCNY), studied by a small group of investigators with institutional knowledge but largely undiscoverable and inaccessible to the broader scientific community. Data derived from these samples remain in disparate places, at best in analog format in publications or theses or, at worst, in spreadsheets stored on local machines or on old media such as CDs and even floppy disks. As part of the Interdisciplinary Earth Data Alliance - CCNY joint internship program, three undergraduate students inventoried hundreds of samples and archived sample metadata in the System for Earth Sample Registration (SESAR), a sample metadata registry. Upon registration, each sample was assigned an International GeoSample Number (IGSN), a globally unique and persistent identifier that allows unambiguous citation of samples and linking of disparate analytical data across the literature. The students also compiled geochemical analyses, thin-section images, and associated analytical metadata for publication in the EarthChem Library, where the dataset will be openly and persistently accessible and citable via a DOI (Digital Object Identifier). Not only did the internship result in the illumination of countless dark samples and data values, but it also provided the students with valuable lessons in responsible sample and data management, training that should serve them well in their future scientific endeavors.
De Paolis, Annalisa; Bikson, Marom; Nelson, Jeremy T; de Ru, J Alexander; Packer, Mark; Cardoso, Luis
2017-06-01
Hearing is an extremely complex phenomenon, involving a large number of interrelated variables that are difficult to measure in vivo. In order to investigate this process under simplified and well-controlled conditions, models of sound transmission have been developed through many decades of research. The value of modeling the hearing system is not only to explain the normal function of the hearing system and account for experimental and clinical observations, but also to simulate a variety of pathological conditions that lead to hearing damage and hearing loss, as well as to support the development of auditory implants, effective ear protection and auditory hazard countermeasures. In this paper, we provide a review of the strategies used to model the auditory function of the external, middle and inner ear and the micromechanics of the organ of Corti, along with some of the key results obtained from such modeling efforts. Recent analytical and numerical approaches have incorporated the nonlinear behavior of some parameters and structures into their models. Few models of the integrated hearing system exist; in particular, we describe the evolution of the Auditory Hazard Assessment Algorithm for Human (AHAAH) model, used for prediction of hearing damage due to high-intensity sound pressure. Unlike the AHAAH model, 3D finite element models of the entire hearing system are not yet able to predict auditory risk and threshold shifts. It is expected that both AHAAH and FE models will evolve towards a more accurate assessment of threshold shifts and hearing loss under a variety of stimulus conditions and pathologies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Analysis of 34S in Individual Organic Compounds by Coupled GC-ICP-MS
NASA Astrophysics Data System (ADS)
Sessions, A. L.; Amrani, A.; Adkins, J. F.
2009-12-01
The abundances of 2H, 13C, and 15N in organic compounds have been extremely useful in many aspects of biogeochemistry. While sulfur plays an equally important role in many earth-surface processes, the isotopes of sulfur in organic matter have not been extensively employed in large part because there has been no direct route to the analysis of 34S in individual organic compounds. To remedy this, we have developed a highly sensitive and robust method for the analysis of 34S in individual organic compounds by coupled gas chromatography (GC) and multicollector inductively-coupled plasma mass spectrometry (ICP-MS). Isobaric interference from O2+ is minimized by employing dry plasma conditions, and is cleanly resolved at all masses using medium resolution on the Thermo Neptune ICP-MS. Correction for mass bias is accomplished using standard-sample bracketing with peaks of SF6 reference gas. The precision of measured δ34S values approaches 0.1‰ for analytes containing >40 pmol S, and is better than 0.5‰ for those containing as little as 6 pmol S. External accuracy is better than 0.3‰. Integrating only the center of chromatographic peaks, rather than the entire peak, offers significant gain in precision and chromatographic resolution with minimal effect on accuracy, but requires further study for verification as a routine method. Coelution of organic compounds that do not contain S can cause degraded analytical precision and accuracy. As a demonstration of the potential for this new method, we will present data from 3 sample types: individual organosulfur compounds from crude oil, dimethyl sulfide from seawater, and trace H2S from bacterial culture headspace.
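As a small illustration of the data reduction implied here, delta values in per mil are computed from isotope ratios, and standard-sample bracketing references each analyte peak to the mean of the adjacent SF6 reference-gas peaks to correct for slowly drifting mass bias. The sketch below assumes the conventional delta definition; the ratios shown are hypothetical, not measured values from this work.

```python
def delta34S(R_sample, R_reference):
    """Delta notation in per mil: relative deviation of the 34S/32S ratio."""
    return (R_sample / R_reference - 1.0) * 1000.0

def bracketed_delta(R_sample, R_std_before, R_std_after):
    """Standard-sample bracketing: reference the analyte peak to the mean of
    the reference-gas peaks measured immediately before and after it."""
    return delta34S(R_sample, 0.5 * (R_std_before + R_std_after))

# Illustrative 34S/32S ratios only.
print(f"{bracketed_delta(0.04430, 0.04415, 0.04417):.2f} per mil")
```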
On the role of adhesion in single-file dynamics
NASA Astrophysics Data System (ADS)
Fouad, Ahmed M.; Noel, John A.
2017-08-01
For a one-dimensional interacting system of Brownian particles with hard-core interactions (a single-file model), we study the effect of adhesion on both the collective diffusion (diffusion of the entire system with respect to its center of mass) and the tracer diffusion (diffusion of individual tagged particles). For the case with no adhesion, all properties of these particle systems that are independent of particle labeling (symmetric in all particle coordinates and velocities) are identical to those of non-interacting particles (Lebowitz and Percus, 1967). We confirm this fact in two ways. First, we derive analytical predictions showing that the probability-density functions of single-file (ρ_sf) and ordinary (ρ_ord) diffusion are identical, ρ_sf = ρ_ord, predicting nonanomalous (ordinary) behavior for collective single-file diffusion, where the average second moment with respect to the center of mass, <x(t)^2>, is calculated from ρ for both diffusion processes. Second, for single-file diffusion, we show, both analytically and through large-scale simulations, that <x(t)^2> grows linearly with time, confirming the nonanomalous behavior. This nonanomalous collective behavior stands in contrast to the well-known anomalous sub-diffusion of the individual tagged particles (Harris, 1965). We introduce adhesion to single-file dynamics as a second inter-particle interaction rule and, interestingly, we show that adding adhesion reduces the magnitudes of both <x(t)^2> and the mean square displacement per particle, Δx^2, but the diffusive behavior remains intact, independent of adhesion, in both cases. Moreover, we study the dependence of both the collective diffusion constant D and the tracer diffusion constant D_T on the adhesion coefficient α.
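As a rough illustration of the tracer-versus-ordinary contrast (without the adhesion rule studied here), the sketch below uses the standard relabeling trick for hard-core particles; all parameters are invented and this is not the authors' simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, dt, D = 501, 4000, 1.0, 0.5
sigma = np.sqrt(2.0 * D * dt)

# Relabeling trick: identical hard-core particles merely exchange labels on
# contact, so sorting the positions of independent Brownian walkers at every
# step reproduces the trajectory of a tagged particle in the single-file system.
x = np.sort(rng.uniform(0.0, 5.0 * n_particles, n_particles))
x_free = x.copy()                     # ordinary (non-interacting) reference
mid = n_particles // 2
x0_tag, x0_free = x[mid], x_free[mid]

tracer_sq = np.empty(n_steps)         # tagged-particle squared displacement
ordinary_sq = np.empty(n_steps)       # free-particle squared displacement
for t in range(n_steps):
    kicks = rng.normal(0.0, sigma, n_particles)
    x = np.sort(x + kicks)            # single file: ordering is preserved
    x_free = x_free + kicks           # ordinary diffusion: no interaction
    tracer_sq[t] = (x[mid] - x0_tag) ** 2
    ordinary_sq[t] = (x_free[mid] - x0_free) ** 2

# Averaged over many independent runs, tracer_sq grows ~ sqrt(t) (anomalous,
# Harris 1965) while ordinary_sq grows ~ t (ordinary diffusion).
print(tracer_sq[-1], ordinary_sq[-1])
```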
Reversible entrapment of plasmid deoxyribonucleic acid on different chromatographic supports.
Gabor, Boštjan; Černigoj, Urh; Barut, Miloš; Štrancar, Aleš
2013-10-11
HPLC-based analytical assays are a powerful technique for efficiently monitoring plasmid DNA (pDNA) purity and quantity throughout the entire purification process. Anion exchange monolithic and non-porous particle based stationary phases were used to study the recovery of the different pDNA isoforms from the analytical column. Three differently sized pDNA molecules of 3.0 kbp, 5.2 kbp and 14.0 kbp were used. Plasmid DNA was injected onto the columns under binding conditions, and the separation of the isoforms took place by increasing the ionic strength of the elution buffer. While there was no substantial decrease in the recovery of the supercoiled and linear isoforms of the pDNA with increasing plasmid size or flow rate (recoveries in all cases larger than 75%), a pronounced decrease in the recovery of the open circular (oc) isoform was observed. The entrapment of the oc pDNA isoform occurred under non-binding conditions as well. Partial elution of the oc isoform from the column could be achieved by decreasing the flow rate of the elution mobile phase. The results suggested a reversible entrapment of the oc isoform in the restrictions within the pores of the monolithic material as well as within the inter-particle space of the non-porous particles. This phenomenon was observed for both stationary phase morphologies and could only be connected to the size of the void space through which the pDNA needs to migrate. Reversible pDNA entrapment could be successfully predicted by calculating the Peclet number, Pe, defined as the ratio of convective to diffusive mass transport. Copyright © 2013 Elsevier B.V. All rights reserved.
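As a quick back-of-the-envelope illustration, the Peclet number can be estimated as below; the velocity, channel size, and plasmid diffusivity are hypothetical order-of-magnitude values, not those reported in the study.

```python
def peclet(velocity_m_s, length_m, diffusivity_m2_s):
    """Pe = convective transport rate / diffusive transport rate = u * L / D."""
    return velocity_m_s * length_m / diffusivity_m2_s

# Illustrative values: ~1 mm/s interstitial velocity, ~1.5 um channel, and a
# diffusion coefficient of order 1e-12 m^2/s for a supercoiled plasmid.
print(f"Pe = {peclet(1e-3, 1.5e-6, 1e-12):.0f}")
```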
Merging OLTP and OLAP - Back to the Future
NASA Astrophysics Data System (ADS)
Lehner, Wolfgang
When the terms "Data Warehousing" and "Online Analytical Processing" were coined in the 1990s by Kimball, Codd, and others, there was an obvious need to separate data and workload for operational, transactional-style processing from decision-making involving complex analytical queries over large and historical data sets. Large data warehouse infrastructures have been set up to cope with the special requirements of analytical query answering for multiple reasons. For example, analytical thinking relies heavily on predefined navigation paths to guide the user through the data set and to provide different views at different aggregation levels. Multi-dimensional queries exploiting hierarchically structured dimensions lead to complex star queries at a relational backend, which could hardly be handled by classical relational systems.
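To make the idea of navigating aggregation levels concrete, here is a toy sketch of a drill-down and roll-up over a small fact table; the table, dimensions, and figures are invented for illustration and do not come from the talk.

```python
import pandas as pd

# Toy fact table with a hierarchically structured time dimension
# (year > quarter) and a product dimension; values are made up.
sales = pd.DataFrame({
    "year":    [2022, 2022, 2022, 2023, 2023, 2023],
    "quarter": ["Q1", "Q1", "Q2", "Q1", "Q2", "Q2"],
    "product": ["A", "B", "A", "A", "B", "B"],
    "revenue": [100, 150, 120, 130, 160, 170],
})

# Drill-down to (year, quarter, product) versus roll-up to (year) only --
# the kind of multi-dimensional view a star-schema warehouse answers with
# star joins on a relational backend.
detail = sales.groupby(["year", "quarter", "product"])["revenue"].sum()
rollup = sales.groupby("year")["revenue"].sum()
print(detail, rollup, sep="\n\n")
```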
Design and Analysis of a Preconcentrator for the ChemLab
DOE Office of Scientific and Technical Information (OSTI.GOV)
WONG,CHUNGNIN C.; FLEMMING,JEB H.; MANGINELL,RONALD P.
2000-07-17
Preconcentration is a critical analytical procedure when designing a microsystem for trace chemical detection, because it can purify a sample mixture and boost a small analyte concentration to a much higher level, allowing better analysis. This paper describes the development of a micro-fabricated planar preconcentrator for the µChemLab™ at Sandia. To guide the design, an analytical model has been developed to predict the analyte transport, adsorption and desorption processes in the preconcentrator. Experiments have also been conducted to analyze the adsorption and desorption processes and to validate the model. This combined effort of modeling, simulation, and testing has led us to build a reliable, efficient preconcentrator with good performance.
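For intuition only, a minimal one-site adsorption/desorption model with a thermal desorption step is sketched below; it is not the Sandia analytical model, and every rate constant and the heating time are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-site sorption: dq/dt = k_ads * c * (1 - q/q_max) - k_des(T) * q, with a
# temperature jump at t_heat to mimic the thermal desorption pulse.
k_ads, q_max, c_in = 5.0, 1.0, 0.02        # illustrative rate, capacity, feed
k_des_cold, k_des_hot, t_heat = 0.01, 50.0, 60.0

def rhs(t, y):
    q = y[0]
    k_des = k_des_hot if t >= t_heat else k_des_cold
    return [k_ads * c_in * (1.0 - q / q_max) - k_des * q]

sol = solve_ivp(rhs, (0.0, 65.0), [0.0], max_step=0.05)
q = sol.y[0]
print(f"sorbent loading just before heating: {q[sol.t < t_heat][-1]:.3f}")
print(f"sorbent loading after desorption pulse: {q[-1]:.5f}")
```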
Merel, Sylvain; Anumol, Tarun; Park, Minkyu; Snyder, Shane A
2015-01-23
In response to water scarcity, strategies relying on multiple processes to turn wastewater effluent into potable water are being increasingly considered by many cities. In this context, the occurrence of contaminants, as well as their fate during treatment processes, is a major concern. Three analytical approaches were used to characterize the efficacy of UV and UV/H2O2 processes on a secondary wastewater effluent. The first analytical approach assessed bulk organic parameters or surrogates before and after treatment, while the second measured the removal of specific indicator compounds. Sixteen trace organic contaminants were selected due to their relatively high concentrations and detection frequencies over eight monitoring campaigns. While their removal rates range from approximately 10% to >90%, some of these compounds can be used to gauge process efficacy (or failure). The third analytical approach assessed the fate of unknown contaminants through high-resolution time-of-flight (TOF) mass spectrometry with advanced data processing and demonstrated the occurrence of several thousand organic compounds in the water. A heat map clearly identified compounds as either recalcitrant or transformed by the UV processes applied. In addition, chemicals with a similar fate were grouped into clusters to identify new indicator compounds. In this manuscript, each approach is evaluated and its advantages and disadvantages are compared. Copyright © 2014 Elsevier B.V. All rights reserved.
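As a rough sketch of the clustering step, compounds with similar removal profiles can be grouped hierarchically as below; the compound names and removal percentages are illustrative placeholders, not the study's measurements.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical per-compound removal (%) under UV alone and UV/H2O2.
compounds = ["sucralose", "carbamazepine", "iohexol", "DEET", "caffeine"]
removal = np.array([
    [ 5.0, 12.0],   # recalcitrant under both processes
    [10.0, 85.0],   # transformed mainly by UV/H2O2
    [70.0, 95.0],
    [15.0, 80.0],
    [ 8.0, 10.0],
])

# Group compounds with similar fate (as in a heat-map clustering) so that one
# member of each cluster can serve as an indicator for the others.
Z = linkage(removal, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")
for name, lab in zip(compounds, labels):
    print(f"{name}: cluster {lab}")
```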
Expectation, information processing, and subjective duration.
Simchy-Gross, Rhimmon; Margulis, Elizabeth Hellmuth
2018-01-01
In research on psychological time, it is important to examine the subjective duration of entire stimulus sequences, such as those produced by music (Teki, Frontiers in Neuroscience, 10, 2016). Yet research on the temporal oddball illusion (according to which oddball stimuli seem longer than standard stimuli of the same duration) has examined only the subjective duration of single events contained within sequences, not the subjective duration of the sequences themselves. Does the finding that oddballs seem longer than standards translate to entire sequences, such that sequences containing oddballs seem longer than those that do not? Is this potential translation influenced by the mode of information processing, that is, whether people are engaged in direct or indirect temporal processing? Two experiments aimed to answer both questions using different manipulations of information processing. In both experiments, musical sequences either did or did not contain oddballs (auditory sliding tones). To manipulate information processing, we varied the task (Experiment 1), the sequence event structure (Experiments 1 and 2), and the sequence familiarity (Experiment 2) independently within subjects. Overall, in both experiments, the sequences that contained oddballs seemed shorter than those that did not when people were engaged in direct temporal processing, but longer when people were engaged in indirect temporal processing. These findings support the dual-process contingency model of time estimation (Zakay, Attention, Perception & Psychophysics, 54, 656-664, 1993). Theoretical implications for attention-based and memory-based models of time estimation, the pacemaker-accumulator and coding efficiency hypotheses of time perception, and dynamic attending theory are discussed.
Developmental changes in analytic and holistic processes in face perception
Joseph, Jane E.; DiBartolo, Michelle D.; Bhatt, Ramesh S.
2015-01-01
Although infants demonstrate sensitivity to some kinds of perceptual information in faces, many face capacities continue to develop throughout childhood. One debate is the degree to which children perceive faces analytically versus holistically and how these processes undergo developmental change. In the present study, school-aged children and adults performed a perceptual matching task with upright and inverted face and house pairs that varied in similarity of featural or 2nd order configural information. Holistic processing was operationalized as the degree of serial processing when discriminating faces and houses [i.e., increased reaction time (RT), as more features or spacing relations were shared between stimuli]. Analytical processing was operationalized as the degree of parallel processing (or no change in RT as a function of greater similarity of features or spatial relations). Adults showed the most evidence for holistic processing (most strongly for 2nd order faces) and holistic processing was weaker for inverted faces and houses. Younger children (6–8 years), in contrast, showed analytical processing across all experimental manipulations. Older children (9–11 years) showed an intermediate pattern with a trend toward holistic processing of 2nd order faces like adults, but parallel processing in other experimental conditions like younger children. These findings indicate that holistic face representations emerge around 10 years of age. In adults both 2nd order and featural information are incorporated into holistic representations, whereas older children only incorporate 2nd order information. Holistic processing was not evident in younger children. Hence, the development of holistic face representations relies on 2nd order processing initially then incorporates featural information by adulthood. PMID:26300838
Big-BOE: Fusing Spanish Official Gazette with Big Data Technology.
Basanta-Val, Pablo; Sánchez-Fernández, Luis
2018-06-01
The proliferation of new data sources stemming from the adoption of open-data schemes, in combination with increasing computing capacity, has given rise to a new type of analytics that processes Internet of Things data with low-cost engines, using parallel computing to speed up data processing. In this context, the article presents an initiative, called Big-BOE, designed to process the Spanish official government gazette (Boletín Oficial del Estado, BOE) with state-of-the-art processing engines, to reduce computation time and to offer additional speed-up for big data analysts. The goal of including a big data infrastructure is to be able to process different BOE documents in parallel with specific analytics, in order to search for several issues across different documents. The processing engine of the application infrastructure is described from an architectural perspective and in terms of performance, showing how this type of infrastructure improves the performance of different types of simple analytics as several machines cooperate.
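To give a flavor of the parallel-analytics idea (not the Big-BOE implementation itself), the sketch below fans a trivial keyword-counting analytic out over documents with Python's process pool; the documents and search terms are placeholders.

```python
import re
from concurrent.futures import ProcessPoolExecutor

# Placeholder search terms for a gazette-style corpus.
TERMS = ("subvención", "contrato", "licitación")

def analyze(doc: str) -> dict:
    """Count occurrences of each search term in one document."""
    low = doc.lower()
    return {t: len(re.findall(re.escape(t), low)) for t in TERMS}

if __name__ == "__main__":
    documents = [
        "Resolución por la que se convoca una subvención ...",
        "Anuncio de licitación del contrato de servicios ...",
    ]
    # Each document is analyzed in a separate worker process.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(analyze, documents))
    for doc_id, counts in enumerate(results):
        print(doc_id, counts)
```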
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1993-01-01
Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
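As a concrete example of the kind of prediction such models make, the Keystroke-Level Model (a simplified member of the GOMS family) estimates expert task time by summing published operator times; the sketch below uses typical textbook values and a hypothetical task, and is not tied to any particular interface discussed here.

```python
# Keystroke-Level Model operator times in seconds (typical published values):
# K = keystroke, P = point with mouse, B = button click, H = homing between
# keyboard and mouse, M = mental preparation.
OPERATORS = {"K": 0.28, "P": 1.10, "B": 0.10, "H": 0.40, "M": 1.35}

def klm_time(sequence: str) -> float:
    """Estimate expert task-completion time from a string of KLM operators."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical task: move hand to mouse, think, point at a field, click,
# move hand back to the keyboard, think, then type a 6-character code.
print(f"{klm_time('HMPBHM' + 'K' * 6):.2f} s")
```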