Néri-Quiroz, José; Canto, Fabrice; Guillerme, Laurent; Couston, Laurent; Magnaldo, Alastair; Dugas, Vincent
2016-10-01
A miniaturized and automated approach for the determination of free acidity in solutions containing uranium(VI) is presented. The measurement technique is based on the concept of sequential injection analysis with on-line spectroscopic detection. The proposed methodology relies on the complexation and alkalimetric titration of nitric acid using a pH 5.6 sodium oxalate solution. The titration is followed by UV/VIS detection at 650 nm thanks to the addition of Congo red as a universal pH indicator. The mixing sequence as well as the method validity was investigated by numerical simulation. This new analytical design allows fast (2.3 min), reliable and accurate free acidity determination of low-volume samples (10 µL) containing a uranium/[H(+)] mole ratio of 1:3, with a relative standard deviation of <7.0% (n=11). The linearity of the free nitric acid measurement is excellent up to 2.77 mol L(-1), with a correlation coefficient (R(2)) of 0.995. The method is specific: the presence of actinide ions up to 0.54 mol L(-1) does not interfere with the determination of free nitric acid. In addition to automation, the developed sequential injection analysis method greatly improves on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousandfold, the nuclear waste per analysis forty-fold, and the analysis time eightfold. These analytical parameters are especially important in nuclear-related applications to improve laboratory safety, reduce personnel exposure to radioactive samples, and drastically reduce the environmental impact of analytical radioactive waste. Copyright © 2016 Elsevier B.V. All rights reserved.
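A minimal sketch of how the free acidity might be recovered from such a titration endpoint, assuming a simple acid-base stoichiometry; the stoichiometric factor, titrant concentration and endpoint volume below are illustrative placeholders, not values from the paper:

```python
# Minimal sketch: free acidity from an alkalimetric titration endpoint.
# The stoichiometric factor and the example numbers are assumptions for
# illustration only; they are not taken from the cited method.
def free_acidity(v_endpoint_ml, c_titrant_mol_l, v_sample_ul, stoich=2.0):
    """Free [H+] (mol/L) from the titrant volume at the endpoint.

    stoich: moles of H+ neutralized per mole of titrant base
    (2 would apply to a dibasic species such as oxalate).
    """
    n_h_mol = stoich * c_titrant_mol_l * (v_endpoint_ml * 1e-3)
    return n_h_mol / (v_sample_ul * 1e-6)

# Hypothetical endpoint: 0.070 mL of 0.2 mol/L titrant for a 10 µL sample
print(free_acidity(0.070, 0.2, 10.0))  # ≈ 2.8 mol/L, near the reported linearity limit
```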
Rapid Radiochemical Methods for Selected Radionuclides
The rapid methods documents are supplemental guidance in a planned series designed to support radioanalytical laboratory personnel, Incident Commanders (and their designees), and other field response personnel.
MARLAP is being developed as a multi-agency guidance manual for project managers and radioanalytical laboratories. The document uses a performance-based approach and will provide guidance and a framework to assure that laboratory radioanalytical data meets the specific project or...
Lv, Kai; Yang, Chu-Ting; Han, Jun; Hu, Sheng; Wang, Xiao-Lin
2017-06-30
Combining the merits of soft-templating and perchlorate oxidation methods, this first-case investigation of niobium alkylphosphonates has uncovered their unique morphology, backbone composition, thermal behavior and huge potential as radioanalytical separation materials. These hierarchically porous solids are random aggregates of densely stacked nanolayers perforated with worm-like holes or vesicular voids, manifesting a massif- or tower-like "polymer brush" elevated up to ∼150 nm, driven by the minimal surface free energy principle. These coordination polymers consist of distorted niobium(V) ions strongly linked with tetrahedral alkylphosphonate building units, exposing uncoordinated phosphonate moieties and defective metal sites. Despite their amorphous features, they demonstrate multimodal porosity covering continuous micropores, segregated mesopores and fractional macropores, beneficial for sequestration at active Lewis acid-base centers. Evidenced by maximum distribution coefficients for thorium and lanthanides reaching 9.0×10⁴ and 9.5×10⁴ mL g⁻¹ and large separation factors in a 20-element cocktail at pH ≤ 1, this category of niobium alkylphosphonates is capable of harvesting thorium and lanthanides directly from the radionuclide surrogate, comparable to or even surpassing the performance of their metal(IV) arylphosphonate counterparts. They also display an appreciable SF(Eu/Sm) of ∼20 in 1 mol L⁻¹ HNO₃, shedding light on dual approaches to achieve the isolation of americium from curium. A combinatorial radioanalytical separation protocol has been proposed to enrich thorium and europium, revealing facile utilization of these highly stable, phosphonated hybrids in the sustainable development of radioanalytical separation. Copyright © 2017 Elsevier B.V. All rights reserved.
Currently there are no standard radioanalytical methods applicable to the initial phase of a radiological emergency, for the early identification and quantification of alpha emitting radionuclides. Of particular interest are determinations of the presence and concentration of is...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rim, Jung Ho; Tandon, Lav
This report is a summary of the projects Jung Rim is working on as a DHS postdoctoral fellow at Los Alamos National Laboratory. These research projects are designed to explore different radioanalytical methods to support nuclear forensics applications. The current projects discussed here include the development of an alpha spectroscopy method for 240Pu/239Pu isotopic ratio measurement, a non-destructive uranium assay method using gamma spectroscopy, and 236U non-destructive uranium analysis using the FRAM code. This report documents the work that has been performed since the start of the postdoctoral appointment.
Internal dosimetry monitoring equipment: Present and future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selby, J.; Carbaugh, E.H.; Lynch, T.P.
1993-09-01
We have attempted to characterize the current and future status of in vivo and in vitro measurement programs coupled with the associated radioanalytical methods and workplace monitoring. Developments in these areas must be carefully integrated by internal dosimetrists, radiochemists and field health physicists. Their goal should be uniform improvement rather than to focus on one specific area (e.g., dose modeling) to the neglect of other areas where the measurement capabilities are substantially less sophisticated and, therefore, the potential source of error is greatest.
NASA Astrophysics Data System (ADS)
Singare, P. U.
2014-07-01
A radioanalytical technique using 131I and 82Br was employed to evaluate the organic-based anion exchange resins Tulsion A-30 and Indion-930A. The evaluation was based on the performance of these resins during iodide and bromide ion-isotopic exchange reactions. It was observed that for the iodide ion-isotopic exchange reaction using Tulsion A-30 resin, the values of the specific reaction rate (min⁻¹), amount of iodide ion exchanged (mmol), initial rate of iodide ion exchange (mmol/min) and log Kd were 0.238, 0.477, 0.114, and 11.0, respectively, which were higher than the corresponding values of 0.155, 0.360, 0.056, and 7.3 obtained using Indion-930A resin under identical experimental conditions of 40.0°C, 1.000 g of ion exchange resin and 0.003 M labeled iodide ion solution. Also, at a constant temperature of 40.0°C, as the concentration of the labeled iodide ion solution increases from 0.001 to 0.004 M, the percentage of iodide ions exchanged increases from 59.0 to 65.1% for Tulsion A-30 resin and from 46.4 to 48.8% for Indion-930A resin under identical experimental conditions. The same trend was observed for both resins during the bromide ion-isotopic exchange reactions. The overall results indicate that, under identical experimental conditions, Tulsion A-30 shows superior performance over Indion-930A resin. The results of the present experimental work demonstrate that the radioanalytical technique used here can be successfully applied to the characterization of different ion exchange resins so as to evaluate their performance under various process parameters.
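As a rough illustration of the quantities reported above, the sketch below applies a generic first-order treatment of isotopic exchange to radiotracer count data; the equations and numbers are assumptions for illustration and are not necessarily those used in the cited work:

```python
import math

# Generic first-order isotopic-exchange bookkeeping (illustrative assumption,
# not the exact formulation of the cited study).
def exchange_metrics(activity_initial, activity_at_t, t_min, n_total_mmol):
    """activity_*: solution activity (counts/min) before and after contact time t_min.
    n_total_mmol: total exchangeable halide ion initially in solution (mmol)."""
    frac = (activity_initial - activity_at_t) / activity_initial  # fraction taken up by resin
    k = -math.log(1.0 - frac) / t_min                              # specific reaction rate, min^-1
    return 100.0 * frac, k, frac * n_total_mmol                    # % exchanged, rate, mmol exchanged

# Hypothetical counts for a 0.003 M, 250 mL labeled iodide solution (0.75 mmol iodide)
print(exchange_metrics(12000.0, 7100.0, 25.0, 0.75))
```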
Determination of chemical forms of 14C in liquid discharges from nuclear power plants.
Svetlik, I; Fejgl, M; Povinec, P P; Kořínková, T; Tomášková, L; Pospíchal, J; Kurfiřt, M; Striegler, R; Kaufmanová, M
2017-10-01
Radioanalytical methods have been developed for the determination of radiocarbon in wastewaters from nuclear power plants (NPPs) with pressurized light water reactors, which distinguish between the dissolved organic and inorganic forms. After preliminary tests, the method was used to process pilot samples from wastewater outlets of the Temelín and Dukovany NPPs (Czech Republic). The results of analysis of pilot water samples collected in 2015 indicate that the instantaneous ¹⁴C releases into the water streams would be about 7×10⁻⁵ (Temelín) and 4×10⁻⁶ (Dukovany) of the total quantity of ¹⁴C liberated into the environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
RADIOANALYTICAL AND MIXED WASTE ANALYTICAL SUPPORT FOR STATES, REGIONS, AND OTHER FEDERAL AGENCIES
Provide technical advice and support to Regions and other Federal Agencies on types of analyses, proper sampling, preservation, shipping procedures, and detection limits for samples for radionuclides and stable metals. Provide in-house data review and validation to ensure the qua...
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, Matthew J.; Addleman, R. Shane
Radioactive contamination in the environment, be it from accidental or intentional release, can create an urgent need to assess water and food supplies, the environment, and monitor human health. Alpha-emitting radionuclides represent the most ionizing, and therefore the most damaging, form of radiation when internalized. Additionally, because of its ease of energy attenuation in solids or liquids, alpha emissions cannot be reliably monitored using non-destructive means. In the event of such an emergency, rapid and efficient methods will be needed to screen scores of samples (food, water, and human excreta) within a short time window. Unfortunately, the assay of alpha-emitting radionuclides using traditional radioanalytical methods is typically labor intensive and time consuming. The creation of analytical counting sources typically requires a series of chemical treatment steps to achieve well-performing counting sources. In an effort to devise radioanalytical methods that are fast, require little labor, and minimize the use of toxic or corrosive agents, researchers at PNNL have evaluated magnetite (Fe3O4) nanoparticles as extracting agents for alpha-emitting radionuclides from chemically unmodified aqueous systems. It is demonstrated that bare magnetic nanoparticles exhibit high affinity for representative α-emitting radionuclides (241Am and 210Po) from representative aqueous matrices: river and ground water. Furthermore, use of the magnetic properties of these materials to concentrate the sorbed analyte from the bulk aqueous solution has been demonstrated. The nanoparticle concentrate can be either directly dispensed into scintillation cocktail, or first dissolved and then added to scintillation cocktail as a solution for alpha emission assay by liquid scintillation analysis. Despite the extreme quench caused by the metal oxide suspensions, the authors have demonstrated that quench correction features available on modern liquid scintillation analyzers can be employed to assure that quench-induced analytical biases can be avoided.
The document describes the likely analytical decision paths that would be made by personnel at a radioanalytical laboratory following a radiological or nuclear incident, such as that caused by a terrorist attack. EPA’s responsibilities, as outlined in the National Response Frame...
Nour, Svetlana; LaRosa, Jerry; Inn, Kenneth G W
2011-08-01
The present challenge for the international emergency radiobioassay community is to analyze contaminated samples rapidly while maintaining high quality results. The National Institute of Standards and Technology (NIST) runs a radiobioassay measurement traceability testing program to evaluate the radioanalytical capabilities of participating laboratories. The NIST Radiochemistry Intercomparison Program (NRIP) started more than 10 years ago, and emergency performance testing was added to the program seven years ago. Radiobioassay turnaround times under the NRIP program for routine production and under emergency response scenarios are 60 d and 8 h, respectively. Because measurement accuracy and sample turnaround time are very critical in a radiological emergency, response laboratories' analytical systems are best evaluated and improved through traceable Performance Testing (PT) programs. The NRIP provides participant laboratories with metrology tools to evaluate their performance and to improve it. The program motivates the laboratories to optimize their methodologies and minimize the turnaround time of their results. Likewise, NIST has to make adjustments and periodical changes in the bioassay test samples in order to challenge the participating laboratories continually. With practice, radioanalytical measurements turnaround time can be reduced to 3-4 h.
Applications of inductively coupled plasma-mass spectrometry in environmental radiochemistry
Grain, J.S.
1996-01-01
The state of the art in ICP-MS is now such that there are few discernible differences between radiochemical and mass spectrometric determinations of long-lived radionuclides. Indeed, ICP-MS may provide better (more sensitive) data for many radionuclides, depending upon how one wishes to define "long-lived." In low-level determinations, sample preparation remains an important part of the analytical procedure, even with ICP-MS, but the speed and isotopic selectivity of the mass spectrometer appear to offer distinct procedural advantages over radiochemical techniques. Therefore, "radioanalytical" ICP-MS applications should continue to grow, especially in the area of radiation protection, but further research (on efficient sample introduction, for example) and method development may be required to get ICP-MS "off the ground" in the geochemical research areas that have traditionally been supported by radiochemistry.
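A back-of-the-envelope calculation illustrates the point about half-life: the number of atoms behind each becquerel of activity grows in proportion to the half-life, so atom-counting (ICP-MS) gains over decay-counting as nuclides get longer-lived. The snippet below is a generic illustration, not drawn from the cited paper:

```python
import math

SECONDS_PER_YEAR = 3.156e7

def atoms_per_bq(half_life_years):
    """Atoms required to sustain 1 Bq: N = A / lambda, with lambda = ln2 / T_half."""
    lam = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    return 1.0 / lam

# The longer the half-life, the more atoms stand behind each decay, favoring ICP-MS.
for nuclide, t_half_y in [("Tc-99", 2.1e5), ("Np-237", 2.1e6), ("U-238", 4.5e9)]:
    print(f"{nuclide}: {atoms_per_bq(t_half_y):.1e} atoms per Bq")
```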
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parkhurst, MaryAnn; Cheng, Yung-Sung; Kenoyer, Judson L.
2009-03-01
During the Capstone Depleted Uranium (DU) Aerosol Study, aerosols containing depleted uranium were produced inside unventilated armored vehicles (i.e., Abrams tanks and Bradley Fighting Vehicles) by perforation with large-caliber DU penetrators. These aerosols were collected and characterized, and the data were subsequently used to assess human health risks to personnel exposed to DU aerosols. The DU content of each aerosol sample was first quantified by radioanalytical methods, and selected samples, primarily those from the cyclone separator grit chambers, were analyzed radiochemically. Deposition occurred inside the vehicles as particles settled on interior surfaces. Settling rates of uranium from the aerosols were evaluated using filter cassette samples that collected aerosol as total mass over eight sequential time intervals. A moving filter was used to collect aerosol samples over time, particularly within the first minute after the shot. The results demonstrate that the peak uranium concentration in the aerosol occurred in the first 10 s, and the concentration decreased in the Abrams tank shots to about 50% within 1 min and to less than 2% 30 min after perforation. In the Bradley vehicle, the initial (and maximum) uranium concentration was lower than those observed in the Abrams tank and decreased more slowly. Uranium mass concentrations in the aerosols as a function of particle size were evaluated using samples collected in the cyclone samplers, which collected aerosol continuously for 2 h post perforation. The percentages of uranium mass in the cyclone separator stages from the Abrams tank tests ranged from 38% to 72% and, in most cases, varied with particle size, typically with less uranium associated with the smaller particle sizes. Results with the Bradley vehicle ranged from 18% to 29% and were not specifically correlated with particle size.
Parkhurst, Mary Ann; Cheng, Yung Sung; Kenoyer, Judson L; Traub, Richard J
2009-03-01
During the Capstone Depleted Uranium (DU) Aerosol Study, aerosols containing DU were produced inside unventilated armored vehicles (i.e., Abrams tanks and Bradley Fighting Vehicles) by perforation with large-caliber DU penetrators. These aerosols were collected and characterized, and the data were subsequently used to assess human health risks to personnel exposed to DU aerosols. The DU content of each aerosol sample was first quantified by radioanalytical methods, and selected samples, primarily those from the cyclone separator grit chambers, were analyzed radiochemically. Deposition occurred inside the vehicles as particles settled on interior surfaces. Settling rates of uranium from the aerosols were evaluated using filter cassette samples that collected aerosol as total mass over eight sequential time intervals. A moving filter was used to collect aerosol samples over time, particularly within the first minute after a shot. The results demonstrate that the peak uranium concentration in the aerosol occurred in the first 10 s after perforation, and the concentration decreased in the Abrams tank shots to about 50% within 1 min and to less than 2% after 30 min. The initial and maximum uranium concentrations were lower in the Bradley vehicle than those observed in the Abrams tank, and the concentration levels decreased more slowly. Uranium mass concentrations in the aerosols as a function of particle size were evaluated using samples collected in a cyclone sampler, which collected aerosol continuously for 2 h after perforation. The percentages of uranium mass in the cyclone separator stages ranged from 38 to 72% for the Abrams tank with conventional armor. In most cases, it varied with particle size, typically with less uranium associated with the smaller particle sizes. Neither the Abrams tank with DU armor nor the Bradley vehicle results were specifically correlated with particle size and can best be represented by their average uranium mass concentrations of 65 and 24%, respectively.
Status Survey of Bunkers 738A and 825A at Spangdahlem AB, Germany
2015-10-02
Germany. 1. EXECUTIVE SUMMARY: At the request of the United States Air Force Radioisotope Committee Secretariat, the United States Air Force School of...
NASA Astrophysics Data System (ADS)
Byrne, A. R.; Benedik, L.
1999-01-01
Neutron activation analysis (NAA), being essentially an isotopic rather than an elemental method of analysis, is capable of determining a number of important radionuclides of radioecological interest by transformation into another, more easily quantifiable radionuclide. The nuclear characteristics which favour this technique may be summarized in an advantage factor relative to radiometric analysis of the original radioanalyte. Well-known or little-known examples include 235U, 238U, 232Th, 230Th, 129I, 99Tc, 237Np and 231Pa; a number of these are discussed and illustrated in analyses of real samples of environmental and biological origin. In particular, determination of 231Pa by RNAA was performed using both post-irradiation and pre-separation methods. Application of INAA to enable the use of 238U and 232Th as endogenous (internal) radiotracers in alpha-spectrometric analyses of uranium and thorium radioisotopes in radioecological studies is described, also allowing independent data sets to be obtained for quality control.
Varga, Zsolt
2007-03-28
An improved and novel sample preparation method for (241)Am analysis by inductively coupled plasma sector field mass spectrometry has been developed. The procedure involves a selective CaF(2) pre-concentration followed by an extraction chromatographic separation using TRU resin. The achieved absolute detection limit of 0.86 fg (0.11 mBq) is comparable to that of alpha spectrometry (0.1 mBq) and suitable for low-level environmental measurements. Analysis of different kinds of environmental standard reference materials (IAEA-384--Fangataufa lagoon sediment, IAEA-385--Irish Sea sediment and IAEA-308--Mixed seaweed from the Mediterranean Sea) and alpha spectrometry were used to validate the procedure. The chemical recovery of sample preparation ranged between 72 and 94%. The results obtained are in good agreement with reference values and those measured by alpha spectrometry. The proposed method offers a rapid and less labor-intensive possibility for environmental (241)Am analysis than the conventionally applied radioanalytical techniques.
Rapid Radiochemical Methods for Asphalt Paving Material ...
Technical Brief Validated rapid radiochemical methods for alpha and beta emitters in solid matrices that are commonly encountered in urban environments were previously unavailable for public use by responding laboratories. A lack of tested rapid methods would delay the quick determination of contamination levels and the assessment of acceptable site-specific exposure levels. Of special concern are matrices with rough and porous surfaces, which allow the movement of radioactive material deep into the building material making it difficult to detect. This research focuses on methods that address preparation, radiochemical separation, and analysis of asphalt paving materials and asphalt roofing shingles. These matrices, common to outdoor environments, challenge the capability and capacity of very experienced radiochemistry laboratories. Generally, routine sample preparation and dissolution techniques produce liquid samples (representative of the original sample material) that can be processed using available radiochemical methods. The asphalt materials are especially difficult because they do not readily lend themselves to these routine sample preparation and dissolution techniques. The HSRP and ORIA coordinate radiological reference laboratory priorities and activities in conjunction with HSRP’s Partner Process. As part of the collaboration, the HSRP worked with ORIA to publish rapid radioanalytical methods for selected radionuclides in building material matrices
Characterization Results for the January 2017 H-Tank Farm 2H Evaporator Overhead Sample
DOE Office of Scientific and Technical Information (OSTI.GOV)
Truong, T.; Nicholson, J.
2017-04-11
This report contains the radioanalytical results of the 2H evaporator overhead sample received at SRNL on January 19, 2017. Specifically, concentrations of 137Cs, 90Sr, and 129I are reported and compared to the corresponding Waste Acceptance Criteria (WAC) limits of the Effluent Treatment Project (ETP) Waste Water Collection Tank (WWCT) (rev. 6). All of the radionuclide concentrations in the sample were found to be in compliance with the ETP WAC limits.
Characterization Results for the October 2015 H-Tank Farm 3H Evaporator Overhead Sample
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, J. C.
2016-01-28
This report contains the radioanalytical results of the 3H evaporator overhead sample received at SRNL on October 13, 2015. Specifically, concentrations of 137Cs, 90Sr, and 129I are reported and compared to the corresponding Waste Acceptance Criteria (WAC) limits of the Effluent Treatment Project (ETP) Waste Water Collection Tank (WWCT) (rev. 6). All of the radionuclide concentrations in the sample were found to be in compliance with the ETP WAC limits.
Characterization Results for the March 2016 H-Tank Farm 2H Evaporator Overhead Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, J. C.
2016-05-09
This report contains the radioanalytical results of the 2H evaporator overhead sample received at SRNL on March 16, 2016. Specifically, concentrations of 137Cs, 90Sr, and 129I are reported and compared to the corresponding Waste Acceptance Criteria (WAC) limits of the Effluent Treatment Project (ETP) Waste Water Collection Tank (WWCT) (rev. 6). All of the radionuclide concentrations in the sample were found to be in compliance with the ETP WAC limits.
Shim, Ha Eun; Lee, Jae Young; Lee, Chang Heon; Mushtaq, Sajid; Song, Ha Yeon; Song, Lee; Choi, Seong-Jin; Lee, Kyuhong; Jeon, Jongho
2018-05-25
To assess the risk posed by a toxic chemical to human health, it is essential to quantify its uptake in a living subject. This study aims to investigate the biological distribution of inhaled polyhexamethylene guanidine (PHMG) aerosol particles, which are known to cause severe pulmonary damage. By labeling with indium-111 (111In), we quantified the uptake of PHMG for up to 7 days after inhalation exposure in rats. The data demonstrate that PHMG is only slowly cleared, with approximately 74% of inhaled particles persisting in the lungs after 168 h. Approximately 5.3% of inhaled particles were also translocated to the liver after 168 h, although the level of redistribution to other tissues, including the kidneys and spleen, was minimal. These observations suggest that large uptake and slow clearance may underlie the fatal inhalation toxicity of PHMG in humans. Copyright © 2018 Elsevier Ltd. All rights reserved.
A Novel Method for Discovering Fuzzy Sequential Patterns Using the Simple Fuzzy Partition Method.
ERIC Educational Resources Information Center
Chen, Ruey-Shun; Hu, Yi-Chung
2003-01-01
Discusses sequential patterns, data mining, knowledge acquisition, and fuzzy sequential patterns described by natural language. Proposes a fuzzy data mining technique to discover fuzzy sequential patterns by using the simple partition method which allows the linguistic interpretation of each fuzzy set to be easily obtained. (Author/LRW)
A weight modification sequential method for VSC-MTDC power system state estimation
NASA Astrophysics Data System (ADS)
Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng
2017-06-01
This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weight of the state quantity to keep the matrix dimension constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed weight modification sequential method is demonstrated and validated on a modified IEEE 14-bus system.
External Performance Evaluation Program Participation at Fluor Hanford (FH) 222S Lab
DOE Office of Scientific and Technical Information (OSTI.GOV)
CLARK, G.A.
2002-06-01
Fluor Hanford operates the U.S. Department of Energy's (DOE) 222-S Laboratory on the Hanford Site in southeastern Washington State. The 222-S Laboratory recently celebrated its 50th anniversary of providing laboratory services to DOE and DOE contractors on the Hanford Site. The laboratory operated for many years as a production support analytical laboratory, but in the last two decades has supported the Hanford Site cleanup mission. The laboratory performs radioanalytical, inorganic, and organic characterization analyses on highly radioactive liquid and solid tank waste that will eventually be vitrified for long-term storage and/or disposal. It is essential that the laboratory report defensible, highly credible data in its role as a service provider to DOE and DOE contractors. Among other things, participation in a number of performance evaluation (PE) programs helps to ensure the credibility of the laboratory. The laboratory currently participates in Environmental Resource Associates' Water Pollution (WP) Studies and the DOE Environmental Measurements Laboratory (EML) Quality Assessment Program (QAP). DOE has mandated participation of the laboratory in the EML QAP. This EML program evaluates the competence of laboratories performing environmental radioanalytical measurements for DOE, and is the most comprehensive and well-established PE program in the DOE community for radiochemical laboratories. Samples are received and analyzed for radionuclides in air filter, soil, vegetation, and water matrices on a semiannual basis. The 222-S Laboratory has performed well in this program over the years, as evidenced by the scores in the chart below.
Trial Sequential Methods for Meta-Analysis
ERIC Educational Resources Information Center
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
ERIC Educational Resources Information Center
Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse
2015-01-01
The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…
Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping
NASA Technical Reports Server (NTRS)
Leberl, F.
1975-01-01
Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability of failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
Yu, Yang; Zhang, Fan; Gao, Ming-Xin; Li, Hai-Tao; Li, Jing-Xing; Song, Wei; Huang, Xin-Sheng; Gu, Cheng-Xiong
2013-01-01
OBJECTIVES Intraoperative transit time flow measurement (TTFM) is widely used to assess anastomotic quality in coronary artery bypass grafting (CABG). However, in sequential vein grafting, the flow characteristics collected by the conventional TTFM method are usually associated with total graft flow and might not accurately indicate the quality of every distal anastomosis in a sequential graft. The purpose of our study was to examine a new TTFM method that could assess the quality of each distal anastomosis in a sequential graft more reliably than the conventional TTFM approach. METHODS Two TTFM methods were tested in 84 patients who underwent sequential saphenous off-pump CABG in Beijing An Zhen Hospital between April and August 2012. In the conventional TTFM method, normal blood flow in the sequential graft was maintained during the measurement, and the flow probe was placed a few centimetres above the anastomosis to be evaluated. In the new method, blood flow in the sequential graft was temporarily reduced during the measurement by placing an atraumatic bulldog clamp at the graft a few centimetres distal to the anastomosis to be evaluated, while the position of the flow probe remained the same as in the conventional method. This new TTFM method was named the flow reduction TTFM. Graft flow parameters measured by both methods were compared. RESULTS Compared with the conventional TTFM, the flow reduction TTFM resulted in significantly lower mean graft blood flow (P < 0.05) and, in contrast, a significantly higher pulsatility index (P < 0.05). Diastolic filling was not significantly different between the two methods and was >50% in both cases. Interestingly, the flow reduction TTFM identified two defective middle distal anastomoses that the conventional TTFM failed to detect. Graft flows near the defective distal anastomoses improved substantially after revision. CONCLUSIONS In this study, we found that temporary reduction of graft flow during TTFM seemed to enhance the sensitivity of TTFM to less-than-critical anastomotic defects in a sequential graft and to improve the overall accuracy of the intraoperative assessment of anastomotic quality in sequential vein grafting. PMID:24000314
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
A rapid method for estimation of Pu-isotopes in urine samples using high volume centrifuge.
Kumar, Ranjeet; Rao, D D; Dubla, Rupali; Yadav, J R
2017-07-01
The conventional radio-analytical technique used for estimation of Pu-isotopes in urine samples involves anion exchange/TEVA column separation followed by alpha spectrometry. This sequence of analysis takes nearly 3-4 days to complete. Excreta analysis results are often required urgently, particularly under repeat and incidental/emergency situations. Therefore, there is a need to reduce the analysis time for the estimation of Pu-isotopes in bioassay samples. This paper gives the details of standardization of a rapid method for estimation of Pu-isotopes in urine samples using a multi-purpose centrifuge and TEVA resin followed by alpha spectrometry. The rapid method involves oxidation of urine samples, co-precipitation of plutonium along with calcium phosphate followed by sample preparation using a high volume centrifuge and separation of Pu using TEVA resin. The Pu fraction was electrodeposited and the activity estimated by alpha spectrometry using 236Pu tracer recovery. Ten routine urine samples of radiation workers were analyzed and consistent radiochemical tracer recovery was obtained in the range 47-88%, with a mean and standard deviation of 64.4% and 11.3%, respectively. With this newly standardized technique, the whole analytical procedure is completed within 9 h (one working day). Copyright © 2017 Elsevier Ltd. All rights reserved.
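The chemical recovery reported above is derived from the 236Pu tracer; a minimal sketch of that bookkeeping, with illustrative (not published) counting parameters, might look like:

```python
def chemical_recovery(net_counts_tracer, count_time_s, det_efficiency,
                      tracer_activity_added_bq):
    """Fractional chemical recovery from the 236Pu tracer region of an alpha spectrum.
    All parameter values used here are illustrative, not from the cited procedure."""
    measured_bq = net_counts_tracer / (count_time_s * det_efficiency)
    return measured_bq / tracer_activity_added_bq

# Example: 1,730 net counts in 60,000 s at 25% detection efficiency for a 0.18 Bq spike
print(f"{chemical_recovery(1730, 60000, 0.25, 0.18):.0%}")  # ≈ 64%, mid-range of 47-88%
```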
Delay test generation for synchronous sequential circuits
NASA Astrophysics Data System (ADS)
Devadas, Srinivas
1989-05-01
We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan and synthesis algorithms are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwantes, J. M.; Marsden, O.; Reilly, D.
The Nuclear Forensics International Technical Working Group is a community of nuclear forensic practitioners who respond to incidents involving nuclear and other radioactive material out of regulatory control. The Group is dedicated to advancing nuclear forensic science in part through periodic participation in materials exercises. The Group completed its fourth Collaborative Materials Exercise in 2015, in which laboratories from 15 countries and one multinational organization analyzed three samples of special nuclear material in support of a mock nuclear forensic investigation. This special section of the Journal of Radioanalytical and Nuclear Chemistry is devoted to summarizing highlights from this exercise.
Converter target chemistry - A new challenge to radioanalytical chemistry.
Choudhury, Dibyasree; Lahiri, Susanta
2018-07-01
The 1-2 GeV proton induced spallation reaction on high-Z materials such as Hg or lead bismuth eutectic (LBE), popularly known as converter targets, produces a strong flux of fast neutrons, which in turn react with fissile materials to produce an intense radioactive ion beam (RIB). LBE is more suitable than Hg for use as a converter, but it suffers from the drawback of radiotoxic polonium production. These targets may be viewed as a storehouse of clinically important and other exotic radionuclides. For application of those radionuclides, radiochemical separation from the bulk target material is of utmost importance. Copyright © 2018 Elsevier Ltd. All rights reserved.
Sequential Testing: Basics and Benefits
1978-03-01
TECHNICAL REPORT NO. 12325. Sequential Testing: Basics and Benefits. Contents: I. Introduction and Summary; II. Sequential Analysis; III. Mathematics of Sequential Testing; IV. ... testing. The added benefit of reduced energy needs is inherent in this testing method. The text was originally released by the authors in 1972.
40 CFR 53.34 - Test procedure for methods for PM10 and Class I methods for PM2.5.
Code of Federal Regulations, 2010 CFR
2010-07-01
... simultaneous PM10 or PM2.5 measurements as necessary (see table C-4 of this subpart), each set consisting of...) in appendix A to this subpart). (f) Sequential samplers. For sequential samplers, the sampler shall be configured for the maximum number of sequential samples and shall be set for automatic collection...
Finding False Paths in Sequential Circuits
NASA Astrophysics Data System (ADS)
Matrosova, A. Yu.; Andreeva, V. V.; Chernyshov, S. V.; Rozhkova, S. V.; Kudin, D. V.
2018-02-01
A method for finding false paths in sequential circuits is developed. In contrast with the heuristic approaches currently used abroad, a precise method is suggested, based on applying operations on Reduced Ordered Binary Decision Diagrams (ROBDDs) extracted from the combinational part of a sequential controlling logic circuit. The method allows finding false paths when the transfer sequence length is not more than the given value and obviates the need to investigate combinational circuit equivalents of the given lengths. The possibilities of using the developed method for more complicated circuits are discussed.
SPMBR: a scalable algorithm for mining sequential patterns based on bitmaps
NASA Astrophysics Data System (ADS)
Xu, Xiwei; Zhang, Changhai
2013-12-01
Some current sequential pattern mining algorithms generate too many candidate sequences, which increases the processing cost of support counting. We therefore present an effective and scalable algorithm called SPMBR (Sequential Patterns Mining based on Bitmap Representation) to solve the problem of mining sequential patterns in large databases. Our method differs from previous related work on mining sequential patterns. The main difference is that the database of sequential patterns is represented by bitmaps, and a simplified bitmap structure is presented first. The algorithm generates candidate sequences by SE (Sequence Extension) and IE (Item Extension), and then obtains all frequent sequences by comparing the original bitmap and the extended-item bitmap. This method simplifies the problem of mining sequential patterns and avoids the high processing cost of support counting. Both theory and experiments indicate that the performance of SPMBR is superior for large transaction databases, that much less memory is required for storing temporary data during the mining process, and that all sequential patterns can be mined feasibly.
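A toy example of the bitmap idea: each item maps to a bitmap whose i-th bit records whether the item occurs in the i-th customer sequence, and support counting reduces to a bitwise AND plus a popcount. This sketch shows only the item-extension flavor of support counting; positional bitmaps would be needed for sequence extension, and it is not the authors' implementation:

```python
# Toy illustration of bitmap-based support counting (in the spirit of, but not
# identical to, the SPMBR algorithm).
sequences = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["a", "b"]]

def item_bitmap(item):
    """Bit i is set if the item occurs anywhere in customer sequence i."""
    bits = 0
    for i, seq in enumerate(sequences):
        if item in seq:
            bits |= 1 << i
    return bits

def support(items):
    """Support of an itemset = popcount of the AND of its item bitmaps."""
    bm = ~0
    for it in items:
        bm &= item_bitmap(it)
    return bin(bm & ((1 << len(sequences)) - 1)).count("1")

print(support(["a"]), support(["a", "c"]))  # 3 2
```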
Hajati, Omid; Zarrabi, Khalil; Karimi, Reza; Hajati, Azadeh
2012-01-01
There is still controversy over the differences in the patency rates of the sequential and individual coronary artery bypass grafting (CABG) techniques. The purpose of this paper was to non-invasively evaluate hemodynamic parameters using complete 3D computational fluid dynamics (CFD) simulations of the sequential and the individual methods based on the patient-specific data extracted from computed tomography (CT) angiography. For CFD analysis, the geometric model of coronary arteries was reconstructed using an ECG-gated 64-detector row CT. Modeling the sequential and individual bypass grafting, this study simulates the flow from the aorta to the occluded posterior descending artery (PDA) and the posterior left ventricle (PLV) vessel with six coronary branches based on the physiologically measured inlet flow as the boundary condition. The maximum calculated wall shear stress (WSS) in the sequential and the individual models were estimated to be 35.1 N/m(2) and 36.5 N/m(2), respectively. Compared to the individual bypass method, the sequential graft has shown a higher velocity at the proximal segment and lower spatial wall shear stress gradient (SWSSG) due to the flow splitting caused by the side-to-side anastomosis. Simulated results combined with its surgical benefits including the requirement of shorter vein length and fewer anastomoses advocate the sequential method as a more favorable CABG method.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Cost-benefit analysis of sequential warning lights in nighttime work zone tapers.
DOT National Transportation Integrated Search
2011-06-01
Improving safety at nighttime work zones is important because of the extra visibility concerns. The deployment of sequential lights is an innovative method for improving driver recognition of lane closures and work zone tapers. Sequential lights are ...
Ion Mobility Spectrometer Field Test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Nicholas; McLain, Derek; Steeb, Jennifer
The Morpho Saffran Itemizer 4DX Ion Mobility Spectrometer previously used to detect uranium signatures in FY16 was used at the former New Brunswick Facility, a past uranium facility located on site at Argonne National Laboratory. This facility was chosen in an attempt to detect safeguards-relevant signatures and has a history of processing uranium at various enrichments, chemical forms, and purities; various chemicals such as nitric acid, uranium fluorides, phosphates and metals are present at various levels. Several laboratories were sampled for signatures of nuclear activities around the laboratory. All of the surfaces that were surveyed were below the background levels of the radioanalytical instrumentation and determined to be radiologically clean.
Test pattern generation for ILA sequential circuits
NASA Technical Reports Server (NTRS)
Feng, YU; Frenzel, James F.; Maki, Gary K.
1993-01-01
An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.
Landsat-4 (TDRSS-user) orbit determination using batch least-squares and sequential methods
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.
1992-01-01
TDRSS user orbit determination is analyzed using a batch least-squares method and a sequential estimation method. It was found that in the batch least-squares method analysis, the orbit determination consistency for Landsat-4, which was heavily tracked by TDRSS during January 1991, was about 4 meters in the rms overlap comparisons and about 6 meters in the maximum position differences in overlap comparisons. The consistency was about 10 to 30 meters in the 3 sigma state error covariance function in the sequential method analysis. As a measure of consistency, the first residual of each pass was within the 3 sigma bound in the residual space.
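A scalar toy problem illustrates the difference in mechanics between the two estimators compared above: the batch least-squares solution processes all measurements at once, while the sequential (recursive) estimator folds them in one at a time. This is only a minimal analogue, not the GTDS orbit determination software or its dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 5.0
meas = truth + 0.3 * rng.standard_normal(50)   # noisy observations of a constant state

# Batch least squares: process all measurements together (here, the sample mean).
x_batch = meas.mean()

# Sequential (recursive) estimate: fold in one measurement at a time.
x_seq, p = 0.0, 1e6          # initial state and (large) initial variance
r = 0.3 ** 2                 # measurement noise variance
for z in meas:
    k = p / (p + r)          # gain
    x_seq += k * (z - x_seq)
    p *= (1 - k)

print(x_batch, x_seq)        # the two estimates agree closely
```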
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potentials to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently attribute newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on design by changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can obtain greater power and/or cost smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
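For intuition about a minimum-variance randomization rate, the classical (non-Bayesian) answer for a two-arm difference-in-means test is Neyman allocation, sketched below; the cited design computes its rate within a Bayesian sequential framework, so this is an analogy rather than the paper's algorithm:

```python
def neyman_allocation(sigma_arm1, sigma_arm2):
    """Classical minimum-variance allocation for a two-arm difference-in-means test:
    assign the next patient to arm 1 with probability sigma1 / (sigma1 + sigma2).
    Shown as an analogy only; the cited Bayesian design derives its own rate."""
    return sigma_arm1 / (sigma_arm1 + sigma_arm2)

# If arm 1's outcome is twice as variable as arm 2's, it should receive ~2/3 of patients.
print(neyman_allocation(2.0, 1.0))  # 0.666...
```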
Evaluation Using Sequential Trials Methods.
ERIC Educational Resources Information Center
Cohen, Mark E.; Ralls, Stephen A.
1986-01-01
Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)
Blocking for Sequential Political Experiments
Moore, Sally A.
2013-01-01
In typical political experiments, researchers randomize a set of households, precincts, or individuals to treatments all at once, and characteristics of all units are known at the time of randomization. However, in many other experiments, subjects “trickle in” to be randomized to treatment conditions, usually via complete randomization. To take advantage of the rich background data that researchers often have (but underutilize) in these experiments, we develop methods that use continuous covariates to assign treatments sequentially. We build on biased coin and minimization procedures for discrete covariates and demonstrate that our methods outperform complete randomization, producing better covariate balance in simulated data. We then describe how we selected and deployed a sequential blocking method in a clinical trial and demonstrate the advantages of our having done so. Further, we show how that method would have performed in two larger sequential political trials. Finally, we compare causal effect estimates from differences in means, augmented inverse propensity weighted estimators, and randomization test inversion. PMID:24143061
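A hedged sketch of one way sequential, covariate-aware assignment can work for units that trickle in: a biased coin prefers the arm that keeps the two arms' running covariate means closest together. The rule and the bias parameter below are illustrative, not the exact procedure deployed in the cited trial:

```python
import random

def other(arm):
    return "control" if arm == "treatment" else "treatment"

def assign(x, arms, p_bias=0.75):
    """Prefer the arm that, after receiving covariate value x, leaves the two arms'
    running covariate means closest together; follow that preference with prob p_bias."""
    def imbalance(arm):
        mine = arms[arm] + [x]
        theirs = arms[other(arm)]
        mean_mine = sum(mine) / len(mine)
        mean_theirs = sum(theirs) / len(theirs) if theirs else mean_mine
        return abs(mean_mine - mean_theirs)
    preferred = min(("treatment", "control"), key=imbalance)
    choice = preferred if random.random() < p_bias else other(preferred)
    arms[choice].append(x)
    return choice

arms = {"treatment": [], "control": []}
for x in [1.2, 0.4, 2.5, 1.9, 0.8, 2.2]:   # covariate values arriving sequentially
    print(x, assign(x, arms))
```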
ERIC Educational Resources Information Center
Ivankova, Nataliya V.
2014-01-01
In spite of recent methodological developments related to quality assurance in mixed methods research, practical examples of how to implement quality criteria in designing and conducting sequential QUAN [right arrow] QUAL mixed methods studies to ensure the process is systematic and rigorous remain scarce. This article discusses a three-step…
Ashley, Kevin; Applegate, Gregory T; Marcy, A Dale; Drake, Pamela L; Pierce, Paul A; Carabin, Nathalie; Demange, Martine
2009-02-01
Because toxicities may differ for Cr(VI) compounds of varying solubility, some countries and organizations have promulgated different occupational exposure limits (OELs) for soluble and insoluble hexavalent chromium (Cr(VI)) compounds, and analytical methods are needed to determine these species in workplace air samples. To address this need, international standard methods ASTM D6832 and ISO 16740 have been published that describe sequential extraction techniques for soluble and insoluble Cr(VI) in samples collected from occupational settings. However, no published performance data were previously available for these Cr(VI) sequential extraction procedures. In this work, the sequential extraction methods outlined in the relevant international standards were investigated. The procedures tested involved the use of either deionized water or an ammonium sulfate/ammonium hydroxide buffer solution to target soluble Cr(VI) species. This was followed by extraction in a sodium carbonate/sodium hydroxide buffer solution to dissolve insoluble Cr(VI) compounds. Three-step sequential extraction with (1) water, (2) sulfate buffer and (3) carbonate buffer was also investigated. Sequential extractions were carried out on spiked samples of soluble, sparingly soluble and insoluble Cr(VI) compounds, and analyses were then generally carried out by using the diphenylcarbazide method. Similar experiments were performed on paint pigment samples and on airborne particulate filter samples collected from stainless steel welding. Potential interferences from soluble and insoluble Cr(III) compounds, as well as from Fe(II), were investigated. Interferences from Cr(III) species were generally absent, while the presence of Fe(II) resulted in low Cr(VI) recoveries. Two-step sequential extraction of spiked samples with (first) either water or sulfate buffer, and then carbonate buffer, yielded quantitative recoveries of soluble Cr(VI) and insoluble Cr(VI), respectively. Three-step sequential extraction gave excessively high recoveries of soluble Cr(VI), low recoveries of sparingly soluble Cr(VI), and quantitative recoveries of insoluble Cr(VI). Experiments on paint pigment samples using two-step extraction with water and carbonate buffer yielded varying percentages of relative fractions of soluble and insoluble Cr(VI). Sequential extractions of stainless steel welding fume air filter samples demonstrated the predominance of soluble Cr(VI) compounds in such samples. The performance data obtained in this work support the Cr(VI) sequential extraction procedures described in the international standards.
NASA Astrophysics Data System (ADS)
Zhang, Xiaoli; Zou, Jie; Le, Daniel X.; Thoma, George
2010-01-01
"Investigator Names" is a newly required field in MEDLINE citations. It consists of personal names listed as members of corporate organizations in an article. Extracting investigator names automatically is necessary because of the increasing volume of articles reporting collaborative biomedical research in which a large number of investigators participate. In this paper, we present an SVM-based stacked sequential learning method in a novel application - recognizing named entities such as the first and last names of investigators from online medical journal articles. Stacked sequential learning is a meta-learning algorithm which can boost any base learner. It exploits contextual information by adding the predicted labels of the surrounding tokens as features. We apply this method to tag words in text paragraphs containing investigator names, and demonstrate that stacked sequential learning improves the performance of a nonsequential base learner such as an SVM classifier.
ERIC Educational Resources Information Center
Lee, Seong-Soo
1982-01-01
Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…
Hattori, Yoshiyuki; Arai, Shohei; Kikuchi, Takuto; Ozaki, Kei-Ichi; Kawano, Kumi; Yonemochi, Etsuo
2016-04-01
Previously, we developed a novel siRNA transfer method to the liver by sequential intravenous injection of anionic polymer and cationic liposome/siRNA complex (cationic lipoplex). In this study, we investigated whether siRNA delivered by this sequential injection could significantly suppress mRNA expression of the targeted gene in liver metastasis and inhibit tumor growth. When cationic lipoplex was intravenously injected into mice bearing liver metastasis of human breast tumor MCF-7 at 1 min after intravenous injection of chondroitin sulfate C (CS) or poly-l-glutamic acid (PGA), siRNA was accumulated in tumor-metastasized liver. In terms of a gene silencing effect, sequential injections of CS or PGA plus cationic lipoplex of luciferase siRNA could reduce luciferase activity in liver MCF-7-Luc metastasis. Regarding the side effects, sequential injections of CS plus cationic lipoplex did not exhibit hepatic damage or induction of inflammatory cytokines in serum after repeated injections, but sequential injections of PGA plus cationic lipoplex did. Finally, sequential injections of CS plus cationic lipoplex of protein kinase N3 siRNA could suppress tumor growth in the mice bearing liver metastasis. From these findings, sequential injection of CS and cationic lipoplex of siRNA might be a novel systemic method of delivering siRNA to liver metastasis.
Environmental impact of fertilizer industries evaluated by PIXE
NASA Astrophysics Data System (ADS)
Martín, J. E.; Bolívar, J. P.; Respaldiza, M. A.; García-Tenorio, R.; da Silva, M. F.
1995-12-01
In this paper the environmental impact of several phosphogypsum piles sited in the southwest of Spain is studied using multielemental analysis by PIXE of 12 salt marsh and soil samples collected in their surroundings. The piles are used to store the main by-product formed in the production of phosphoric acid and phosphate fertilizers. The samples collected were bombarded with 2.5 MeV protons from the 3 MV Van de Graaff accelerator in the ITN at Sacavèm (Portugal), and 20 elements from Al to Pb were detected. The results obtained reinforce previous radioanalytical determinations concerning the significant radioactive contamination pathways (leaching or/and dissolution of elements by water from the piles) and the negligible pathways (atmospheric and direct aquatic transport).
EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.
Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah
2017-12-01
Developing subject-specific classifiers that recognize mental states quickly and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balanced the decision time of each class, and we term it balanced threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed an average maximum accuracy of 83.4% and an average decision time of 2.77 s for the proposed method, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves classification accuracy and decision speed compared with nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
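A minimal sketch of the generic SPRT loop underlying such sequential classifiers is given below: log-likelihood ratios are accumulated sample by sample until an upper or lower threshold is crossed. The Gaussian class-conditional models, threshold values and simulated trial are illustrative assumptions; the BTSPRT threshold-calibration step from the paper is not reproduced.

```python
# Generic SPRT loop: accumulate log-likelihood ratios until a threshold is crossed.
# Gaussian class models, threshold values and the simulated trial are assumptions.
import random

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, lower=-2.2, upper=2.2):
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        # log p(x | class 1) - log p(x | class 0) for equal-variance Gaussians
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return 1, t            # decide class 1 after t samples
        if llr <= lower:
            return 0, t            # decide class 0 after t samples
    return None, len(samples)      # no decision within the trial

if __name__ == "__main__":
    random.seed(1)
    trial = [random.gauss(1.0, 1.0) for _ in range(50)]   # evidence favouring class 1
    print(sprt(trial))
```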
Liu, Rong
2017-01-01
Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than those with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang
2018-03-27
Parameter estimation for sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for the sequential movement events of vehicles based on a small Micro-Electro-Mechanical System (MEMS) microphone array system. Inspired by the incoherent signal-subspace method (ISM), the proposed method employs multiple sub-bands, selected from the wideband signals with high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. The field test results demonstrate that the proposed method performs better at estimating the DOA of a moving vehicle, even in the case of severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.
Zeelenberg, René; Pecher, Diane
2015-03-01
Counterbalanced designs are frequently used in the behavioral sciences. Studies often counterbalance either the order in which conditions are presented in the experiment or the assignment of stimulus materials to conditions. Occasionally, researchers need to simultaneously counterbalance both condition order and stimulus assignment to conditions. Lewis (1989; Behavior Research Methods, Instruments, & Computers 25:414-415, 1993) presented a method for constructing Latin squares that fulfill these requirements. The resulting Latin squares counterbalance immediate sequential effects, but not remote sequential effects. Here, we present a new method for generating Latin squares that simultaneously counterbalance both immediate and remote sequential effects and assignment of stimuli to conditions. An Appendix is provided to facilitate implementation of these Latin square designs.
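As a point of reference for the counterbalancing problem described above, the sketch below constructs a classical balanced (Williams-style) Latin square, which counterbalances immediate sequential effects only; the extension to remote sequential effects and to stimulus assignment presented in the paper is not reproduced, and the 0-based condition labels are an assumption.

```python
# Classical balanced (Williams) Latin square: every condition occupies each serial
# position once and immediately follows every other condition equally often (even n).
# For odd n the square and its reversal are normally both used.  This is the standard
# construction only; remote sequential effects are not counterbalanced here.
def balanced_latin_square(n):
    """Return n condition orders (rows) over conditions labelled 0..n-1."""
    first = [0]
    for j in range(1, n // 2 + 1):
        first.append(j)
        if j != n - j:
            first.append(n - j)
    first = first[:n]                  # first participant's order: 0, 1, n-1, 2, n-2, ...
    return [[(c + k) % n for c in first] for k in range(n)]

if __name__ == "__main__":
    for row in balanced_latin_square(4):
        print(row)
```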
Dantas, B M; Dantas, A L A; Acar, M E D; Cardoso, J C S; Julião, L M Q C; Lima, M F; Taddei, M H T; Arine, D R; Alonso, T; Ramos, M A P; Fajgelj, A
2011-03-01
In recent years, Brazilian Nuclear Programme has been reviewed and updated by government authorities in face of the demand for energy supply and its associated environmental constraints. The immediate impact of new national programmes and projects in nuclear field is the increase in the number of exposed personnel and the consequent need for reliable dosimetry services in the country. Several Technical Documents related to internal dosimetry have been released by the International Atomic Energy Agency and International Commission on Radiological Protection. However, standard bioassay procedures and methodologies for bioassay data interpretation are still under discussion and, in some cases, both in routine and emergency internal monitoring, procedures can vary from one laboratory to another and responses may differ markedly among Dosimetry Laboratories. Thus, it may be difficult to interpret and use bioassay data generated from different laboratories of a network. The main goal of this work is to implement a National Network of Laboratories aimed to provide reliable internal monitoring services in Brazil. The establishment of harmonised in vivo and in vitro radioanalytical techniques, dose assessment methods and the implementation of the ISO/IEC 17025 requirements will result in the recognition of technical competence of the network.
Analyses of group sequential clinical trials.
Koepcke, W
1989-12-01
In the first part of this article the methodology of group sequential plans is reviewed. After introducing the basic definition of such plans the main properties are shown. At the end of this section three different plans (Pocock, O'Brien-Fleming, Koepcke) are compared. In the second part of the article some unresolved issues and recent developments in the application of group sequential methods to long-term controlled clinical trials are discussed. These include deviation from the assumptions, life table methods, multiple-arm clinical trials, multiple outcome measures, and confidence intervals.
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than the traditional Bayesian sequential design, which sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is least likely to be rejected at the early stages of a trial. Finally, we show that adding a stopping-for-futility step to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
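For orientation, the sketch below evaluates the two spending functions most often contrasted in this setting, computing the cumulative type I error allocated by an O'Brien-Fleming-type and a Pocock-type (Lan-DeMets) spending function at a set of interim looks. The equally spaced information fractions and the 0.05 overall alpha are illustrative assumptions.

```python
# Two common Lan-DeMets alpha spending functions: cumulative type I error spent by
# information fraction t.  The equally spaced looks and overall alpha are assumptions.
from math import sqrt, log, e
from scipy.stats import norm

def obrien_fleming_spending(t, alpha=0.05):
    """O'Brien-Fleming-type spending: very little alpha spent at early looks."""
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / sqrt(t)))

def pocock_spending(t, alpha=0.05):
    """Pocock-type spending: alpha spent more evenly across looks."""
    return alpha * log(1.0 + (e - 1.0) * t)

if __name__ == "__main__":
    for t in (0.25, 0.5, 0.75, 1.0):   # assumed information fractions of the interim looks
        print(f"t={t:.2f}  OBF={obrien_fleming_spending(t):.4f}  Pocock={pocock_spending(t):.4f}")
```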
Chung, Sukhoon; Rhee, Hyunsill; Suh, Yongmoo
2010-01-01
Objectives This study sought to find answers to the following questions: 1) Can we predict whether a patient will revisit a healthcare center? 2) Can we anticipate diseases of patients who revisit the center? Methods For the first question, we applied 5 classification algorithms (decision tree, artificial neural network, logistic regression, Bayesian networks, and Naïve Bayes) and the stacking-bagging method for building classification models. To solve the second question, we performed sequential pattern analysis. Results We determined: 1) In general, the most influential variables which impact whether a patient of a public healthcare center will revisit it or not are personal burden, insurance bill, period of prescription, age, systolic pressure, name of disease, and postal code. 2) The best plain classification model is dependent on the dataset. 3) Based on average of classification accuracy, the proposed stacking-bagging method outperformed all traditional classification models and our sequential pattern analysis revealed 16 sequential patterns. Conclusions Classification models and sequential patterns can help public healthcare centers plan and implement healthcare service programs and businesses that are more appropriate to local residents, encouraging them to revisit public health centers. PMID:21818426
Koopmeiners, Joseph S; Feng, Ziding
2011-01-01
The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves.
Effective Identification of Similar Patients Through Sequential Matching over ICD Code Embedding.
Nguyen, Dang; Luo, Wei; Venkatesh, Svetha; Phung, Dinh
2018-04-11
Evidence-based medicine often involves the identification of patients with similar conditions, which are often captured in ICD (International Classification of Diseases (World Health Organization 2013)) code sequences. With no satisfying prior solutions for matching ICD-10 code sequences, this paper presents a method which effectively captures the clinical similarity among routine patients who have multiple comorbidities and complex care needs. Our method leverages the recent progress in representation learning of individual ICD-10 codes, and it explicitly uses the sequential order of codes for matching. Empirical evaluation on a state-wide cancer data collection shows that our proposed method achieves significantly higher matching performance compared with state-of-the-art methods ignoring the sequential order. Our method better identifies similar patients in a number of clinical outcomes including readmission and mortality outlook. Although this paper focuses on ICD-10 diagnosis code sequences, our method can be adapted to work with other codified sequence data.
Native Frames: Disentangling Sequential from Concerted Three-Body Fragmentation
NASA Astrophysics Data System (ADS)
Rajput, Jyoti; Severt, T.; Berry, Ben; Jochim, Bethany; Feizollah, Peyman; Kaderiya, Balram; Zohrabi, M.; Ablikim, U.; Ziaee, Farzaneh; Raju P., Kanaka; Rolles, D.; Rudenko, A.; Carnes, K. D.; Esry, B. D.; Ben-Itzhak, I.
2018-03-01
A key question concerning the three-body fragmentation of polyatomic molecules is the distinction of sequential and concerted mechanisms, i.e., the stepwise or simultaneous cleavage of bonds. Using laser-driven fragmentation of OCS into O+ + C+ + S+ and employing coincidence momentum imaging, we demonstrate a novel method that enables the clear separation of sequential and concerted breakup. The separation is accomplished by analyzing the three-body fragmentation in the native frame associated with each step and taking advantage of the rotation of the intermediate molecular fragment, CO2+ or CS2+, before its unimolecular dissociation. This native-frame method works for any projectile (electrons, ions, or photons), provides details on each step of the sequential breakup, and enables the retrieval of the relevant spectra for sequential and concerted breakup separately. Specifically, this allows the determination of the branching ratio of all these processes in OCS3+ breakup. Moreover, we find that the first step of sequential breakup is tightly aligned along the laser polarization and identify the likely electronic states of the intermediate dication that undergo unimolecular dissociation in the second step. Finally, the separated concerted breakup spectra show clearly that the central carbon atom is preferentially ejected perpendicular to the laser field.
Niu, Shanzhou; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua
2016-01-01
Cerebral perfusion x-ray computed tomography (PCT) is an important functional imaging modality for evaluating cerebrovascular diseases and has been widely used in clinics over the past decades. However, due to the protocol of PCT imaging with repeated dynamic sequential scans, the associative radiation dose unavoidably increases as compared with that used in conventional CT examinations. Minimizing the radiation exposure in PCT examination is a major task in the CT field. In this paper, considering the rich similarity redundancy information among enhanced sequential PCT images, we propose a low-dose PCT image restoration model by incorporating the low-rank and sparse matrix characteristic of sequential PCT images. Specifically, the sequential PCT images were first stacked into a matrix (i.e., low-rank matrix), and then a non-convex spectral norm/regularization and a spatio-temporal total variation norm/regularization were then built on the low-rank matrix to describe the low rank and sparsity of the sequential PCT images, respectively. Subsequently, an improved split Bregman method was adopted to minimize the associative objective function with a reasonable convergence rate. Both qualitative and quantitative studies were conducted using a digital phantom and clinical cerebral PCT datasets to evaluate the present method. Experimental results show that the presented method can achieve images with several noticeable advantages over the existing methods in terms of noise reduction and universal quality index. More importantly, the present method can produce more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps. PMID:27440948
C-learning: A new classification framework to estimate optimal dynamic treatment regimes.
Zhang, Baqun; Zhang, Min
2017-12-11
A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem and propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage to the first stage. C-learning is a direct optimization method that targets decision rules by exploiting powerful optimization/classification techniques, and it allows incorporation of patients' characteristics and treatment history to improve performance, hence enjoying advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the studies to be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur at the observation level (time-varying) or the subject level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogeneous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
Modeling Eye Gaze Patterns in Clinician-Patient Interaction with Lag Sequential Analysis
Montague, E; Xu, J; Asan, O; Chen, P; Chewning, B; Barrett, B
2011-01-01
Objective The aim of this study was to examine whether lag-sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multi-user health care settings where trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Background Nonverbal communication patterns are important aspects of clinician-patient interactions and may impact patient outcomes. Method Eye gaze behaviors of clinicians and patients in 110-videotaped medical encounters were analyzed using the lag-sequential method to identify significant behavior sequences. Lag-sequential analysis included both event-based lag and time-based lag. Results Results from event-based lag analysis showed that the patients’ gaze followed that of clinicians, while clinicians did not follow patients. Time-based sequential analysis showed that responses from the patient usually occurred within two seconds after the initial behavior of the clinician. Conclusion Our data suggest that the clinician’s gaze significantly affects the medical encounter but not the converse. Application Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs. PMID:22046723
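A compact sketch of the event-based lag-1 sequential analysis described above is given below: transitions between coded gaze behaviours are tallied and converted to adjusted residuals (Allison-Liker style z-scores) that flag antecedent-target sequences occurring more often than chance. The behaviour codes and the short example sequence are illustrative assumptions, and the time-based variant is not shown.

```python
# Event-based lag-1 sequential analysis: tally antecedent->target transitions and
# compute adjusted residuals (Allison-Liker style z-scores).  The behaviour codes and
# the short example sequence are illustrative assumptions.
from collections import Counter
from itertools import product
from math import sqrt

def lag1_adjusted_residuals(events):
    pairs = list(zip(events, events[1:]))
    n = len(pairs)
    joint = Counter(pairs)
    given = Counter(a for a, _ in pairs)        # how often each behaviour is the antecedent
    target = Counter(b for _, b in pairs)       # how often each behaviour is the target
    z = {}
    for a, b in product(sorted(set(events)), repeat=2):
        p_b = target[b] / n
        expected = given[a] * p_b
        var = given[a] * p_b * (1 - p_b) * (1 - given[a] / n)
        z[(a, b)] = (joint[(a, b)] - expected) / sqrt(var) if var > 0 else 0.0
    return z

if __name__ == "__main__":
    gaze = ["clin_at_patient", "pat_at_clinician", "clin_at_chart",
            "pat_at_clinician", "clin_at_patient", "pat_at_clinician"]
    for pair, score in lag1_adjusted_residuals(gaze).items():
        print(pair, round(score, 2))
```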
Constrained optimization of sequentially generated entangled multiqubit states
NASA Astrophysics Data System (ADS)
Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique
2009-08-01
We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.
Sequential causal inference: Application to randomized trials of adaptive treatment strategies
Dawson, Ree; Lavori, Philip W.
2009-01-01
SUMMARY Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard’ approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step’ estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714
ERIC Educational Resources Information Center
Yuvaci, Ibrahim; Demir, Selçuk Besir
2016-01-01
This paper aims to determine the relation between reading comprehension skill and TEOG success. In this research, a mixed research method, the sequential explanatory mixed design, is utilized to examine the relation between the reading comprehension skills and TEOG success of 8th grade students thoroughly. In explanatory sequential mixed design…
Xu, Yan; Wu, Qian; Shimatani, Yuji; Yamaguchi, Koji
2015-10-07
Due to the lack of regeneration methods, the reusability of nanofluidic chips is a significant technical challenge impeding the efficient and economic promotion of both fundamental research and practical applications on nanofluidics. Herein, a simple method for the total regeneration of glass nanofluidic chips was described. The method consists of sequential thermal treatment with six well-designed steps, which correspond to four sequential thermal and thermochemical decomposition processes, namely, dehydration, high-temperature redox chemical reaction, high-temperature gasification, and cooling. The method enabled the total regeneration of typical 'dead' glass nanofluidic chips by eliminating physically clogged nanoparticles in the nanochannels, removing chemically reacted organic matter on the glass surface and regenerating permanent functional surfaces of dissimilar materials localized in the nanochannels. The method provides a technical solution to significantly improve the reusability of glass nanofluidic chips and will be useful for the promotion and acceleration of research and applications on nanofluidics.
Sequential Pattern Analysis: Method and Application in Exploring How Students Develop Concept Maps
ERIC Educational Resources Information Center
Chiu, Chiung-Hui; Lin, Chien-Liang
2012-01-01
Concept mapping is a technique that represents knowledge in graphs. It has been widely adopted in science education and cognitive psychology to aid learning and assessment. To realize the sequential manner in which students develop concept maps, most research relies upon human-dependent, qualitative approaches. This article proposes a method for…
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
ERIC Educational Resources Information Center
Kapland, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…
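To make the family of models concrete, the sketch below estimates the simplest member, a manifest Markov model, as a stage-to-stage transition matrix computed from observed stage sequences; the stage labels and sequences are illustrative assumptions, and the latent-variable extensions are not shown.

```python
# Manifest Markov model: estimate a stage-to-stage transition matrix from observed
# stage sequences.  Stage labels and sequences are illustrative assumptions.
import numpy as np

def transition_matrix(sequences, n_stages):
    counts = np.zeros((n_stages, n_stages))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

if __name__ == "__main__":
    stage_sequences = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2], [0, 0, 0, 1, 2]]  # 3 subjects, 5 waves
    print(transition_matrix(stage_sequences, n_stages=3).round(2))
```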
Optimum target sizes for a sequential sawing process
H. Dean Claxton
1972-01-01
A method for solving a class of problems in random sequential processes is presented. Sawing cedar pencil blocks is used to illustrate the method. Equations are developed for the function representing loss from improper sizing of blocks. A weighted over-all distribution for sawing and drying operations is developed and graphed. Loss minimizing changes in the control...
USDA-ARS?s Scientific Manuscript database
We developed a sequential Monte Carlo filter to estimate the states and the parameters in a stochastic model of Japanese Encephalitis (JE) spread in the Philippines. This method is particularly important for its adaptability to the availability of new incidence data. This method can also capture the...
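The sketch below illustrates the general bootstrap particle filter idea behind such a sequential Monte Carlo approach, using a toy stochastic SIR-type model with Poisson-observed case counts; the dynamics, observation model, parameters and data are illustrative assumptions and do not reproduce the authors' Japanese Encephalitis model.

```python
# Bootstrap particle filter for a toy stochastic SIR-type model with Poisson-observed
# incidence; dynamics, observation model, parameters and data are assumptions.
import numpy as np
from scipy.stats import poisson

def particle_filter(cases, n_particles=2000, population=1e5, gamma=0.1, seed=0):
    """Return the posterior mean transmission rate after assimilating each observation."""
    rng = np.random.default_rng(seed)
    beta = rng.uniform(0.05, 0.5, n_particles)              # unknown parameter, one per particle
    S = np.full(n_particles, population - 10.0)
    I = np.full(n_particles, 10.0)
    means = []
    for y in cases:
        new_inf = rng.poisson(beta * S * I / population)     # stochastic state propagation
        S = np.maximum(S - new_inf, 0.0)
        I = np.maximum(I + new_inf - gamma * I, 1e-6)
        log_w = poisson.logpmf(y, np.maximum(new_inf, 1e-6)) # weight particles by the new data
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w) # multinomial resampling
        beta, S, I = beta[idx], S[idx], I[idx]
        means.append(float(beta.mean()))
    return means

if __name__ == "__main__":
    weekly_cases = [12, 18, 25, 40, 55, 70, 90, 110]         # assumed incidence counts
    print([round(b, 3) for b in particle_filter(weekly_cases)])
```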
Characterization Results for the March 2016 H-Tank Farm 2H Evaporator Overhead Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, J. C.
This report contains the radioanalytical results of the 2H evaporator overhead sample received at SRNL on March 16, 2016. Specifically, concentrations of 137Cs, 90Sr, and 129I are reported and compared to the corresponding Waste Acceptance Criteria (WAC) limits of the Effluent Treatment Project (ETP) Waste Water Collection Tank (WWCT) (rev. 6). All of the radionuclide concentrations in the sample were found to be in compliance with the ETP WAC limits. Revision 1 of this document corrects the cumulative beta count initially reported for 90Sr content with the sole 90Sr count obtained after recharacterization of the sample. The initial data were found to be a cumulative beta count rather than the 90Sr count requested.
Optimization of the gypsum-based materials by the sequential simplex method
NASA Astrophysics Data System (ADS)
Doleželová, Magdalena; Vimmrová, Alena
2017-11-01
The application of the sequential simplex optimization method to the design of gypsum-based materials is described. The principles of the simplex method are explained, and several examples of its use for the optimization of lightweight gypsum and ternary gypsum-based materials are given. By this method, lightweight gypsum-based materials with desired properties and a ternary gypsum-based material with higher strength (16 MPa) were successfully developed. The simplex method is a useful tool for optimizing gypsum-based materials, but the objective of the optimization has to be formulated appropriately.
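In sequential simplex experimentation each vertex of the simplex corresponds to a trial mixture whose response is measured, and the worst vertex is repeatedly replaced. The sketch below conveys the search geometry with the closely related Nelder-Mead simplex algorithm on a made-up two-factor surrogate for compressive strength; the objective function, factor names and ranges are illustrative assumptions, not the authors' gypsum formulations.

```python
# Nelder-Mead simplex search on a made-up two-factor surrogate for compressive strength;
# in a real sequential simplex experiment each vertex is a mixed and tested batch.
from scipy.optimize import minimize

def negative_strength(x):
    """Toy stand-in for measured strength (MPa) vs water/binder ratio and filler fraction."""
    wb, filler = x
    strength = 20.0 - 60.0 * (wb - 0.55) ** 2 - 25.0 * (filler - 0.30) ** 2
    return -strength                     # minimised, so the strength is negated

if __name__ == "__main__":
    result = minimize(negative_strength, x0=[0.70, 0.10], method="Nelder-Mead")
    print("water/binder:", round(result.x[0], 3),
          " filler fraction:", round(result.x[1], 3),
          " predicted strength:", round(-result.fun, 1), "MPa")
```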
2017-01-01
Objective Anticipation of opponent actions, through the use of advanced (i.e., pre-event) kinematic information, can be trained using video-based temporal occlusion. Typically, this involves isolated opponent skills/shots presented as trials in a random order. However, two different areas of research concerning representative task design and contextual (non-kinematic) information, suggest this structure of practice restricts expert performance. The aim of this study was to examine the effect of a sequential structure of practice during video-based training of anticipatory behavior in tennis, as well as the transfer of these skills to the performance environment. Methods In a pre-practice-retention-transfer design, participants viewed life-sized video of tennis rallies across practice in either a sequential order (sequential group), in which participants were exposed to opponent skills/shots in the order they occur in the sport, or a non-sequential (non-sequential group) random order. Results In the video-based retention test, the sequential group was significantly more accurate in their anticipatory judgments when the retention condition replicated the sequential structure compared to the non-sequential group. In the non-sequential retention condition, the non-sequential group was more accurate than the sequential group. In the field-based transfer test, overall decision time was significantly faster in the sequential group compared to the non-sequential group. Conclusion Findings highlight the benefits of a sequential structure of practice for the transfer of anticipatory behavior in tennis. We discuss the role of contextual information, and the importance of representative task design, for the testing and training of perceptual-cognitive skills in sport. PMID:28355263
Saito, Shota; Hirata, Yoshito; Sasahara, Kazutoshi; Suzuki, Hideyuki
2015-01-01
Micro-blogging services, such as Twitter, offer opportunities to analyse user behaviour. Discovering and distinguishing behavioural patterns in micro-blogging services is valuable. However, it is difficult and challenging to distinguish users, and to track the temporal development of collective attention within distinct user groups in Twitter. In this paper, we formulate this problem as tracking matrices decomposed by Nonnegative Matrix Factorisation for time-sequential matrix data, and propose a novel extension of Nonnegative Matrix Factorisation, which we refer to as Time Evolving Nonnegative Matrix Factorisation (TENMF). In our method, we describe users and words posted in some time interval by a matrix, and use several matrices as time-sequential data. Subsequently, we apply Time Evolving Nonnegative Matrix Factorisation to these time-sequential matrices. TENMF can decompose time-sequential matrices, and can track the connection among decomposed matrices, whereas previous NMF decomposes a matrix into two lower dimension matrices arbitrarily, which might lose the time-sequential connection. Our proposed method has an adequately good performance on artificial data. Moreover, we present several results and insights from experiments using real data from Twitter.
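To convey the idea of tracking a decomposition across time-sequential matrices, the sketch below applies standard NMF to each user-by-word matrix in turn and warm-starts each window's factors from the previous window. This is only an illustration of the linkage idea, not the TENMF update rules, and the matrix sizes and data are assumptions.

```python
# Track NMF factors across time-sequential user-by-word matrices by warm-starting each
# window with the previous window's factors (illustration only, not the TENMF rules).
import numpy as np
from sklearn.decomposition import NMF

def sequential_nmf(matrices, n_topics=5, seed=0):
    """matrices: list of nonnegative (users x words) arrays, one per time interval."""
    W = H = None
    factors = []
    for X in matrices:
        if W is None:
            model = NMF(n_components=n_topics, init="random", random_state=seed, max_iter=500)
            W = model.fit_transform(X)
        else:
            model = NMF(n_components=n_topics, init="custom", max_iter=500)
            W = model.fit_transform(X, W=W, H=H)     # warm start links consecutive windows
        H = model.components_
        factors.append((W.copy(), H.copy()))
    return factors

if __name__ == "__main__":
    windows = [np.abs(np.random.default_rng(t).normal(size=(50, 200))) for t in range(4)]
    for t, (W, H) in enumerate(sequential_nmf(windows)):
        print(f"window {t}: W {W.shape}, H {H.shape}")
```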
NASA Astrophysics Data System (ADS)
Park, Hyun-Woo; Song, Aeran; Kwon, Sera; Choi, Dukhyun; Kim, Younghak; Jun, Byung-Hyuk; Kim, Han-Ki; Chung, Kwun-Bum
2018-03-01
This study suggests a sequential ambient annealing process as an excellent post-treatment method to enhance the device performance and stability of W (tungsten) doped InZnO thin film transistors (WIZO-TFTs). Sequential ambient annealing at 250 °C significantly enhanced the device performance and stability of WIZO-TFTs, compared with other post-treatment methods, such as air ambient annealing and vacuum ambient annealing at 250 °C. To understand the enhanced device performance and stability of WIZO-TFT with sequential ambient annealing, we investigate the correlations between device performance and stability and electronic structures, such as band alignment, a feature of the conduction band, and band edge states below the conduction band. The enhanced performance of WIZO-TFTs with sequential ambient annealing is related to the modification of the electronic structure. In addition, the dominant mechanism responsible for the enhanced device performance and stability of WIZO-TFTs is considered to be a change in the shallow-level and deep-level band edge states below the conduction band.
Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive detection. Compared with the commonly adopted detection methods incorporating statistical theory, the sequential Bayesian approach offers the advantage of shorter verification time during the analysis of spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the sequential Bayesian methodology was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were obtained from an event-sequence generator based on Monte Carlo sampling theory to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
Harold R. Offord
1966-01-01
Sequential sampling based on a negative binomial distribution of ribes populations required less than half the time taken by regular systematic line transect sampling in a comparison test. It gave the same control decision as the regular method in 9 of 13 field trials. A computer program that permits sequential plans to be built readily for other white pine regions is...
Lifelong Transfer Learning for Heterogeneous Teams of Agents in Sequential Decision Processes
2016-06-01
This report describes lifelong transfer learning methods for heterogeneous teams of agents performing sequential decision-making (SDM) tasks in dynamic environments. The methods were evaluated with simulated and physical robots, on both simple benchmark tasks and more complex aerial and ground robot tasks, including the problem of learning controllers for robots in the presence of novel disturbances. Subject terms: sequential decision making, lifelong learning, transfer learning.
Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2010-01-01
When facing a conjunction between space objects, decision makers must choose whether or not to maneuver for collision avoidance. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve the desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
Faye, Sherry A.; Richards, Jason M.; Gallardo, Athena M.; ...
2017-02-07
Sequential extraction is a useful technique for assessing the potential to leach actinides from soils; however, current literature lacks uniformity in experimental details, making direct comparison of results impossible. This work continued development toward a standardized five-step sequential extraction protocol by analyzing extraction behaviors of 232Th, 238U, 239,240Pu and 241Am from lake and ocean sediment reference materials. Results produced a standardized procedure after creating more defined reaction conditions to improve method repeatability. A NaOH fusion procedure is recommended following sequential leaching for the complete dissolution of insoluble species.
Liao, Yuan-Xi; Xing, Chun-Hui; Israel, Matthew; Hu, Qiao-Sheng
2011-01-01
Sequential aldol condensation of aldehydes with methyl ketones followed by transition metal-catalyzed addition reactions of arylboronic acids to form β-substituted ketones is described. By using the 1,1′-spirobiindane-7,7′-diol (SPINOL)-based phosphite, an asymmetric version of this type of sequential reaction, with up to 92% ee, was also realized. Our study provided an efficient method to access β-substituted ketones and might lead to the development of other sequential/tandem reactions with transition metal-catalyzed addition reactions as the key step. PMID:21417359
Satínský, Dalibor; Huclová, Jitka; Ferreira, Raquel L C; Montenegro, Maria Conceição B S M; Solich, Petr
2006-02-13
The porous monolithic columns show high performance at relatively low pressure. The coupling of short monoliths with sequential injection technique (SIA) results in a new approach to implementation of separation step to non-separation low-pressure method. In this contribution, a new separation method for simultaneous determination of ambroxol, methylparaben and benzoic acid was developed based on a novel reversed-phase sequential injection chromatography (SIC) technique with UV detection. A Chromolith SpeedROD RP-18e, 50-4.6 mm column with 10 mm precolumn and a FIAlab 3000 system with a six-port selection valve and 5 ml syringe were used for sequential injection chromatographic separations in our study. The mobile phase used was acetonitrile-tetrahydrofuran-0.05M acetic acid (10:10:90, v/v/v), pH 3.75 adjusted with triethylamine, flow rate 0.48 mlmin(-1), UV-detection was at 245 nm. The analysis time was <11 min. A new SIC method was validated and compared with HPLC. The method was found to be useful for the routine analysis of the active compounds ambroxol and preservatives (methylparaben or benzoic acid) in various pharmaceutical syrups and drops.
Rise and fall of political complexity in island South-East Asia and the Pacific.
Currie, Thomas E; Greenhill, Simon J; Gray, Russell D; Hasegawa, Toshikazu; Mace, Ruth
2010-10-14
There is disagreement about whether human political evolution has proceeded through a sequence of incremental increases in complexity, or whether larger, non-sequential increases have occurred. The extent to which societies have decreased in complexity is also unclear. These debates have continued largely in the absence of rigorous, quantitative tests. We evaluated six competing models of political evolution in Austronesian-speaking societies using phylogenetic methods. Here we show that in the best-fitting model political complexity rises and falls in a sequence of small steps. This is closely followed by another model in which increases are sequential but decreases can be either sequential or in bigger drops. The results indicate that large, non-sequential jumps in political complexity have not occurred during the evolutionary history of these societies. This suggests that, despite the numerous contingent pathways of human history, there are regularities in cultural evolution that can be detected using computational phylogenetic methods.
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count, radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a monoenergetic decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, then repeating the process for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
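The sketch below illustrates the per-event sequential likelihood ratio test idea on interarrival times alone, contrasting an assumed target count rate with an assumed background rate and using Wald's threshold approximations for the two stopping conditions. The rates, error targets and simulated photon stream are assumptions, and the energy-channel discrimination and Bayesian parameter updating of the described system are not reproduced.

```python
# Per-photon sequential likelihood ratio test on interarrival times: target count rate
# versus background rate, with Wald's threshold approximations.  Rates, error targets
# and the simulated stream are assumptions; energy discrimination is omitted.
import random
from math import log

def wald_thresholds(p_false_alarm=0.01, p_miss=0.01):
    return log(p_miss / (1 - p_false_alarm)), log((1 - p_miss) / p_false_alarm)

def sequential_test(interarrival_times, rate_target=5.0, rate_background=1.0,
                    p_false_alarm=0.01, p_miss=0.01):
    lower, upper = wald_thresholds(p_false_alarm, p_miss)
    llr = 0.0
    for k, dt in enumerate(interarrival_times, start=1):
        # exponential interarrival likelihood ratio, target rate vs background rate
        llr += log(rate_target / rate_background) - (rate_target - rate_background) * dt
        if llr >= upper:
            return "target identified", k
        if llr <= lower:
            return "target rejected", k
    return "undecided", len(interarrival_times)

if __name__ == "__main__":
    random.seed(2)
    photons = [random.expovariate(5.0) for _ in range(200)]   # simulated target-like stream
    print(sequential_test(photons))
```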
An adaptive two-stage sequential design for sampling rare and clustered populations
Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.
2008-01-01
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
NASA Astrophysics Data System (ADS)
Dassekpo, Jean-Baptiste Mawulé; Zha, Xiaoxiong; Zhan, Jiapeng; Ning, Jiaqian
Geopolymer is an energy-efficient and sustainable material that is currently used in the construction industry as an alternative to Portland cement. As a new material, a specific mix design method is essential, and efforts have been made to develop a mix design procedure with the main focus on achieving better compressive strength and economy. In this paper, the sequential addition of synthesis parameters such as fly ash-sand, alkaline liquids, plasticizer and additional water at well-defined time intervals was investigated. A total of 4 mix procedures were used to study the compressive performance of fly ash-based geopolymer mortar, and the results of each method were analyzed and discussed. Experimental results show that the sequential addition of sodium hydroxide (NaOH), sodium silicate (Na2SiO3) and plasticizer (PL), followed by adding water (WA), considerably increases the compressive strength of the geopolymer-based mortar. These results clearly demonstrate the highly significant influence of the sequential addition of synthesis parameters on the compressive properties of geopolymer materials, and also provide a new mixing method for the preparation of geopolymer paste, mortar and concrete.
Mirza, Bilal; Lin, Zhiping
2016-08-01
In this paper, a meta-cognitive online sequential extreme learning machine (MOS-ELM) is proposed for class imbalance and concept drift learning. In MOS-ELM, meta-cognition is used to self-regulate the learning by selecting suitable learning strategies for class imbalance and concept drift problems. MOS-ELM is the first sequential learning method to alleviate the imbalance problem for both binary class and multi-class data streams with concept drift. In MOS-ELM, a new adaptive window approach is proposed for concept drift learning. A single output update equation is also proposed which unifies various application specific OS-ELM methods. The performance of MOS-ELM is evaluated under different conditions and compared with methods each specific to some of the conditions. On most of the datasets in comparison, MOS-ELM outperforms the competing methods. Copyright © 2016 Elsevier Ltd. All rights reserved.
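The sketch below shows the core online sequential extreme learning machine (OS-ELM) update that such methods build on: a fixed random hidden layer whose output weights are updated chunk by chunk with recursive least squares. The network size, sigmoid activation, ridge term and toy data are illustrative assumptions, and the meta-cognitive, class-imbalance and concept-drift components of MOS-ELM are not shown.

```python
# Core OS-ELM update: fixed random hidden layer plus a recursive least-squares output-
# weight update per data chunk.  Network size, activation and toy data are assumptions.
import numpy as np

class OSELM:
    def __init__(self, n_inputs, n_hidden=40, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))   # random input weights, never updated
        self.b = rng.normal(size=n_hidden)
        self.P = None                                    # running inverse-covariance term
        self.beta = None                                 # output weights

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def partial_fit(self, X, T):
        H = self._hidden(X)
        if self.beta is None:                            # initialisation chunk
            self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
            self.beta = self.P @ H.T @ T
        else:                                            # recursive least-squares update
            K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
            self.P = self.P - self.P @ H.T @ K @ H @ self.P
            self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 4))
    T = (X[:, :1] - X[:, 1:2] > 0).astype(float)         # single-column 0/1 target
    model = OSELM(n_inputs=4).partial_fit(X[:100], T[:100])
    for start in range(100, 300, 50):                     # remaining data arrives in chunks
        model.partial_fit(X[start:start + 50], T[start:start + 50])
    print("training accuracy:", round(float(((model.predict(X) > 0.5) == T).mean()), 2))
```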
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Lacy, Fred; Carriere, Patrick
2015-05-01
Sequential test algorithms are playing increasingly important roles for quick detecting network intrusions such as portscanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and average detection time up to arbitrarily pre-specified accuracy.
Optimizing Standard Sequential Extraction Protocol With Lake And Ocean Sediments
The environmental mobility/availability behavior of radionuclides in soils and sediments depends on their speciation. Experiments have been carried out to develop a simple but robust radionuclide sequential extraction method for identification of radionuclide partitioning in sed...
1984-06-01
Conference program excerpt: a session on sequential testing (Bldg. A, Room C, 1300-1545) including the contributed paper "A Truncated Sequential Probability Ratio Test". Subject terms listed for the proceedings include operational testing, reliability, random numbers, bootstrap methods, missing data, sequential testing, fire support, complex computer models, and carcinogenesis studies; the scope of the contributed papers can be ascertained from their titles.
de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M
2018-04-01
Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
Radioanalytical Chemistry for Automated Nuclear Waste Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devol, Timothy A.
2005-06-01
Comparison of different pulse shape discrimination methods was performed under two different experimental conditions and the best method was identified. Beta/gamma discrimination of 90Sr/90Y and 137Cs was performed using a phoswich detector made of BC400 (2.5 cm OD x 1.2 cm) and BGO (2.5 cm OD x 2.5 cm) scintillators. Alpha/gamma discrimination of 210Po and 137Cs was performed using a CsI:Tl (2.8 x 1.4 x 1.4 cm3) scintillation crystal. The pulse waveforms were digitized with a DGF-4c (X-Ray Instrumentation Associates) and analyzed offline with IGOR Pro software (Wavemetrics, Inc.). The four pulse shape discrimination methods that were compared include: rise time discrimination, digital constant fraction discrimination, charge ratio, and constant time discrimination (CTD) methods. The CTD method is the ratio of the pulse height at a particular time after the beginning of the pulse to the time at the maximum pulse height. The charge comparison method resulted in a Figure of Merit (FoM) of 3.3 (9.9 % spillover) and 3.7 (0.033 % spillover) for the phoswich and the CsI:Tl scintillator setups, respectively. The CTD method resulted in a FoM of 3.9 (9.2 % spillover) and 3.2 (0.25 % spillover), respectively. Inverting the pulse shape data typically resulted in a significantly higher FoM than conventional methods, but there was no reduction in % spillover values. This outcome illustrates that the FoM may not be a good scheme for quantifying a system's ability to perform pulse shape discrimination. Comparison of several pulse shape discrimination (PSD) methods was performed as a means to compare traditional analog and digital PSD methods on the same scintillation pulses. The X-ray Instrumentation Associates DGF-4C (40 Msps, 14-bit) was used to digitize waveforms from a CsI:Tl crystal and BC400/BGO phoswich detector.
Koopmeiners, Joseph S.; Feng, Ziding
2015-01-01
Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence can not be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curve. The small sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180
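The abstract's central point, that the prevalence enters directly into the predictive values, comes down to the Bayes relations sketched below, which convert sensitivity and specificity at a given biomarker cutoff into PPV and NPV for an assumed or estimated prevalence; the operating point and prevalence used in the example are illustrative assumptions.

```python
# Bayes relations behind the PPV/NPV curves: at a given cutoff, sensitivity and
# specificity combine with the prevalence, which is why prevalence must be estimated
# (or assumed) under case-control sampling.  The numbers are illustrative assumptions.
def ppv(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    return tp / (tp + fp)

def npv(sensitivity, specificity, prevalence):
    tn = specificity * (1.0 - prevalence)
    fn = (1.0 - sensitivity) * prevalence
    return tn / (tn + fn)

if __name__ == "__main__":
    print(round(ppv(0.80, 0.90, 0.10), 3), round(npv(0.80, 0.90, 0.10), 3))
```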
Prisant, L M; Resnick, L M; Hollenberg, S M
2001-06-01
The aim of this study was to compare sequential same-arm blood pressure measurements obtained with a mercury sphygmomanometer against oscillometric blood pressure measurements from a device that also determines arterial elasticity. A prospective, multicentre, clinical study evaluated sequential same-arm blood pressure measurements using a mercury sphygmomanometer (Baumanometer, W. A. Baum Co., Inc., Copiague, New York, USA) and an oscillometric non-invasive device that calculates arterial elasticity (CVProfilor DO-2020 Cardiovascular Profiling System, Hypertension Diagnostics, Inc., Eagan, Minnesota, USA). Blood pressure was measured supine in triplicate, 3 min apart, in a randomized sequence after a period of rest. The study population of 230 normotensive and hypertensive subjects included 57% females, 51% Caucasians, and 33% African Americans. The mean difference between test methods for systolic blood pressure, diastolic blood pressure, and heart rate was -3.2 +/- 6.9 mmHg, +0.8 +/- 5.9 mmHg, and +1.0 +/- 5.7 beats/minute, respectively. For systolic and diastolic blood pressure, 60.9 and 70.4% of sequential measurements by each method were within +/- 5 mmHg. Few or no points fell beyond the mean +/- 2 standard deviation lines for each cuff bladder size. In sequential same-arm measurements, the CVProfilor DO-2020 Cardiovascular Profiling System measures blood pressure by an oscillometric method (dynamic linear deflation) with reasonable agreement with a mercury sphygmomanometer.
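The mean difference and the mean +/- 2 SD lines reported above are the usual Bland-Altman limits of agreement; the short sketch below computes them for paired readings, with the example values being made-up illustrations rather than study data.

```python
# Bland-Altman style agreement summary (mean difference and mean +/- 2 SD limits) for
# paired readings from two methods; the example values are made up, not study data.
import numpy as np

def limits_of_agreement(method_a, method_b):
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    mean_diff, sd = diff.mean(), diff.std(ddof=1)
    return mean_diff, mean_diff - 2 * sd, mean_diff + 2 * sd

if __name__ == "__main__":
    mercury       = [118, 132, 141, 126, 150, 137]   # systolic readings, mmHg (illustrative)
    oscillometric = [121, 134, 138, 130, 152, 141]
    print([round(v, 1) for v in limits_of_agreement(mercury, oscillometric)])
```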
Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.
Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty
2011-10-01
The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
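Event-based lag sequential analysis of this kind reduces to tabulating lag-1 transitions between coded behaviors and testing each cell against its expected frequency, commonly with adjusted residuals. A minimal sketch under those assumptions, using hypothetical gaze codes rather than the study's coding scheme:

import numpy as np

def lag1_adjusted_residuals(sequence, codes):
    """Lag-1 transition counts and Allison-Liker style adjusted residuals (z-scores)."""
    k = len(codes)
    idx = {c: i for i, c in enumerate(codes)}
    counts = np.zeros((k, k))
    for a, b in zip(sequence[:-1], sequence[1:]):
        counts[idx[a], idx[b]] += 1
    n = counts.sum()
    row_p = counts.sum(axis=1, keepdims=True) / n
    col_p = counts.sum(axis=0, keepdims=True) / n
    expected = n * row_p * col_p
    var = expected * (1 - row_p) * (1 - col_p)
    z = (counts - expected) / np.sqrt(var)   # |z| > 1.96 flags a significant behavior sequence
    return counts, z

codes = ["patient", "chart", "clinician"]          # hypothetical gaze targets
seq = ["clinician", "patient", "patient", "chart", "clinician", "patient"] * 20
counts, z = lag1_adjusted_residuals(seq, codes)
print(z.round(2))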
Ichikawa, Shota; Kamishima, Tamotsu; Sutherland, Kenneth; Fukae, Jun; Katayama, Kou; Aoki, Yuko; Okubo, Takanobu; Okino, Taichi; Kaneda, Takahiko; Takagi, Satoshi; Tanimura, Kazuhide
2017-10-01
We have developed a refined computer-based method to detect joint space narrowing (JSN) progression with the joint space narrowing progression index (JSNPI) by superimposing sequential hand radiographs. The purpose of this study is to assess the validity of this computer-based method using images obtained from multiple institutions in rheumatoid arthritis (RA) patients. Sequential hand radiographs of 42 patients (37 females and 5 males) with RA from two institutions were analyzed by the computer-based method and by visual scoring systems as a standard of reference. A JSNPI above the smallest detectable difference (SDD) defined JSN progression at the joint level. The sensitivity and specificity of the computer-based method for JSN progression were calculated using the SDD and a receiver operating characteristic (ROC) curve. Out of 314 metacarpophalangeal joints, 34 joints progressed based on the SDD, while 11 joints widened. Twenty-one joints progressed in the computer-based method, 11 joints in the scoring systems, and 13 joints in both methods. Based on the SDD, sensitivity was lower (54.2%) and specificity higher (92.8%). At the most discriminant cutoff point according to the ROC curve, the sensitivity and specificity were 70.8% and 81.7%, respectively. The proposed computer-based method provides quantitative measurement of JSN progression using sequential hand radiographs and may be a useful tool in the follow-up assessment of joint damage in RA patients.
ERIC Educational Resources Information Center
Karademir, Yavuz; Demir, Selcuk Besir
2015-01-01
The aim of this study is to ascertain the problems social studies teachers face in the teaching of topics covered in the 8th grade TRHRK Course. The study was conducted in line with the explanatory sequential mixed methods design, one of the mixed methods research designs. The study involves three phases. In the first step, exploratory process…
Jha, Vishwajeet; Kondekar, Nagendra B; Kumar, Pradeep
2010-06-18
A novel and general method for asymmetric synthesis of both syn/anti-1,3-amino alcohols is described. The method uses proline-catalyzed sequential alpha-aminoxylation/alpha-amination and Horner-Wadsworth-Emmons (HWE) olefination of aldehydes as the key step. By using this method, a short synthesis of a bioactive molecule, (R)-1-((S)-1-methylpyrrolidin-2-yl)-5-phenylpentan-2-ol, is also accomplished.
Cost-effectiveness of simultaneous versus sequential surgery in head and neck reconstruction.
Wong, Kevin K; Enepekides, Danny J; Higgins, Kevin M
2011-02-01
To determine whether simultaneous head and neck reconstruction (ablation and reconstruction overlapping, performed by two teams) is cost-effective compared to sequentially performed surgery (ablation followed by reconstruction). Case-controlled study. Tertiary care hospital. Oncology patients undergoing free flap reconstruction of the head and neck. A matched-pair comparison study was performed with a retrospective chart review examining the total time of surgery for sequential and simultaneous surgery. Nine patients were selected for both the sequential and simultaneous groups. Sequential head and neck reconstruction patients were pair matched with patients who had undergone similar oncologic ablative or reconstructive procedures performed in a simultaneous fashion. A detailed cost analysis using the microcosting method was then undertaken, looking at the direct costs of the surgeons, anesthesiologist, operating room, and nursing. On average, simultaneous surgery required 3 hours 15 minutes less operating time, leading to a cost savings of approximately $1200/case when compared to sequential surgery. This represents approximately a 15% reduction in the cost of the entire operation. Simultaneous head and neck reconstruction is more cost-effective when compared to sequential surgery.
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
NASA Astrophysics Data System (ADS)
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using the parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
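For readers unfamiliar with the two iterative schemes being compared, the sketch below contrasts Jacobi and Gauss-Seidel sweeps on a small 3D Poisson problem with homogeneous Dirichlet boundaries; it is a generic finite-difference illustration, not the EHD micropump model or the MATLAB parallel code of the paper.

import numpy as np

def poisson3d_iterate(f, h, method="jacobi", tol=1e-6, max_iter=10_000):
    """Solve del^2(phi) = f on a cubic grid (phi = 0 on the boundary) with
    7-point finite differences using Jacobi or Gauss-Seidel sweeps."""
    phi = np.zeros_like(f)
    for it in range(max_iter):
        old = phi.copy()
        src = old if method == "jacobi" else phi   # Gauss-Seidel reads freshly updated values
        for i in range(1, f.shape[0] - 1):
            for j in range(1, f.shape[1] - 1):
                for k in range(1, f.shape[2] - 1):
                    phi[i, j, k] = (src[i-1, j, k] + src[i+1, j, k] +
                                    src[i, j-1, k] + src[i, j+1, k] +
                                    src[i, j, k-1] + src[i, j, k+1] -
                                    h**2 * f[i, j, k]) / 6.0
        if np.max(np.abs(phi - old)) < tol:
            return phi, it + 1
    return phi, max_iter

n, h = 9, 1.0 / 8
f = np.full((n, n, n), -1.0)            # uniform source term (illustrative)
for m in ("jacobi", "gauss-seidel"):
    _, iters = poisson3d_iterate(f, h, method=m)
    print(m, "iterations:", iters)       # Gauss-Seidel typically converges in fewer sweeps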
A Novel Ship-Tracking Method for GF-4 Satellite Sequential Images.
Yao, Libo; Liu, Yong; He, You
2018-06-22
The geostationary remote sensing satellite has the capability of wide scanning, persistent observation and operational response, and has tremendous potential for maritime target surveillance. The GF-4 satellite is the first geostationary orbit (GEO) optical remote sensing satellite with medium resolution in China. In this paper, a novel ship-tracking method in GF-4 satellite sequential imagery is proposed. The algorithm has three stages. First, a local visual saliency map based on local peak signal-to-noise ratio (PSNR) is used to detect ships in a single frame of GF-4 satellite sequential images. Second, the accuracy positioning of each potential target is realized by a dynamic correction using the rational polynomial coefficients (RPCs) and automatic identification system (AIS) data of ships. Finally, an improved multiple hypotheses tracking (MHT) algorithm with amplitude information is used to track ships by further removing the false targets, and to estimate ships’ motion parameters. The algorithm has been tested using GF-4 sequential images and AIS data. The results of the experiment demonstrate that the algorithm achieves good tracking performance in GF-4 satellite sequential images and estimates the motion information of ships accurately.
ERIC Educational Resources Information Center
Boekkooi-Timminga, Ellen
Nine methods for automated test construction are described. All are based on the concepts of information from item response theory. Two general kinds of methods for the construction of parallel tests are presented: (1) sequential test design; and (2) simultaneous test design. Sequential design implies that the tests are constructed one after the…
ERIC Educational Resources Information Center
Mathinos, Debra A.; Leonard, Ann Scheier
The study examines the use of LOGO, a computer language, with 19 learning disabled (LD) and 19 non-LD students in grades 4-6. Ss were randomly assigned to one of two instructional groups: sequential or whole-task, each with 10 LD and 10 non-LD students. The sequential method features a carefully ordered plan for teaching LOGO commands; the…
Mketo, Nomvano; Nomngongo, Philiswa N; Ngila, J Catherine
2018-05-15
A rapid three-step sequential extraction method was developed under microwave radiation, followed by inductively coupled plasma-optical emission spectroscopic (ICP-OES) and ion-chromatographic (IC) analysis, for the determination of sulphur forms in coal samples. The experimental conditions of the proposed microwave-assisted sequential extraction (MW-ASE) procedure were optimized by using multivariate mathematical tools. Pareto charts generated from a 2^3 full factorial design showed that extraction time has an insignificant effect on the extraction of sulphur species; therefore, all the sequential extraction steps were performed for 5 min. The optimum values according to the central composite designs and contour plots of the response surface methodology were 200 °C (microwave temperature) and 0.1 g (coal amount) for all the investigated extracting reagents (H2O, HCl and HNO3). When the optimum conditions of the proposed MW-ASE procedure were applied to coal CRMs, SARM 18 showed more organic sulphur (72%) and the other two coal CRMs (SARMs 19 and 20) were dominated by sulphide sulphur species (52-58%). The sum of the sulphur forms from the sequential extraction steps showed consistent agreement (95-96%) with the certified total sulphur values on the coal CRM certificates. This correlation, in addition to the good precision (1.7%) achieved by the proposed procedure, suggests that the sequential extraction method is reliable, accurate and reproducible. To prevent destruction of pyritic and organic sulphur forms in extraction step 1, water was used instead of HCl. Additionally, the notorious acidic mixture (HCl/HNO3/HF) was replaced by a greener reagent (H2O2) in the last extraction step. Therefore, the proposed MW-ASE method can be applied in routine laboratories for the determination of sulphur forms in coal and coal-related matrices. Copyright © 2018 Elsevier B.V. All rights reserved.
Analysis of Optimal Sequential State Discrimination for Linearly Independent Pure Quantum States.
Namkung, Min; Kwon, Younghun
2018-04-25
Recently, J. A. Bergou et al. proposed sequential state discrimination as a new quantum state discrimination scheme. In the scheme, by successfully performing sequential discrimination of a qubit state, receivers Bob and Charlie can share the information of the qubit prepared by the sender Alice. A merit of the scheme is that a quantum channel is established between Bob and Charlie, while classical communication is not allowed. In this report, we present a method for extending the original sequential state discrimination of two qubit states to a scheme for N linearly independent pure quantum states. Specifically, we obtain the conditions for the sequential state discrimination of N = 3 pure quantum states. We can provide these conditions analytically when there is a special symmetry among the N = 3 linearly independent pure quantum states. Additionally, we show that the scenario proposed in this study can be applied to quantum key distribution. Furthermore, we show that the sequential state discrimination of three qutrit states performs better than the strategy of probabilistic quantum cloning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faye, Sherry A.; Richards, Jason M.; Gallardo, Athena M.
Sequential extraction is a useful technique for assessing the potential to leach actinides from soils; however, current literature lacks uniformity in experimental details, making direct comparison of results impossible. This work continued development toward a standardized five-step sequential extraction protocol by analyzing extraction behaviors of 232Th, 238U, 239,240Pu and 241Am from lake and ocean sediment reference materials. Results produced a standardized procedure after creating more defined reaction conditions to improve method repeatability. A NaOH fusion procedure is recommended following sequential leaching for the complete dissolution of insoluble species.
Neel, Sean T
2014-11-01
A cost analysis was performed to evaluate the effect on physicians in the United States of a transition from delayed sequential cataract surgery to immediate sequential cataract surgery. Financial and efficiency impacts of this change were evaluated to determine whether efficiency gains could offset potential reduced revenue. A cost analysis using Medicare cataract surgery volume estimates, Medicare 2012 physician cataract surgery reimbursement schedules, and estimates of potential additional office visit revenue comparing immediate sequential cataract surgery with delayed sequential cataract surgery for a single specialty ophthalmology practice in West Tennessee. This model should give an indication of the effect on physicians on a national basis. A single specialty ophthalmology practice in West Tennessee was found to have a cataract surgery revenue loss of $126,000, increased revenue from office visits of $34,449 to $106,271 (minimum and maximum offset methods), and a net loss of $19,900 to $91,700 (base case) with the conversion to immediate sequential cataract surgery. Physicians likely stand to lose financially, and this loss cannot be offset by increased patient visits under the current reimbursement system. This may result in physician resistance to converting to immediate sequential cataract surgery, gaming, and supplier-induced demand.
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
SIMPLE: a sequential immunoperoxidase labeling and erasing method.
Glass, George; Papin, Jason A; Mandell, James W
2009-10-01
The ability to simultaneously visualize expression of multiple antigens in cells and tissues can provide powerful insights into cellular and organismal biology. However, standard methods are limited to the use of just two or three simultaneous probes and have not been widely adopted for routine use in paraffin-embedded tissue. We have developed a novel approach called sequential immunoperoxidase labeling and erasing (SIMPLE) that enables the simultaneous visualization of at least five markers within a single tissue section. Utilizing the alcohol-soluble peroxidase substrate 3-amino-9-ethylcarbazole, combined with a rapid non-destructive method for antibody-antigen dissociation, we demonstrate the ability to erase the results of a single immunohistochemical stain while preserving tissue antigenicity for repeated rounds of labeling. SIMPLE is greatly facilitated by the use of a whole-slide scanner, which can capture the results of each sequential stain without any information loss.
Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time
NASA Astrophysics Data System (ADS)
Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.
2017-12-01
We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize statistically such time-series by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time for different assumptions in knowledge of parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.
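As background on the algorithm family discussed here, the sketch below implements a plain bootstrap particle filter for a standard stochastic volatility model with independent innovations; the ARMA-correlated-innovation extensions of the paper are not reproduced, and all parameter values are illustrative.

import numpy as np

def bootstrap_pf(y, n_particles=1000, alpha=0.95, sigma=0.3, beta=0.7, seed=0):
    """Bootstrap SMC for the SV model:
       x_t = alpha*x_{t-1} + sigma*v_t,  y_t = beta*exp(x_t/2)*w_t,  v_t, w_t ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma / np.sqrt(1 - alpha**2), n_particles)  # stationary init
    means = []
    for yt in y:
        x = alpha * x + sigma * rng.normal(size=n_particles)         # propagate particles
        obs_sd = beta * np.exp(x / 2)
        logw = -0.5 * (yt / obs_sd) ** 2 - np.log(obs_sd)            # observation log-weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                                  # filtered E[x_t | y_1:t]
        x = x[rng.choice(n_particles, n_particles, p=w)]             # multinomial resampling
    return np.array(means)

# quick self-test on simulated data
rng = np.random.default_rng(1)
T, alpha, sigma, beta = 200, 0.95, 0.3, 0.7
x = np.zeros(T)
for t in range(1, T):
    x[t] = alpha * x[t-1] + sigma * rng.normal()
y = beta * np.exp(x / 2) * rng.normal(size=T)
print(np.corrcoef(bootstrap_pf(y), x)[0, 1])   # correlation of filtered and true log-volatility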
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
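The uncertainty-quantification idea described above, a normal prior on a large coefficient vector with posterior samples and credible intervals for predictions, can be illustrated with a conjugate Bayesian linear model. The sketch below uses a made-up Gaussian-bump basis and a fixed noise variance; it is not the OBSM basis or the paper's MCMC sampler.

import numpy as np

def bayes_linear_predict(Phi, y, Phi_new, tau2=10.0, sigma2=0.1, n_samples=2000, seed=0):
    """Posterior sampling for y = Phi @ beta + eps with beta ~ N(0, tau2*I),
    eps ~ N(0, sigma2*I); returns predictive mean and 95% credible bands."""
    rng = np.random.default_rng(seed)
    p = Phi.shape[1]
    prec = Phi.T @ Phi / sigma2 + np.eye(p) / tau2        # posterior precision matrix
    cov = np.linalg.inv(prec)
    mean = cov @ Phi.T @ y / sigma2                       # posterior mean of coefficients
    betas = rng.multivariate_normal(mean, cov, size=n_samples)
    preds = betas @ Phi_new.T                             # posterior predictive draws
    lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
    return preds.mean(axis=0), lo, hi

# overcomplete-style basis: many Gaussian bumps on [0, 1] (illustrative, not the OBSM basis)
x = np.linspace(0, 1, 40)
centers = np.linspace(0, 1, 60)
basis = lambda x: np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.01)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(1).normal(size=x.size)
mean, lo, hi = bayes_linear_predict(basis(x), y, basis(np.linspace(0, 1, 100)))
print(mean.shape, lo.shape, hi.shape)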
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
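The damped Newton iteration referred to here can be written generically as x_{k+1} = x_k - lambda_k J(x_k)^{-1} F(x_k), with the damping parameter lambda_k chosen so that the residual norm decreases. A minimal sketch with a simple backtracking rule, not tied to the rotorcraft trim equations:

import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Damped Newton iteration: halve the step until the residual norm decreases."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        step = np.linalg.solve(J(x), f)
        lam = 1.0
        while lam > 1e-4 and np.linalg.norm(F(x - lam * step)) >= np.linalg.norm(f):
            lam *= 0.5                     # backtracking on the damping parameter
        x = x - lam * step
    return x

# example: a mildly nonlinear 2x2 system with root near (1, 0)
F = lambda x: np.array([x[0] ** 3 - x[1] - 1.0, x[1] ** 3 - x[0] + 1.0])
J = lambda x: np.array([[3 * x[0] ** 2, -1.0], [-1.0, 3 * x[1] ** 2]])
print(damped_newton(F, J, [2.0, 2.0]))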
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
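The multivariate approach described, re-solving the model under joint parameter draws and recording how often each policy remains optimal, can be prototyped for a small MDP as below. The sketch assumes a toy two-state, two-action problem solved by plain value iteration and is not the authors' case study; the distributions and parameter values are invented for illustration.

import numpy as np

def optimal_policy(P, R, gamma=0.95, iters=500):
    """Value iteration for an MDP with P[a, s, s'] transition probs and R[a, s] rewards."""
    n_states = P.shape[1]
    V = np.zeros(n_states)
    for _ in range(iters):
        V = (R + gamma * P @ V).max(axis=0)
    Q = R + gamma * P @ V
    return Q.argmax(axis=0)

rng = np.random.default_rng(0)
R = np.array([[1.0, 0.0], [0.8, 0.5]])              # base-case rewards (hypothetical)
policy_counts = {}
for _ in range(1000):                               # joint parameter uncertainty: perturb inputs
    p_stay = rng.beta(8, 2, size=2)                 # uncertain transition probabilities
    P = np.array([[[p_stay[0], 1 - p_stay[0]], [1 - p_stay[0], p_stay[0]]],
                  [[p_stay[1], 1 - p_stay[1]], [1 - p_stay[1], p_stay[1]]]])
    pol = tuple(optimal_policy(P, R + rng.normal(0, 0.05, R.shape)))
    policy_counts[pol] = policy_counts.get(pol, 0) + 1
# "confidence" in each policy = fraction of parameter draws for which it is optimal
print({k: v / 1000 for k, v in policy_counts.items()})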
Liu, Zhao; Zhu, Yunhong; Wu, Chenxue
2016-01-01
Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
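The core computation, normalizing a transition count matrix and raising it to the power n to obtain n-step arrival probabilities, is shown below on hypothetical location codes; the rough/accurate path-probability variants of the paper are not reproduced.

import numpy as np

def n_step_matrix(transitions, n_locations, n_steps):
    """Build a row-normalized transition matrix from (from, to) pairs mined from
    sequential rules, then return the n-step transition probabilities."""
    counts = np.zeros((n_locations, n_locations))
    for a, b in transitions:
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # rows with no observed transitions fall back to a uniform distribution
    P = np.divide(counts, row_sums, out=np.full_like(counts, 1.0 / n_locations),
                  where=row_sums > 0)
    return np.linalg.matrix_power(P, n_steps)

# rough prediction: probability of reaching location 3 from location 0 in 4 steps
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (2, 0), (3, 3), (1, 3)]   # hypothetical mined rules
P4 = n_step_matrix(pairs, n_locations=4, n_steps=4)
print(P4[0, 3])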
Effects of scalding method and sequential tanks on broiler processing wastewater loadings
USDA-ARS?s Scientific Manuscript database
The effects of scalding time and temperature, and sequential scalding tanks was evaluated based on impact to poultry processing wastewater (PPW) stream loading rates following the slaughter of commercially raised broilers. On 3 separate weeks (trials), broilers were obtained following feed withdrawa...
Phosphorus concentrations in sequentially fractionated soil samples as affected by digestion methods
USDA-ARS?s Scientific Manuscript database
Sequential fractionation has been used for several decades for improving our understanding on the effects of agricultural practices and management on the lability and bioavailability of phosphorus in soil, manure, and other soil amendments. Nevertheless, there have been no reports on how manipulatio...
THRESHOLD ELEMENTS AND THE DESIGN OF SEQUENTIAL SWITCHING NETWORKS.
The report covers research performed from March 1966 to March 1967. The major topics treated are: (1) methods for finding weight-threshold vectors...that realize a given switching function in multi-threshold linear logic; (2) synthesis of sequential machines by means of shift registers and simple
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema points of the metamodels and the minimum points of a density function. In this way, increasingly accurate metamodels are constructed. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by over three times compared with the conventional algorithm, and the image quality is well guaranteed. PMID:23424608
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment and constraint updates is repeated in the RBSO until the reliability requirements of constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.
Sequential infiltration synthesis for enhancing multiple-patterning lithography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih
Simplified methods of multiple-patterning photolithography using sequential infiltration synthesis to modify the photoresist such that it withstands plasma etching better than unmodified resist and replaces one or more hard masks and/or a freezing step in MPL processes including litho-etch-litho-etch photolithography or litho-freeze-litho-etch photolithography.
Phosphorus concentrations in sequentially fractionated soil samples as affected by digestion methods
USDA-ARS?s Scientific Manuscript database
Sequential fractionation has been used for several decades for improving our understanding on the effects of agricultural practices and management on the lability and bioavailability of P in soil, manure, and other soil amendments. Nevertheless, there have been no reports on how manipulation of diff...
The Motivating Language of Principals: A Sequential Transformative Strategy
ERIC Educational Resources Information Center
Holmes, William Tobias
2012-01-01
This study implemented a Sequential Transformative Mixed Methods design with teachers (as recipients) and principals (to give voice) in the examination of principal talk in two different school accountability contexts (Continuously Improving and Continuously Zigzag) using the conceptual framework of Motivating Language Theory. In phase one,…
Makrakis, Vassilios; Kostoulas-Makrakis, Nelly
2016-02-01
Quantitative and qualitative approaches to planning and evaluation in education for sustainable development have often been treated by practitioners from a single research paradigm. This paper discusses the utility of mixed method evaluation designs which integrate qualitative and quantitative data through a sequential transformative process. Sequential mixed method data collection strategies involve collecting data in an iterative process whereby data collected in one phase contribute to data collected in the next. This is done through examples from a programme addressing the 'Reorientation of University Curricula to Address Sustainability (RUCAS): A European Commission Tempus-funded Programme'. It is argued that the two approaches are complementary and that there are significant gains from combining both. Using methods from both research paradigms does not, however, mean that the inherent differences among epistemologies and methodologies should be neglected. Based on this experience, it is recommended that using a sequential transformative mixed method evaluation can produce more robust results than could be accomplished using a single approach in programme planning and evaluation focussed on education for sustainable development. Copyright © 2015 Elsevier Ltd. All rights reserved.
Method of Real-Time Principal-Component Analysis
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu
2005-01-01
Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
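Gradient-descent-based sequential PCA of the kind DOGEDYN improves upon can be illustrated with a Sanger/Oja-style Hebbian update that extracts components one sample at a time. The sketch below shows that generic predecessor-style algorithm, not DOGEDYN's dominant-element or dynamic-initial-learning-rate scheme.

import numpy as np

def sequential_pca(X, n_components=2, lr=0.01, epochs=20, seed=0):
    """Sanger's rule (generalized Hebbian algorithm): learn principal components
    sequentially from one sample at a time, without forming a covariance matrix."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_components, d))
    for _ in range(epochs):
        for x in X:
            y = W @ x
            # the lower-triangular term deflates earlier components from later ones
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W / np.linalg.norm(W, axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])   # synthetic data
X -= X.mean(axis=0)
W = sequential_pca(X)
# compare against batch PCA directions from the SVD
_, _, Vt = np.linalg.svd(X, full_matrices=False)
print(np.abs(W @ Vt[:2].T).round(2))   # ~identity up to sign if learning converged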
Paliwoda, Rebecca E; Li, Feng; Reid, Michael S; Lin, Yanwen; Le, X Chris
2014-06-17
Functionalizing nanomaterials for diverse analytical, biomedical, and therapeutic applications requires determination of surface coverage (or density) of DNA on nanomaterials. We describe a sequential strand displacement beacon assay that is able to quantify specific DNA sequences conjugated or coconjugated onto gold nanoparticles (AuNPs). Unlike the conventional fluorescence assay that requires the target DNA to be fluorescently labeled, the sequential strand displacement beacon method is able to quantify multiple unlabeled DNA oligonucleotides using a single (universal) strand displacement beacon. This unique feature is achieved by introducing two short unlabeled DNA probes for each specific DNA sequence and by performing sequential DNA strand displacement reactions. Varying the relative amounts of the specific DNA sequences and spacing DNA sequences during their coconjugation onto AuNPs results in different densities of the specific DNA on AuNP, ranging from 90 to 230 DNA molecules per AuNP. Results obtained from our sequential strand displacement beacon assay are consistent with those obtained from the conventional fluorescence assays. However, labeling of DNA with some fluorescent dyes, e.g., tetramethylrhodamine, alters DNA density on AuNP. The strand displacement strategy overcomes this problem by obviating direct labeling of the target DNA. This method has broad potential to facilitate more efficient design and characterization of novel multifunctional materials for diverse applications.
Xie, Yian; Shao, Feng; Wang, Yaoming; Xu, Tao; Wang, Deliang; Huang, Fuqiang
2015-06-17
Sequential deposition is a widely adopted method to prepare CH3NH3PbI3 on mesostructured TiO2 electrode for organic lead halide perovskite solar cells. However, this method often suffers from the uncontrollable crystal size, surface morphology, and residual PbI2 in the resulting CH3NH3PbI3, which are all detrimental to the device performance. We herein present an optimized sequential solution deposition method by introducing different amount of CH3NH3I in PbI2 precursor solution in the first step to prepare CH3NH3PbI3 absorber on mesoporous TiO2 substrates. The addition of CH3NH3I in PbI2 precursor solution can affect the crystallization and composition of PbI2 raw films, resulting in the variation of UV-vis absorption and surface morphology. Proper addition of CH3NH3I not only enhances the absorption but also improves the efficiency of CH3NH3PbI3 solar cells from 11.13% to 13.37%. Photoluminescence spectra suggest that the improvement of device performance is attributed to the decrease of recombination rate of carriers in CH3NH3PbI3 absorber. This current method provides a highly repeatable route for enhancing the efficiency of CH3NH3PbI3 solar cell in the sequential solution deposition method.
Yang, Sejung; Park, Junhee; Lee, Hanuel; Kim, Soohyun; Lee, Byung-Uk; Chung, Kee-Yang; Oh, Byungho
2016-01-01
Photographs of skin wounds carry the most important information during secondary intention healing (SIH); however, there is no standard method for handling and analyzing those images efficiently and conveniently. The aim was to investigate the sequential changes of SIH depending on body site using a color patch method. We performed retrospective reviews of 30 patients (11 facial and 19 non-facial areas) who underwent SIH for the restoration of skin defects and captured sequential photographs with a color patch specially designed for automatically calculating defect and scar sizes. Using the novel image analysis method with a color patch, skin defects were calculated more accurately (range of error rate: -3.39% to +3.05%). All patients had a smaller scar size than the original defect size after SIH treatment (rates of decrease: 18.8% to 86.1%), and the facial area showed a significantly higher decrease rate than the non-facial areas such as scalp and extremities (67.05 ± 12.48 vs. 53.29 ± 18.11, P < 0.05). Estimating the date corresponding to half of the final decrement, all facial areas showed this improvement within two weeks (8.45 ± 3.91 days), whereas non-facial areas needed 14.33 ± 9.78 days. From these results on the sequential changes of skin defects, SIH can be recommended as an alternative treatment method for restoration, with more careful dressing during the initial two weeks.
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis
2015-08-01
Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
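Parameter screening of this general flavor can be prototyped with Morris-style elementary effects, which likewise needs on the order of ten model runs per parameter; the sketch below shows that standard technique on a toy model and is not the specific iterative screening scheme applied to mHM.

import numpy as np

def elementary_effects(model, n_params, n_trajectories=10, delta=0.1, seed=0):
    """Morris screening: mean absolute elementary effect per parameter,
    using ~n_trajectories * (n_params + 1) model evaluations."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_trajectories, n_params))
    for t in range(n_trajectories):
        x = rng.uniform(0, 1 - delta, n_params)
        y0 = model(x)
        for i in rng.permutation(n_params):        # perturb one parameter at a time
            x_new = x.copy()
            x_new[i] += delta
            y1 = model(x_new)
            effects[t, i] = (y1 - y0) / delta
            x, y0 = x_new, y1
    return np.abs(effects).mean(axis=0)            # mu* ranks parameter informativeness

# toy model: only 3 of 10 parameters are informative
model = lambda p: 5 * p[0] + 2 * p[3] * p[5] + 0.01 * p.sum()
print(elementary_effects(model, n_params=10).round(3))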
NASA Astrophysics Data System (ADS)
Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis
2016-04-01
Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
Space-Time Fluid-Structure Interaction Computation of Flapping-Wing Aerodynamics
2013-12-01
We use the ST-VMS method in combination with the ST-SUPS method. The structural mechanics computations are based on the Kirchhoff-Love shell model. We use a sequential coupling technique, which is applicable to some classes of FSI problems.
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H
2005-07-01
Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damiani, Rick; Wendt, Fabian; Musial, Walter
The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.
ERIC Educational Resources Information Center
Abikoff, Howard; McGough, James; Vitiello, Benedetto; McCracken, James; Davies, Mark; Walkup, John; Riddle, Mark; Oatis, Melvin; Greenhill, Laurence; Skrobala, Anne; March, John; Gammon, Pat; Robinson, James; Lazell, Robert; McMahon, Donald J.; Ritz, Louise
2005-01-01
Objective: Attention-deficit/hyperactivity disorder (ADHD) is often accompanied by clinically significant anxiety, but few empirical data guide treatment of children meeting full DSM-IV criteria for ADHD and anxiety disorders (ADHD/ANX). This study examined the efficacy of sequential pharmacotherapy for ADHD/ANX children. Method: Children, age 6…
The Sequential Probability Ratio Test and Binary Item Response Models
ERIC Educational Resources Information Center
Nydick, Steven W.
2014-01-01
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
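For reference, the SPRT decision rule compares a cumulative log-likelihood ratio to the Wald boundaries log(beta/(1-alpha)) and log((1-beta)/alpha). A minimal sketch for Bernoulli responses under two hypothesized success probabilities, standing in for the IRT response probabilities at the classification bound:

import math

def sprt(responses, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on a stream of 0/1 responses."""
    lower, upper = math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)
    llr = 0.0
    for n, x in enumerate(responses, start=1):
        llr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n     # classify above the bound
        if llr <= lower:
            return "accept H0", n     # classify below the bound
    return "continue testing", len(responses)

# examinee answering mostly correctly, classification bound between p0=0.5 and p1=0.7
print(sprt([1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1], p0=0.5, p1=0.7))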
Terminating Sequential Delphi Survey Data Collection
ERIC Educational Resources Information Center
Kalaian, Sema A.; Kasim, Rafa M.
2012-01-01
The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through a well-designed and systematic multiple sequential rounds of survey administrations. Each of the multiple rounds of the Delphi survey…
Monte Carlo Simulation of Sudden Death Bearing Testing
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not of the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or to the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
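A Monte Carlo comparison of this kind can be prototyped by drawing virtual lives from a two-parameter Weibull distribution and, for each sudden-death group, recording only the first failure and the accumulated test time. The sketch below uses assumed Weibull parameters and group sizes and is not the NASA 50-mm bearing study itself:

import numpy as np

def sudden_death_trial(n_groups=12, group_size=6, l10=100.0, slope=1.5, seed=0):
    """Draw Weibull lives, run each group until its first failure (sudden death),
    and return the first-failure lives plus the total accumulated test time."""
    rng = np.random.default_rng(seed)
    eta = l10 / (-np.log(0.9)) ** (1 / slope)          # characteristic life from L10
    lives = eta * rng.weibull(slope, size=(n_groups, group_size))
    first_failures = lives.min(axis=1)
    # every bearing in a group runs only until that group's first failure
    test_time = (first_failures[:, None] * np.ones(group_size)).sum()
    return first_failures, test_time

fails, hours = sudden_death_trial()
print(f"median first-failure life {np.median(fails):.1f}, total test time {hours:.0f}")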
PC_Eyewitness and the sequential superiority effect: computer-based lineup administration.
MacLin, Otto H; Zimmerman, Laura A; Malpass, Roy S
2005-06-01
Computer technology has become an increasingly important tool for conducting eyewitness identifications. In the area of lineup identifications, computerized administration offers several advantages for researchers and law enforcement. PC_Eyewitness is designed specifically to administer lineups. To assess this new lineup technology, two studies were conducted in order to replicate the results of previous studies comparing simultaneous and sequential lineups. One hundred twenty university students participated in each experiment. Experiment 1 used traditional paper-and-pencil lineup administration methods to compare simultaneous to sequential lineups. Experiment 2 used PC_Eyewitness to administer simultaneous and sequential lineups. The results of these studies were compared to the meta-analytic results reported by N. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001). No differences were found between paper-and-pencil and PC_Eyewitness lineup administration methods. The core findings of the N. Steblay et al. (2001) meta-analysis were replicated by both administration procedures. These results show that computerized lineup administration using PC_Eyewitness is an effective means for gathering eyewitness identification data.
ERIC Educational Resources Information Center
Cunningham, Jennifer L.
2013-01-01
The purpose of this sequential, explanatory mixed methods research study was to understand what factors influenced African American maternal intentions to get their daughters aged 9 years to 12 years vaccinated in Alabama. In the first, quantitative phase of the study, the research questions focused on identifying the predictive power of eleven…
Backfilled, self-assembled monolayers and methods of making same
Fryxell, Glen E [Kennewick, WA; Zemanian, Thomas S [Richland, WA; Addleman, R Shane [Benton City, WA; Aardahl, Christopher L [Sequim, WA; Zheng, Feng [Richland, WA; Busche, Brad [Raleigh, NC; Egorov, Oleg B [West Richland, WA
2009-06-30
Backfilled, self-assembled monolayers and methods of making the same are disclosed. The self-assembled monolayer comprises at least one functional organosilane species and a substantially random dispersion of at least one backfilling organosilane species among the functional organosilane species, wherein the functional and backfilling organosilane species have been sequentially deposited on a substrate. The method comprises depositing sequentially a first organosilane species followed by a backfilling organosilane species, and employing a relaxation agent before or during deposition of the backfilling organosilane species, wherein the first and backfilling organosilane species are substantially randomly dispersed on a substrate.
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
Li, Ke; Ping, Xueliang; Wang, Huaqing; Chen, Peng; Cao, Yi
2013-06-21
A novel intelligent fault diagnosis method for motor roller bearings which operate under unsteady rotating speed and load is proposed in this paper. The pseudo Wigner-Ville distribution (PWVD) and the relative crossing information (RCI) methods are used for extracting the feature spectra from the non-stationary vibration signal measured for condition diagnosis. The RCI is used to automatically extract the feature spectrum from the time-frequency distribution of the vibration signal. The extracted feature spectrum is instantaneous, and not correlated with the rotation speed and load. By using the ant colony optimization (ACO) clustering algorithm, the synthesizing symptom parameters (SSP) for condition diagnosis are obtained. The experimental results show that the diagnostic sensitivity of the SSP is higher than that of the original symptom parameter (SP), and the SSP can sensitively reflect the characteristics of the feature spectrum for precise condition diagnosis. Finally, a fuzzy diagnosis method based on sequential inference and possibility theory is also proposed, by which the conditions of the machine can be identified sequentially as well.
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-12
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
Li, Ke; Ping, Xueliang; Wang, Huaqing; Chen, Peng; Cao, Yi
2013-01-01
A novel intelligent fault diagnosis method for motor roller bearings which operate under unsteady rotating speed and load is proposed in this paper. The pseudo Wigner-Ville distribution (PWVD) and the relative crossing information (RCI) methods are used for extracting the feature spectra from the non-stationary vibration signal measured for condition diagnosis. The RCI is used to automatically extract the feature spectrum from the time-frequency distribution of the vibration signal. The extracted feature spectrum is instantaneous, and not correlated with the rotation speed and load. By using the ant colony optimization (ACO) clustering algorithm, the synthesizing symptom parameters (SSP) for condition diagnosis are obtained. The experimental results show that the diagnostic sensitivity of the SSP is higher than that of the original symptom parameter (SP), and that the SSP can sensitively reflect the characteristics of the feature spectrum for precise condition diagnosis. Finally, a fuzzy diagnosis method based on sequential inference and possibility theory is also proposed, by which the conditions of the machine can be identified sequentially as well. PMID:23793021
NASA Astrophysics Data System (ADS)
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-01
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
NASA Astrophysics Data System (ADS)
Lee, Han Sang; Kim, Hyeun A.; Kim, Hyeonjin; Hong, Helen; Yoon, Young Cheol; Kim, Junmo
2016-03-01
In spite of its clinical importance in the diagnosis of osteoarthritis, segmentation of cartilage in knee MRI remains a challenging task due to its shape variability and low contrast with surrounding soft tissues and synovial fluid. In this paper, we propose a multi-atlas segmentation of cartilage in knee MRI with sequential atlas registrations and locally-weighted voting (LWV). First, bone is segmented by sequential volume- and object-based registrations and LWV. Second, to overcome the shape variability of cartilage, cartilage is segmented by bone-mask-based registration and LWV. In experiments, the proposed method improved the bone segmentation by reducing misclassified bone regions, and enhanced the cartilage segmentation by preventing cartilage leakage into surrounding regions of similar intensity, with the help of sequential registrations and LWV.
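As an illustration of the label-fusion step only, here is a minimal locally-weighted voting sketch in Python; the Gaussian intensity weighting, the toy arrays and the sigma parameter are assumptions, and the registration steps described above are not reproduced.

```python
# Minimal locally-weighted voting (LWV) sketch: each registered atlas votes per voxel,
# weighted by local intensity similarity to the target image. Data are synthetic.
import numpy as np

def locally_weighted_voting(target, atlas_images, atlas_labels, sigma=10.0):
    labels = np.unique(np.concatenate([l.ravel() for l in atlas_labels]))
    votes = np.zeros((len(labels),) + target.shape)
    for img, lab in zip(atlas_images, atlas_labels):
        w = np.exp(-((target - img) ** 2) / (2.0 * sigma ** 2))  # voxel-wise weight
        for k, label in enumerate(labels):
            votes[k] += w * (lab == label)
    return labels[np.argmax(votes, axis=0)]

# Toy usage with random "registered" atlases.
rng = np.random.default_rng(1)
target = rng.normal(size=(8, 8, 8))
atlas_images = [target + rng.normal(scale=0.3, size=target.shape) for _ in range(3)]
atlas_labels = [(img > 0).astype(int) for img in atlas_images]
fused = locally_weighted_voting(target, atlas_images, atlas_labels)
```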
ChIP-re-ChIP: Co-occupancy Analysis by Sequential Chromatin Immunoprecipitation.
Beischlag, Timothy V; Prefontaine, Gratien G; Hankinson, Oliver
2018-01-01
Chromatin immunoprecipitation (ChIP) exploits the specific interactions between DNA and DNA-associated proteins. It can be used to examine a wide range of experimental parameters. A number of proteins bound at the same genomic location can identify a multi-protein chromatin complex where several proteins work together to regulate gene transcription or chromatin configuration. In many instances, this can be achieved using sequential ChIP; or simply, ChIP-re-ChIP. Whether it is for the examination of specific transcriptional or epigenetic regulators, or for the identification of cistromes, the ability to perform a sequential ChIP adds a higher level of power and definition to these analyses. In this chapter, we describe a simple and reliable method for the sequential ChIP assay.
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.
2012-04-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
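A minimal sketch of the flavor of detector described above, assuming Gaussian features with a known pre-damage mean and an unknown post-damage mean estimated by maximum likelihood as data arrive; the CUSUM-style recursion, threshold and simulated feature sequence are illustrative assumptions, not the authors' algorithm.

```python
# Sequential change-point detection with the post-change mean estimated on-line (ML).
# All parameters and the simulated feature sequence are illustrative.
import numpy as np

def sequential_detector(x, mu0=0.0, sigma=1.0, threshold=10.0):
    """Return the first index at which the CUSUM-type statistic crosses the threshold."""
    stat = 0.0
    post_sum, post_n = 0.0, 0
    for t, xt in enumerate(x):
        post_sum += xt
        post_n += 1
        mu1 = post_sum / post_n              # ML estimate of the (unknown) post-damage mean
        # Log-likelihood ratio of "changed" vs "unchanged" for the current sample.
        llr = ((xt - mu0) ** 2 - (xt - mu1) ** 2) / (2.0 * sigma ** 2)
        stat = max(0.0, stat + llr)          # CUSUM recursion
        if stat > threshold:
            return t
        if stat == 0.0:                      # restart the post-change estimate
            post_sum, post_n = 0.0, 0
    return None

rng = np.random.default_rng(2)
signal = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 100)])
print("alarm raised at sample:", sequential_detector(signal))
```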
Liu, Ying; ZENG, Donglin; WANG, Yuanjia
2014-01-01
Dynamic treatment regimens (DTRs) are sequential decision rules tailored at each point where a clinical decision is made, based on each patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
Removal of Co(II) from waste water using dry cow dung powder : a green ambrosia to soil
NASA Astrophysics Data System (ADS)
Bagla, Hemlata; Khilnani, Roshan
2015-04-01
Co(II) is one of the hazardous products found in waste streams. Anthropogenic activities are major sources of Co(II) in our environment. Some well-established processes such as chemical precipitation, membrane processes, liquid extraction and ion exchange have been applied as tools for the removal of this metal ion [1]. None of the above methods is considered green, owing to shortcomings such as incomplete metal ion removal, high requirements of energy and reagents, and generation of toxic sludge or other waste materials which in turn require further treatment for their cautious disposal. The present investigation entails the application of dry cow dung powder (DCP) as an indigenous, inexpensive and eco-friendly material for the removal of Co(II) from aqueous medium. DCP is a naturally available, bio-organic, complex, polymorphic humified fecal matter of the cow and is enriched with minerals, carbohydrates, fats, proteins, bile pigments, and aliphatic-aromatic species such as humic acid (HA), fulvic acid and ulmic acid [2,3]. Batch biosorption experiments were conducted employing 60Co(II) as a tracer, and the effects of various process parameters such as pH (1-8), temperature (283-363 K), amount of biosorbent (5-40 g/L), time of equilibration (0-30 min), agitation speed (0-4000 rpm), concentration of initial metal ions (0.5-20 mg/mL) and the interfering effect of different organic as well as inorganic salts were studied. Kinetic studies were carried out employing various models; the best fit was given by the Lagergren pseudo-second-order model [4], with a high correlation coefficient R2 value of 0.999 and an adsorption capacity of 2.31 mg/g. The thermodynamic parameters for biosorption were also evaluated and indicated a spontaneous and exothermic process with high affinity of DCP for Co(II). Many naturally available materials are used for biosorption of hazardous metal pollutants, where most of them are physically or chemically modified. In this research work, DCP has been utilized without pre- or post-chemical treatment. Thus it manifests the principle of green chemistry and proves to be an eco-friendly biosorbent. References 1. N.S. Barot, H.K. Bagla, Biosorption of radiotoxic 90Sr by green adsorbent: dry cow dung powder, Journal of Radioanalytical and Nuclear Chemistry, 294, pp. 81-86, (2012). 2. N.S. Barot, H.K. Bagla, Eco-friendly waste water treatment by cow dung powder (adsorption studies of Cr(III), Cr(VI) and Cd(II) using tracer technique), Desalination & Water Treatment, 38(1-3), pp. 104-113, (2012). 3. N.S. Barot, R.P. Khilnani, H.K. Bagla, Biosorptive profile of synthetic and natural humiresin for the remediation of metallic water pollutants, Journal of Radioanalytical and Nuclear Chemistry, 301(1), pp. 1-9, (2014). 4. S. Lagergren, Zur Theorie der sogenannten Adsorption gelöster Stoffe, Kungliga Svenska Vetenskapsakademiens Handlingar, 24(4), pp. 1-39, (1898).
A three-dimensional quality-guided phase unwrapping method for MR elastography
NASA Astrophysics Data System (ADS)
Wang, Huifang; Weaver, John B.; Perreard, Irina I.; Doyley, Marvin M.; Paulsen, Keith D.
2011-07-01
Magnetic resonance elastography (MRE) uses accumulated phases that are acquired at multiple, uniformly spaced relative phase offsets, to estimate harmonic motion information. Heavily wrapped phase occurs when the motion is large and unwrapping procedures are necessary to estimate the displacements required by MRE. Two unwrapping methods were developed and compared in this paper. The first method is a sequentially applied approach. The three-dimensional MRE phase image block for each slice was processed by two-dimensional unwrapping followed by a one-dimensional phase unwrapping approach along the phase-offset direction. This unwrapping approach generally works well for low noise data. However, there are still cases where the two-dimensional unwrapping method fails when noise is high. In this case, the baseline of the corrupted regions within an unwrapped image will not be consistent. Instead of separating the two-dimensional and one-dimensional unwrapping in a sequential approach, an interleaved three-dimensional quality-guided unwrapping method was developed to combine both the two-dimensional phase image continuity and one-dimensional harmonic motion information. The quality of one-dimensional harmonic motion unwrapping was used to guide the three-dimensional unwrapping procedures and it resulted in stronger guidance than in the sequential method. In this work, in vivo results generated by the two methods were compared.
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Hodjatzadeh, M.; Samii, M. V.; Doll, C. E.; Hart, R. C.; Mistretta, G. D.
1991-01-01
The development of the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination on a Disk Operating System (DOS) based Personal Computer (PC) is addressed. The results of a study to compare the orbit determination accuracy of a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), are addressed. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for the Earth Radiation Budget Satellite (ERBS); the maximum solution differences were less than 25 m after the filter had reached steady state.
A generic motif discovery algorithm for sequential data.
Jensen, Kyle L; Styczynski, Mark P; Rigoutsos, Isidore; Stephanopoulos, Gregory N
2006-01-01
Motif discovery in sequential data is a problem of great interest and with many applications. However, previous methods have been unable to combine exhaustive search with complex motif representations and are each typically only applicable to a certain class of problems. Here we present a generic motif discovery algorithm (Gemoda) for sequential data. Gemoda can be applied to any dataset with a sequential character, including both categorical and real-valued data. As we show, Gemoda deterministically discovers motifs that are maximal in composition and length. As well, the algorithm allows any choice of similarity metric for finding motifs. Finally, Gemoda's output motifs are representation-agnostic: they can be represented using regular expressions, position weight matrices or any number of other models for any type of sequential data. We demonstrate a number of applications of the algorithm, including the discovery of motifs in amino acid sequences, a new solution to the (l,d)-motif problem in DNA sequences and the discovery of conserved protein substructures. Gemoda is freely available at http://web.mit.edu/bamel/gemoda
Robust inference for group sequential trials.
Ganju, Jitendra; Lin, Yunzhi; Zhou, Kefei
2017-03-01
For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic for each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and there is loss in power if the assumptions that ensure optimality for each analysis are not met. To minimize the risk of moderate to substantial loss in power due to a suboptimal choice of a statistic, a robust method was developed for nonsequential trials. The concept is analogous to diversification of financial investments to minimize risk. The method is based on combining P values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of 2 P value combining methods for group sequential trials. The emphasis is on time-to-event trials, although results from less complex trials are also included. The gain or loss in power with the combination method relative to a single statistic is asymmetric in its favor. Depending on the power of each individual test, the combination method can give more power than any single test or give power that is closer to the test with the most power. The versatility of the method is that it can combine P values from different test statistics for analysis at different times. The robustness of results suggests that inference from group sequential trials can be strengthened with the use of combined tests. Copyright © 2017 John Wiley & Sons, Ltd.
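As a simple illustration of combining P values from several test statistics, the sketch below uses Fisher's method, which assumes independent P values; the paper's own combination rule, its handling of correlated statistics, and the group-sequential boundaries are not reproduced.

```python
# Fisher's combination of P values, shown here only to illustrate the idea of fusing
# evidence from several statistics; it assumes the P values are independent.
import math
from scipy import stats

def fisher_combined_p(p_values):
    """Combine independent P values into a single P value via Fisher's method."""
    chi2_stat = -2.0 * sum(math.log(p) for p in p_values)
    return stats.chi2.sf(chi2_stat, df=2 * len(p_values))

# e.g. P values from a log-rank and a Wilcoxon-type statistic at one interim look.
# scipy.stats.combine_pvalues(..., method="fisher") computes the same quantity.
print(fisher_combined_p([0.03, 0.12]))
```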
Parallel heuristics for scalable community detection
Lu, Hao; Halappanavar, Mahantesh; Kalyanaraman, Ananth
2015-08-14
Community detection has become a fundamental operation in numerous graph-theoretic applications. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose heuristics that are designed to break the sequential barrier. For evaluation purposes, we implemented our heuristics using OpenMP multithreading, and tested them over real-world graphs derived from multiple application domains. Compared to the serial Louvain implementation, our parallel implementation is able to produce community outputs with a higher modularity for most of the inputs tested, in a comparable number of iterations or fewer, while providing real speedups of up to 16x using 32 threads.
A Pocock Approach to Sequential Meta-Analysis of Clinical Trials
ERIC Educational Resources Information Center
Shuster, Jonathan J.; Neu, Josef
2013-01-01
Three recent papers have provided sequential methods for meta-analysis of two-treatment randomized clinical trials. This paper provides an alternate approach that has three desirable features. First, when carried out prospectively (i.e., we only have the results up to the time of our current analysis), we do not require knowledge of the…
ERIC Educational Resources Information Center
Economou, A.; Tzanavaras, P. D.; Themelis, D. G.
2005-01-01
Sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise and efficient manner. The experiments using SIA fit well in the course of Instrumental Chemical Analysis and especially in the section on Automatic Methods of analysis provided by chemistry…
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
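The sketch below illustrates the basic SIR cycle (predict, weight, resample) for a scalar state; the growth model, noise levels, and observation model are invented for illustration and are not the stand-yield model discussed above.

```python
# Minimal sampling importance resampling (SIR) particle filter for a scalar state.
# The 'predictor' step propagates particles; the weighting and resampling act as the
# 'corrector'. All model parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_steps = 1000, 20

truth = 10.0
particles = rng.normal(10.0, 2.0, n_particles)        # prior sample of the state
estimates = []
for _ in range(n_steps):
    truth = 1.02 * truth + rng.normal(0, 0.1)          # true (hidden) evolution
    obs = truth + rng.normal(0, 0.5)                   # noisy observation

    particles = 1.02 * particles + rng.normal(0, 0.1, n_particles)   # predictor step
    weights = np.exp(-0.5 * ((obs - particles) / 0.5) ** 2)          # importance weights
    weights /= weights.sum()

    idx = rng.choice(n_particles, size=n_particles, p=weights)       # resampling (corrector)
    particles = particles[idx]
    estimates.append(particles.mean())

print("final estimate vs truth:", estimates[-1], truth)
```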
ERIC Educational Resources Information Center
Penteado, Jose C.; Masini, Jorge Cesar
2011-01-01
The influence of solvent strength, determined by the addition of a mobile-phase organic modifier, and of pH on the chromatographic separation of sorbic acid and vanillin has been investigated by the relatively new technique of liquid sequential injection chromatography (SIC). This technique uses a reversed-phase monolithic stationary phase to execute fast…
Lexical and Grammatical Associations in Sequential Bilingual Preschoolers
ERIC Educational Resources Information Center
Kohnert, Kathryn; Kan, Pui Fong; Conboy, Barbara T.
2010-01-01
Purpose: The authors investigated potential relationships between traditional linguistic domains (words, grammar) in the first (L1) and second (L2) languages of young sequential bilingual preschool children. Method: Participants were 19 children, ages 2;11 (years;months) to 5;2 (M = 4;3) who began learning Hmong as the L1 from birth and English as…
ERIC Educational Resources Information Center
Ramaswamy, Ravishankar; Dix, Edward F.; Drew, Janet E.; Diamond, James J.; Inouye, Sharon K.; Roehl, Barbara J. O.
2011-01-01
Purpose of the Study: Delirium is a widespread concern for hospitalized seniors, yet is often unrecognized. A comprehensive and sequential intervention (CSI) aiming to effect change in clinician behavior by improving knowledge about delirium was tested. Design and Methods: A 2-day CSI program that consisted of progressive 4-part didactic series,…
Jacobs, Ian E.; Aasen, Erik W.; Oliveira, Julia L.; ...
2016-03-23
Doping polymeric semiconductors often drastically reduces the solubility of the polymer, leading to difficulties in processing doped films. Here, we compare optical, electrical, and morphological properties of P3HT films doped with F4TCNQ, both from mixed solutions and using sequential solution processing with orthogonal solvents. We demonstrate that sequential doping occurs rapidly (<1 s), and that the film doping level can be precisely controlled by varying the concentration of the doping solution. Furthermore, the choice of sequential doping solvent controls whether dopant anions are included or excluded from polymer crystallites. Atomic force microscopy (AFM) reveals that sequential doping produces significantly more uniform films on the nanoscale than the mixed-solution method. In addition, we show that mixed-solution doping induces the formation of aggregates even at low doping levels, resulting in drastic changes to film morphology. Sequentially coated films show 3–15 times higher conductivities at a given doping level than solution-doped films, with sequentially doped films processed to exclude dopant anions from polymer crystallites showing the highest conductivities. In conclusion, we propose a mechanism for doping induced aggregation in which the shift of the polymer HOMO level upon aggregation couples ionization and solvation energies. To show that the methodology is widely applicable, we demonstrate that several different polymer:dopant systems can be prepared by sequential doping.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs, Ian E.; Aasen, Erik W.; Oliveira, Julia L.
Doping polymeric semiconductors often drastically reduces the solubility of the polymer, leading to difficulties in processing doped films. Here, we compare optical, electrical, and morphological properties of P3HT films doped with F4TCNQ, both from mixed solutions and using sequential solution processing with orthogonal solvents. We demonstrate that sequential doping occurs rapidly (<1 s), and that the film doping level can be precisely controlled by varying the concentration of the doping solution. Furthermore, the choice of sequential doping solvent controls whether dopant anions are included or excluded from polymer crystallites. Atomic force microscopy (AFM) reveals that sequential doping produces significantly more uniform films on the nanoscale than the mixed-solution method. In addition, we show that mixed-solution doping induces the formation of aggregates even at low doping levels, resulting in drastic changes to film morphology. Sequentially coated films show 3–15 times higher conductivities at a given doping level than solution-doped films, with sequentially doped films processed to exclude dopant anions from polymer crystallites showing the highest conductivities. In conclusion, we propose a mechanism for doping induced aggregation in which the shift of the polymer HOMO level upon aggregation couples ionization and solvation energies. To show that the methodology is widely applicable, we demonstrate that several different polymer:dopant systems can be prepared by sequential doping.
Constrained multiple indicator kriging using sequential quadratic programming
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Erhan Tercan, A.
2012-11-01
Multiple indicator kriging (MIK) is a nonparametric method used to estimate conditional cumulative distribution functions (CCDF). Indicator estimates produced by MIK may not satisfy the order relations of a valid CCDF, which is ordered and bounded between 0 and 1. In this paper a new method is presented that guarantees the order relations of the cumulative distribution functions estimated by multiple indicator kriging. The method is based on minimizing the sum of kriging variances for each cutoff under unbiasedness and order-relations constraints, and on solving the constrained indicator kriging system by sequential quadratic programming. A computer code was written in the Matlab environment to implement the developed algorithm, and the method is applied to the thickness data.
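A minimal sketch of the idea of enforcing order relations by constrained quadratic minimization, here using SciPy's SLSQP solver; the raw CCDF estimates and the unit weights are placeholders, and the full constrained kriging system solved in the paper is not reproduced.

```python
# Restore order relations of a CCDF estimated at several cutoffs by constrained
# least squares (SLSQP). Raw estimates and weights are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

raw = np.array([0.15, 0.10, 0.45, 0.42, 0.80, 1.05])    # estimates violating order relations
w = np.ones_like(raw)                                     # e.g. inverse kriging variances

objective = lambda f: np.sum(w * (f - raw) ** 2)
constraints = [{"type": "ineq", "fun": lambda f, i=i: f[i + 1] - f[i]}   # non-decreasing
               for i in range(len(raw) - 1)]
bounds = [(0.0, 1.0)] * len(raw)                          # a CCDF lies in [0, 1]

res = minimize(objective, np.clip(raw, 0, 1), method="SLSQP",
               bounds=bounds, constraints=constraints)
print("corrected CCDF:", np.round(res.x, 3))
```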
Lade, Harshad; Kadam, Avinash; Paul, Diby; Govindwar, Sanjay
2015-01-01
Release of textile azo dyes to the environment is an issue of health concern, while the use of microorganisms has proved to be the best option for remediation. Thus, in the present study, a bacterial consortium consisting of Providencia rettgeri strain HSL1 and Pseudomonas sp. SUK1 has been investigated for degradation and detoxification of structurally different azo dyes. The consortium showed 98-99 % decolorization of all the selected azo dyes, viz. Reactive Black 5 (RB 5), Reactive Orange 16 (RO 16), Disperse Red 78 (DR 78) and Direct Red 81 (DR 81), within 12 to 30 h at 100 mg L-1 concentration at 30 ± 0.2 °C under microaerophilic, sequential aerobic/microaerophilic and microaerophilic/aerobic processes. However, decolorization under microaerophilic conditions, viz. RB 5 (0.26 mM), RO 16 (0.18 mM), DR 78 (0.20 mM) and DR 81 (0.23 mM), and sequential aerobic/microaerophilic processes, viz. RB 5 (0.08 mM), RO 16 (0.06 mM), DR 78 (0.07 mM) and DR 81 (0.09 mM), resulted in the formation of aromatic amines. In contrast, the sequential microaerophilic/aerobic process did not show the formation of amines. Additionally, a 62-72 % reduction in total organic carbon content was observed in all the dye-decolorized broths under sequential microaerophilic/aerobic processes, suggesting the efficacy of the method in mineralization of dyes. Notable induction in the levels of azoreductase and NADH-DCIP reductase (97 and 229 % for RB 5, 55 and 160 % for RO 16, 63 and 196 % for DR 78, 108 and 258 % for DR 81) observed under sequential microaerophilic/aerobic processes suggested their critical involvement in the initial breakdown of azo bonds, whereas a slight increase in the levels of laccase and veratryl alcohol oxidase confirmed subsequent oxidation of the formed amines. Also, the acute toxicity assay with Daphnia magna revealed the nontoxic nature of the dye-degraded metabolites under sequential microaerophilic/aerobic processes. As biodegradation under the sequential microaerophilic/aerobic process completely detoxified all the selected textile azo dyes, further efforts should be made to implement such methods in large-scale dye wastewater treatment technologies. PMID:26417357
An extended sequential goodness-of-fit multiple testing method for discrete data.
Castro-Conde, Irene; Döhler, Sebastian; de Uña-Álvarez, Jacobo
2017-10-01
The sequential goodness-of-fit (SGoF) multiple testing method has recently been proposed as an alternative to the familywise error rate- and the false discovery rate-controlling procedures in high-dimensional problems. For discrete data, the SGoF method may be very conservative. In this paper, we introduce an alternative SGoF-type procedure that takes into account the discreteness of the test statistics. Like the original SGoF, our new method provides weak control of the false discovery rate/familywise error rate but attains false discovery rate levels closer to the desired nominal level, and thus it is more powerful. We study the performance of this method in a simulation study and illustrate its application to a real pharmacovigilance data set.
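For orientation, the sketch below implements the basic (continuous-data) SGoF decision rule of comparing the count of small P values with a binomial bound; the discrete-data refinement proposed in the paper is not implemented, and the thresholds are illustrative.

```python
# Basic SGoF-type rule: reject as many of the smallest P values as the observed count
# of P values below gamma exceeds a binomial null bound. Thresholds are illustrative.
import numpy as np
from scipy import stats

def sgof_rejections(p_values, gamma=0.05, alpha=0.05):
    p = np.sort(np.asarray(p_values))
    n = len(p)
    observed = int(np.sum(p <= gamma))
    # Largest count still compatible with Binomial(n, gamma) at level alpha.
    expected_bound = int(stats.binom.ppf(1.0 - alpha, n, gamma))
    n_reject = max(observed - expected_bound, 0)
    return p[:n_reject]          # the P values declared significant

rng = np.random.default_rng(4)
pvals = np.concatenate([rng.uniform(0, 0.01, 20), rng.uniform(0, 1, 480)])
print("number of rejections:", len(sgof_rejections(pvals)))
```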
Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.
Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric
2017-02-01
Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by an adrenal-to-peripheral cortisol ratio ≥2, and lateralization was defined as an adrenal aldosterone-to-cortisol ratio ≥2 times that of the contralateral side. Sequential AVS was simulated using right sampling at -5 min (t = -5) and left sampling at 0 min (t = 0). There was no significant difference in mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides respectively) or in mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between the 2 simultaneous AVS was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Kappa for lateralization between the 2 simultaneous AVS was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential AVS and simultaneous AVS at -5 min and at 0 min, respectively. Concordance between simultaneous and sequential AVS was not different from that between 2 repeated simultaneous AVS in the same patient. Therefore, better diagnostic performance is not a good argument for selecting the AVS method. © 2017 European Society of Endocrinology.
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
2001-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second and third methods to accumulate sensor signals and determine the operating state of the system.
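A minimal sketch of a plain sequential probability ratio test for a mean shift in a single Gaussian sensor signal, with Wald-style thresholds; the parameters are illustrative and the sketch does not reproduce the patented multi-sensor logic.

```python
# Sequential probability ratio test (SPRT) for a Gaussian mean shift, with thresholds
# from Wald's approximations. Parameters and signals are illustrative only.
import math
import numpy as np

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)     # accept H1 (degraded) above this
    lower = math.log(beta / (1 - alpha))     # accept H0 (normal) below this
    llr = 0.0
    for t, x in enumerate(samples):
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", t
        if llr <= lower:
            return "H0", t
    return "undecided", len(samples)

rng = np.random.default_rng(5)
print(sprt(rng.normal(0.0, 1.0, 200)))   # healthy signal -> typically "H0"
print(sprt(rng.normal(1.0, 1.0, 200)))   # shifted signal -> typically "H1"
```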
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
1999-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second and third methods to accumulate sensor signals and determine the operating state of the system.
Sequential change detection and monitoring of temporal trends in random-effects meta-analysis.
Dogo, Samson Henry; Clark, Allan; Kulinskaya, Elena
2017-06-01
Temporal changes in magnitude of effect sizes reported in many areas of research are a threat to the credibility of the results and conclusions of meta-analysis. Numerous sequential methods for meta-analysis have been proposed to detect changes and monitor trends in effect sizes so that meta-analysis can be updated when necessary and interpreted based on the time it was conducted. The difficulties of sequential meta-analysis under the random-effects model are caused by dependencies in increments introduced by the estimation of the heterogeneity parameter τ². In this paper, we propose the use of a retrospective cumulative sum (CUSUM)-type test with bootstrap critical values. This method allows retrospective analysis of the past trajectory of cumulative effects in random-effects meta-analysis and its visualization on a chart similar to a CUSUM chart. Simulation results show that the new method demonstrates good control of Type I error regardless of the number or size of the studies and the amount of heterogeneity. Application of the new method is illustrated on two examples of medical meta-analyses. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
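The sketch below illustrates a retrospective CUSUM-type scan over a sequence of study effect sizes with a permutation-based critical value; it ignores the random-effects weighting and the τ² dependence that the proposed method handles, so it only conveys the shape of the chart-based test.

```python
# Retrospective CUSUM-type scan of study effect sizes with a permutation critical value.
# Effect sizes are simulated; random-effects weighting is deliberately omitted.
import numpy as np

def cusum_stat(effects):
    centred = effects - effects.mean()
    return np.max(np.abs(np.cumsum(centred)))

def bootstrap_critical_value(effects, n_boot=2000, level=0.95, seed=6):
    rng = np.random.default_rng(seed)
    stats_ = [cusum_stat(rng.permutation(effects)) for _ in range(n_boot)]
    return np.quantile(stats_, level)

rng = np.random.default_rng(7)
effects = np.concatenate([rng.normal(0.5, 0.2, 15), rng.normal(0.1, 0.2, 10)])  # drifting effects
stat = cusum_stat(effects)
crit = bootstrap_critical_value(effects)
print("CUSUM statistic:", round(stat, 2), "critical value:", round(crit, 2),
      "change detected:", stat > crit)
```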
NASA Technical Reports Server (NTRS)
Mier Muth, A. M.; Willsky, A. S.
1978-01-01
In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.
Sequential structural damage diagnosis algorithm using a change point detection method
NASA Astrophysics Data System (ADS)
Noh, H.; Rajagopal, R.; Kiremidjian, A. S.
2013-11-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions have been explored, and the algorithm was able to identify damage, particularly when it uses multidimensional damage sensitive features and lower false alarm rates, with a known post-damage feature distribution. For unknown feature distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
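As background for the estimation engine, here is a minimal perturbed-observation EnKF analysis step for a small parameter ensemble; the linear observation operator, noise levels and ensemble size are assumptions, and the sequential optimal-design layer (ranking candidate measurements by SD, DFS or RE) is not reproduced.

```python
# Perturbed-observation EnKF analysis (update) step for a toy parameter ensemble.
# Operator H, noise covariance R and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(8)
n_ens, n_param, n_obs = 50, 4, 2

ensemble = rng.normal(0.0, 1.0, (n_param, n_ens))        # prior parameter ensemble
H = rng.normal(size=(n_obs, n_param))                    # linear(ised) observation operator
R = 0.1 * np.eye(n_obs)                                  # observation-error covariance
y_obs = H @ np.ones(n_param) + rng.multivariate_normal(np.zeros(n_obs), R)

# Ensemble anomalies and Kalman gain built from sample covariances.
A = ensemble - ensemble.mean(axis=1, keepdims=True)
HA = H @ A
K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (n_ens - 1) * R)

# Perturbed-observation update of every ensemble member.
perturbed = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
ensemble = ensemble + K @ (perturbed - H @ ensemble)
print("posterior parameter mean:", ensemble.mean(axis=1))
```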
A novel method for the sequential removal and separation of multiple heavy metals from wastewater.
Fang, Li; Li, Liang; Qu, Zan; Xu, Haomiao; Xu, Jianfang; Yan, Naiqiang
2018-01-15
A novel method was developed and applied for the treatment of simulated wastewater containing multiple heavy metals. A sorbent of ZnS nanocrystals (NCs) was synthesized and showed extraordinary performance for the removal of Hg2+, Cu2+, Pb2+ and Cd2+. The removal efficiencies of Hg2+, Cu2+, Pb2+ and Cd2+ were 99.9%, 99.9%, 90.8% and 66.3%, respectively. Meanwhile, it was determined that the solubility product (Ksp) of heavy metal sulfides was closely related to the adsorption selectivity of the various heavy metals on the sorbent. The removal efficiency of Hg2+ was higher than that of Cd2+, while the Ksp of HgS was lower than that of CdS. This indicated that preferential adsorption of heavy metals occurred when the Ksp of the heavy metal sulfide was lower. In addition, the differences in the Ksp of heavy metal sulfides allowed for the exchange of heavy metals, indicating the potential application for the sequential removal and separation of heavy metals from wastewater. According to the cumulative adsorption experimental results, multiple heavy metals were sequentially adsorbed and separated from the simulated wastewater in the order of the Ksp of their sulfides. This method holds the promise of sequentially removing and separating multiple heavy metals from wastewater. Copyright © 2017 Elsevier B.V. All rights reserved.
Breaking from binaries - using a sequential mixed methods design.
Larkin, Patricia Mary; Begley, Cecily Marion; Devane, Declan
2014-03-01
To outline the traditional worldviews of healthcare research and discuss the benefits and challenges of using mixed methods approaches in contributing to the development of nursing and midwifery knowledge. There has been much debate about the contribution of mixed methods research to nursing and midwifery knowledge in recent years. A sequential exploratory design is used as an exemplar of a mixed methods approach. The study discussed used a combination of focus-group interviews and a quantitative instrument to obtain a fuller understanding of women's experiences of childbirth. In the mixed methods study example, qualitative data were analysed using thematic analysis and quantitative data using regression analysis. Polarised debates about the veracity, philosophical integrity and motivation for conducting mixed methods research have largely abated. A mixed methods approach can contribute to a deeper, more contextual understanding of a variety of subjects and experiences; as a result, it furthers knowledge that can be used in clinical practice. The purpose of the research study should be the main instigator when choosing from an array of mixed methods research designs. Mixed methods research offers a variety of models that can augment investigative capabilities and provide richer data than can a discrete method alone. This paper offers an example of an exploratory, sequential approach to investigating women's childbirth experiences. A clear framework for the conduct and integration of the different phases of the mixed methods research process is provided. This approach can be used by practitioners and policy makers to improve practice.
Transboundary influences of particulate matter less than or equal to 2.5 µm in aerodynamic diameter (PM2.5) have been investigated in a U.S.-Mexican border region using a dual fine particle sequential sampler (DFPSS) and tapered element oscillating microbalance (TEOM). Daily me...
Method of selective reduction of halodisilanes with alkyltin hydrides
D'Errico, John J.; Sharp, Kenneth G.
1989-01-01
The invention relates to the selective and sequential reduction of halodisilanes by reacting these compounds at room temperature or below with trialkyltin hydrides or dialkyltin dihydrides without the use of free radical intermediates. The alkyltin hydrides selectively and sequentially reduce the Si-Cl, Si-Br or Si-I bonds while leaving intact the Si-Si and Si-F bonds present.
ERIC Educational Resources Information Center
Jacobson, Peggy F.; Walden, Patrick R.
2013-01-01
Purpose: This study explored the utility of language sample analysis for evaluating language ability in school-age Spanish-English sequential bilingual children. Specifically, the relative potential of lexical diversity and word/morpheme omission as predictors of typical or atypical language status was evaluated. Method: Narrative samples were…
ERIC Educational Resources Information Center
Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo
2015-01-01
Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…
An algorithm for propagating the square-root covariance matrix in triangular form
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Choe, C. Y.
1976-01-01
A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
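A minimal sketch of one common way to propagate a lower-triangular square-root factor through a linear time update using a QR factorization; the matrices are illustrative and the exact recursions of the referenced algorithm are not reproduced.

```python
# Propagate a lower-triangular square-root covariance factor S through a linear model
# x_{k+1} = F x_k + w, w ~ N(0, Q), via a QR factorization of a stacked pre-array.
import numpy as np

def propagate_sqrt_cov(S, F, Q_sqrt):
    """Return lower-triangular S_new with S_new S_new^T = F S S^T F^T + Q."""
    stacked = np.vstack([(F @ S).T, Q_sqrt.T])     # (2n x n) pre-array
    _, R = np.linalg.qr(stacked)                   # R is upper triangular, n x n
    return R.T                                     # lower-triangular factor (up to sign)

F = np.array([[1.0, 0.1], [0.0, 1.0]])
S = np.linalg.cholesky(np.diag([4.0, 1.0]))
Q_sqrt = np.linalg.cholesky(0.01 * np.eye(2))

S_new = propagate_sqrt_cov(S, F, Q_sqrt)
print(np.allclose(S_new @ S_new.T, F @ S @ S.T @ F.T + Q_sqrt @ Q_sqrt.T))   # True
```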
System and method for detecting components of a mixture including tooth elements for alignment
Sommer, Gregory Jon; Schaff, Ulrich Y.
2016-11-22
Examples are described including assay platforms having tooth elements. An impinging element may sequentially engage tooth elements on the assay platform to sequentially align corresponding detection regions with a detection unit. In this manner, multiple measurements may be made of detection regions on the assay platform without necessarily requiring the starting and stopping of a motor.
van Staden, J F; Mashamba, Mulalo G; Stefan, Raluca I
2002-09-01
An on-line potentiometric sequential injection titration process analyser for the determination of acetic acid is proposed. A solution of 0.1 mol L(-1) sodium chloride is used as carrier. Titration is achieved by aspirating acetic acid samples between two strong base-zone volumes into a holding coil and by channelling the stack of well-defined zones with flow reversal through a reaction coil to a potentiometric sensor where the peak widths were measured. A linear relationship between peak width and logarithm of the acid concentration was obtained in the range 1-9 g/100 mL. Vinegar samples were analysed without any sample pre-treatment. The method has a relative standard deviation of 0.4% with a sample frequency of 28 samples per hour. The results revealed good agreement between the proposed sequential injection and an automated batch titration method.
PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.
Xia, Jing; Wang, Michelle Yongmei
Analysis of the blood oxygenation level dependent (BOLD) effect in functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to fully take advantage of the dynamic information of the BOLD signals. Third, during the unknown static parameter learning, we employ the low-dimensional sufficient statistics for efficiency and to avoid potential degeneration of the parameters. The performance of the proposed method is validated using both simulated data and real BOLD fMRI data.
von Gunten, Konstantin; Alam, Md Samrat; Hubmann, Magdalena; Ok, Yong Sik; Konhauser, Kurt O; Alessi, Daniel S
2017-07-01
A modified Community Bureau of Reference (CBR) sequential extraction method was tested to assess the composition of untreated pyrogenic carbon (biochar) and oil sands petroleum coke. Wood biochar samples were found to contain lower concentrations of metals, but had higher fractions of easily mobilized alkaline earth and transition metals. Sewage sludge biochar was determined to be less recalcitrant and had higher total metal concentrations, with most of the metals found in the more resilient extraction fractions (oxidizable, residual). Petroleum coke was the most stable material, with a similar metal distribution pattern as the sewage sludge biochar. The applied sequential extraction method represents a suitable technique to recover metals from these materials, and is a valuable tool in understanding the metal retaining and leaching capability of various biochar types and carbonaceous petroleum coke samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Harden, Bradley J; Nichols, Scott R; Frueh, Dominique P
2014-09-24
Nuclear magnetic resonance (NMR) studies of larger proteins are hampered by difficulties in assigning NMR resonances. Human intervention is typically required to identify NMR signals in 3D spectra, and subsequent procedures depend on the accuracy of this so-called peak picking. We present a method that provides sequential connectivities through correlation maps constructed with covariance NMR, bypassing the need for preliminary peak picking. We introduce two novel techniques to minimize false correlations and merge the information from all original 3D spectra. First, we take spectral derivatives prior to performing covariance to emphasize coincident peak maxima. Second, we multiply covariance maps calculated with different 3D spectra to destroy erroneous sequential correlations. The maps are easy to use and can readily be generated from conventional triple-resonance experiments. Advantages of the method are demonstrated on a 37 kDa nonribosomal peptide synthetase domain subject to spectral overlap.
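The toy sketch below illustrates the two ideas on synthetic 2-D planes: differentiating along the direct dimension before computing a covariance map, and multiplying maps from two experiments so that only correlations present in both survive; the data and peak positions are invented, and real triple-resonance spectra would be handled plane-by-plane.

```python
# Toy covariance-map construction: spectral derivative before covariance, then
# multiplication of maps from two synthetic "experiments" to suppress false correlations.
import numpy as np

rng = np.random.default_rng(9)
n_indirect, n_direct = 64, 128

spec_a = rng.normal(0, 0.05, (n_indirect, n_direct))
spec_b = rng.normal(0, 0.05, (n_indirect, n_direct))
for spec in (spec_a, spec_b):
    spec[10, 40] += 1.0    # rows 10 and 22 share a resonance in BOTH experiments:
    spec[22, 40] += 1.0    # this is the genuine correlation the maps should recover
spec_a[30, 80] += 1.0      # a correlation present in only one experiment
spec_a[45, 80] += 1.0      # should be suppressed by the multiplication step

def covariance_map(spec):
    d = np.gradient(spec, axis=1)              # derivative along the direct dimension
    d = d - d.mean(axis=1, keepdims=True)
    return d @ d.T                              # indirect x indirect covariance map

combined = covariance_map(spec_a) * covariance_map(spec_b)   # multiplication step
off_diag = np.abs(combined - np.diag(np.diag(combined)))
print("strongest off-diagonal correlation:",
      np.unravel_index(off_diag.argmax(), off_diag.shape))
```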
Mining sequential patterns for protein fold recognition.
Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I
2008-02-01
Protein data contain discriminative patterns that can be used in many beneficial applications if they are defined correctly. In this work sequential pattern mining (SPM) is utilized for sequence-based fold recognition. Protein classification in terms of fold recognition plays an important role in computational protein analysis, since it can contribute to the determination of the function of a protein whose structure is unknown. Specifically, one of the most efficient SPM algorithms, cSPADE, is employed for the analysis of protein sequences. A classifier uses the extracted sequential patterns to classify proteins in the appropriate fold category. For training and evaluating the proposed method we used the protein sequences from the Protein Data Bank and the annotation of the SCOP database. The method exhibited an overall accuracy of 25% in a classification problem with 36 candidate categories. The classification performance reaches up to 56% when the five most probable protein folds are considered.
PC_Eyewitness: evaluating the New Jersey method.
MacLin, Otto H; Phelan, Colin M
2007-05-01
One important variable in eyewitness identification research is lineup administration procedure. Lineups administered sequentially (one at a time) have been shown to reduce the number of false identifications in comparison with those administered simultaneously (all at once). As a result, some policymakers have adopted sequential administration. However, they have made slight changes to the method used in psychology laboratories. Eyewitnesses in the field are allowed to take multiple passes through a lineup, whereas participants in the laboratory are allowed only one pass. PC_Eyewitness (PCE) is a computerized system used to construct and administer simultaneous or sequential lineups in both the laboratory and the field. It is currently being used in laboratories investigating eyewitness identification in the United States, Canada, and abroad. A modified version of PCE is also being developed for a local police department. We developed a new module for PCE, the New Jersey module, to examine the effects of a second pass. We found that the sequential advantage was eliminated when the participants were allowed to view the lineup a second time. The New Jersey module, and steps we are taking to improve on the module, are presented here and are being made available to the research and law enforcement communities.
Inoue, Tadahisa; Ishii, Norimitsu; Kobayashi, Yuji; Kitano, Rena; Sakamoto, Kazumasa; Ohashi, Tomohiko; Nakade, Yukiomi; Sumida, Yoshio; Ito, Kiyoaki; Nakao, Haruhisa; Yoneda, Masashi
2017-09-01
Endoscopic bilateral self-expandable metallic stent (SEMS) placement for malignant hilar biliary obstructions (MHBOs) is technically demanding, and a second SEMS insertion is particularly challenging. A simultaneous side-by-side (SBS) placement technique using a thinner delivery system may mitigate these issues. We aimed to examine the feasibility and efficacy of simultaneous SBS SEMS placement for treating MHBOs using a novel SEMS that has a 5.7-Fr ultra-thin delivery system. Thirty-four patients with MHBOs underwent SBS SEMS placement between 2010 and 2016. We divided the patient cohort into those who underwent sequential (conventional) SBS placement between 2010 and 2014 (sequential group) and those who underwent simultaneous SBS placement between 2015 and 2016 (simultaneous group), and compared the groups with respect to the clinical outcomes. The technical success rates were 71% (12/17) and 100% (17/17) in the sequential and simultaneous groups, respectively, a difference that was significant (P = .045). The median procedure time was significantly shorter in the simultaneous group (22 min) than in the sequential group (52 min) (P = .017). There were no significant group differences in the time to recurrent biliary obstruction (sequential group: 113 days; simultaneous group: 140 days) or other adverse event rates (sequential group: 12%; simultaneous group: 12%). Simultaneous SBS placement using the novel 5.7-Fr SEMS delivery system may be more straightforward and have a higher success rate compared to that with sequential SBS placement. This new method may be useful for bilateral stenting to treat MHBOs.
Automatic sequential fluid handling with multilayer microfluidic sample isolated pumping
Liu, Jixiao; Fu, Hai; Yang, Tianhang; Li, Songjing
2015-01-01
To sequentially handle fluids is of great significance in quantitative biology, analytical chemistry, and bioassays. However, the technological options are limited when building such microfluidic sequential processing systems, and one of the encountered challenges is the need for reliable, efficient, and mass-production available microfluidic pumping methods. Herein, we present a bubble-free and pumping-control unified liquid handling method that is compatible with large-scale manufacture, termed multilayer microfluidic sample isolated pumping (mμSIP). The core part of the mμSIP is the selective permeable membrane that isolates the fluidic layer from the pneumatic layer. The air diffusion from the fluidic channel network into the degassing pneumatic channel network leads to fluidic channel pressure variation, which further results in consistent bubble-free liquid pumping into the channels and the dead-end chambers. We characterize the mμSIP by comparing the fluidic actuation processes with different parameters and a flow rate range of 0.013 μl/s to 0.097 μl/s is observed in the experiments. As the proof of concept, we demonstrate an automatic sequential fluid handling system aiming at digital assays and immunoassays, which further proves the unified pumping-control and suggests that the mμSIP is suitable for functional microfluidic assays with minimal operations. We believe that the mμSIP technology and demonstrated automatic sequential fluid handling system would enrich the microfluidic toolbox and benefit further inventions. PMID:26487904
Article and method of forming an article
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacy, Benjamin Paul; Kottilingam, Srikanth Chandrudu; Dutta, Sandip
Provided are an article and a method of forming an article. The method includes providing a metallic powder, heating the metallic powder to a temperature sufficient to join at least a portion of the metallic powder to form an initial layer, sequentially forming additional layers in a build direction by providing a distributed layer of the metallic powder over the initial layer and heating the distributed layer of the metallic powder, repeating the steps of sequentially forming the additional layers in the build direction to form a portion of the article having a hollow space formed in the build direction, and forming an overhang feature extending into the hollow space. The article includes an article formed by the method described herein.
Paturzo, Marco; Colaceci, Sofia; Clari, Marco; Mottola, Antonella; Alvaro, Rosaria; Vellone, Ercole
2016-01-01
Mixed methods designs: an innovative methodological approach for nursing research. Mixed methods (MM) research designs combine qualitative and quantitative approaches in the research process, in a single study or series of studies. Their use can provide a wider understanding of multifaceted phenomena. This article presents a general overview of the structure and design of MM to spread this approach in the Italian nursing research community. The MM designs most commonly used in the nursing field are the convergent parallel design, the sequential explanatory design, the exploratory sequential design and the embedded design. For each method a research example is presented. The use of MM can be an added value to improve clinical practices as, through the integration of qualitative and quantitative methods, researchers can better assess complex phenomena typical of nursing.
Oka, Megan; Whiting, Jason
2013-01-01
In Marriage and Family Therapy (MFT), as in many clinical disciplines, concern surfaces about the clinician/researcher gap. This gap includes a lack of accessible, practical research for clinicians. MFT clinical research often borrows from the medical tradition of randomized control trials, which typically use linear methods, or follow procedures distanced from "real-world" therapy. We review traditional research methods and their use in MFT and propose increased use of methods that are more systemic in nature and more applicable to MFTs: process research, dyadic data analysis, and sequential analysis. We will review current research employing these methods, as well as suggestions and directions for further research. © 2013 American Association for Marriage and Family Therapy.
Feature Selection based on Machine Learning in MRIs for Hippocampal Segmentation
NASA Astrophysics Data System (ADS)
Tangaro, Sabina; Amoroso, Nicola; Brescia, Massimo; Cavuoti, Stefano; Chincarini, Andrea; Errico, Rosangela; Paolo, Inglese; Longo, Giuseppe; Maglietta, Rosalia; Tateo, Andrea; Riccio, Giuseppe; Bellotti, Roberto
2015-01-01
Neurodegenerative diseases are frequently associated with structural changes in the brain. Magnetic resonance imaging (MRI) scans can show these variations and can therefore be used as a supportive feature for a number of neurodegenerative diseases. The hippocampus has been known to be a biomarker for Alzheimer disease and other neurological and psychiatric diseases. However, this requires accurate, robust, and reproducible delineation of hippocampal structures. Fully automatic methods usually take a voxel-based approach in which a number of local features are calculated for each voxel. In this paper, we compared four different techniques for selecting from a set of 315 features extracted for each voxel: (i) a filter method based on the Kolmogorov-Smirnov test; two wrapper methods, namely (ii) sequential forward selection and (iii) sequential backward elimination; and (iv) an embedded method based on the Random Forest classifier. The methods were trained on a set of 10 T1-weighted brain MRIs and tested on an independent set of 25 subjects. The resulting segmentations were compared with manual reference labelling. By using only 23 features for each voxel (sequential backward elimination) we obtained performances comparable to those of the state-of-the-art standard tool FreeSurfer.
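A minimal sketch of the wrapper idea named above (sequential backward elimination), assuming a generic classifier and synthetic voxel features rather than the paper's 315 MRI-derived features; the subset size, classifier, and data are illustrative only.

```python
# Greedy sequential backward elimination using cross-validated accuracy as the
# selection criterion (generic illustration, not the paper's exact pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))            # 200 voxels, 12 candidate features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
selected = list(range(X.shape[1]))

while len(selected) > 4:                  # target subset size (assumed)
    trials = []
    for f in selected:                    # score the removal of each feature
        subset = [i for i in selected if i != f]
        score = cross_val_score(clf, X[:, subset], y, cv=3).mean()
        trials.append((score, f))
    best_score, worst_feature = max(trials)
    selected.remove(worst_feature)        # drop the least useful feature
    print(f"kept {len(selected)} features, CV accuracy = {best_score:.3f}")

print("final feature subset:", selected)
```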
Pérez Cid, B; Fernández Alborés, A; Fernández Gómez, E; Faliqé López, E
2001-08-01
The conventional three-stage BCR sequential extraction method was employed for the fractionation of heavy metals in sewage sludge samples from an urban wastewater treatment plant and from an olive oil factory. The results obtained for Cu, Cr, Ni, Pb and Zn in these samples were compared with those attained by a simplified extraction procedure based on microwave single extractions and using the same reagents as employed in each individual BCR fraction. The microwave operating conditions in the single extractions (heating time and power) were optimized for all the metals studied in order to achieve an extraction efficiency similar to that of the conventional BCR procedure. The measurement of metals in the extracts was carried out by flame atomic absorption spectrometry. The results obtained in the first and third fractions by the proposed procedure were, for all metals, in good agreement with those obtained using the BCR sequential method. Although in the reducible fraction the extraction efficiency of the accelerated procedure was inferior to that of the conventional method, the overall metals leached by both microwave single and sequential extractions were basically the same (recoveries between 90.09 and 103.7%), except for Zn in urban sewage sludges where an extraction efficiency of 87% was achieved. Chemometric analysis showed a good correlation between the results given by the two extraction methodologies compared. The application of the proposed approach to a certified reference material (CRM-601) also provided satisfactory results in the first and third fractions, as it was observed for the sludge samples analysed.
Characteristics of sequential targeting of brain glioma for transferrin-modified cisplatin liposome.
Lv, Qing; Li, Li-Min; Han, Min; Tang, Xin-Jiang; Yao, Jin-Na; Ying, Xiao-Ying; Li, Fan-Zhu; Gao, Jian-Qing
2013-02-28
Methods for improving the sequential targeting of glioma after a drug has passed through the blood-brain barrier (BBB) have been reported only occasionally. However, the characteristics involved are poorly understood. In the present study, cisplatin (Cis) liposome (lipo) was modified with transferrin (Tf) to investigate the characteristics of potential sequential targeting to glioma. In bEnd3/C6 co-culture BBB models, Cis-lipo(Tf) induced higher transport efficiency across the BBB and greater cytotoxicity in the basal C6 cells than Cis-lipo and Cis-solution, suggesting a sequential targeting effect. Interestingly, a liposomal morphology similar to that of the donor compartment was demonstrated for the first time in the receptor solution of the BBB models. Meanwhile, greater accumulation of Cis-lipo(Tf) in the lysosomes of bEnd3 cells, with subsequent distribution into the nuclei of C6 cells, was found. Pre-incubation with chlorpromazine and Tf inhibited this process, indicating that clathrin-dependent endocytosis is involved in the transport of Cis-lipo(Tf) across the BBB. Copyright © 2013 Elsevier B.V. All rights reserved.
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2018-02-01
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy through adaptive data selection. We motivate each part separately and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.
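The abstract turns on avoiding operations over the full kernel matrix. The sketch below illustrates the general subset idea with a Nyström-style landmark approximation; it is not the paper's class-specific subspace kernel or adaptive margin slack minimization, and the RBF kernel, landmark count, and data are assumptions.

```python
# Approximate kernel evaluations from a small landmark subset so the full
# n x n kernel matrix is never formed (generic Nystrom-style illustration).
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                         # full data set (synthetic)
landmarks = X[rng.choice(len(X), 50, replace=False)]   # selected subset

C = rbf(X, landmarks)            # n x m cross-kernel
W = rbf(landmarks, landmarks)    # m x m landmark kernel
W_inv = np.linalg.pinv(W)

# Approximate kernel row of the first sample against all samples: K ~ C W^{-1} C^T.
k_row_approx = C[0] @ W_inv @ C.T
print(k_row_approx.shape)        # (1000,) without ever storing the 1000 x 1000 matrix
```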
On-Site Production of Cellulolytic Enzymes by the Sequential Cultivation Method.
Farinas, Cristiane S; Florencio, Camila; Badino, Alberto C
2018-01-01
The conversion of renewable lignocellulosic biomass into fuels, chemicals, and high-value materials using the biochemical platform has been considered the most sustainable alternative for the implementation of future biorefineries. However, the high cost of the cellulolytic enzymatic cocktails used in the saccharification step significantly affects the economics of industrial large-scale conversion processes. The on-site production of enzymes, integrated into the biorefinery plant, is being considered as a potential strategy to reduce costs. In such an approach, the microbial production of enzymes can be carried out using the same lignocellulosic biomass as feedstock for fungal development and biofuel production. Most microbial cultivation processes for the production of industrial enzymes have been developed using conventional submerged fermentation. Recently, a sequential solid-state fermentation followed by submerged fermentation has been described as a potential alternative cultivation method for cellulolytic enzyme production. This chapter presents the detailed procedure of the sequential cultivation method, which could be employed for the on-site production of the cellulolytic enzymes required to convert lignocellulosic biomass into simple sugars.
Gonzalez, Aroa Garcia; Taraba, Lukáš; Hraníček, Jakub; Kozlík, Petr; Coufal, Pavel
2017-01-01
Dasatinib is a novel oral prescription drug proposed for treating adult patients with chronic myeloid leukemia. Three analytical methods, namely ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis, were developed, validated, and compared for determination of the drug in the tablet dosage form. The total analysis time of the optimized ultra high performance liquid chromatography and capillary zone electrophoresis methods was 2.0 and 2.2 min, respectively. Direct ultraviolet detection with a detection wavelength of 322 nm was employed in both cases. The optimized sequential injection analysis method was based on spectrophotometric detection of dasatinib after a simple colorimetric reaction with Folin-Ciocalteu reagent, forming a blue-colored complex with an absorbance maximum at 745 nm. The total analysis time was 2.5 min. The ultra high performance liquid chromatography method provided the lowest detection and quantitation limits and the most precise and accurate results. All three newly developed methods were demonstrated to be specific, linear, sensitive, precise, and accurate, providing results that satisfactorily meet the requirements of the pharmaceutical industry, and can be employed for the routine determination of the active pharmaceutical ingredient in the tablet dosage form. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Polat, Ahmet; Dogan, Soner; Demir, Selçuk Besir
2016-01-01
The present study was undertaken to investigate the quality of education based on the views of the students attending social studies education departments at the Faculties of Education and to determine the existing problems and present suggestions for their solutions. The study was conducted according to exploratory sequential mixed method. In…
ERIC Educational Resources Information Center
Reeder, Ruth M.; Firszt, Jill B.; Holden, Laura K.; Strube, Michael J.
2014-01-01
Purpose: The purpose of this study was to examine the rate of progress in the 2nd implanted ear as it relates to the 1st implanted ear and to bilateral performance in adult sequential cochlear implant recipients. In addition, this study aimed to identify factors that contribute to patient outcomes. Method: The authors performed a prospective…
Kumar, B Senthil; Venkataramasubramanian, V; Sudalai, Arumugam
2012-05-18
A tandem reaction of in situ generated α-amino aldehydes with dimethyloxosulfonium methylide under Corey-Chaykovsky reaction conditions proceeds efficiently to give 4-hydroxypyrazolidine derivatives in high yields with excellent enantio- and diastereoselectivities. This organocatalytic sequential method provides for the efficient synthesis of anti-1,2-aminoalcohols, structural subunits present in several bioactive molecules as well.
Increasing efficiency of preclinical research by group sequential designs
Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich
2017-01-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulation of data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to the saving of resources of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness in this research domain. PMID:28282371
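To make the resource-saving argument concrete, here is a hedged simulation sketch of a two-look group sequential design with early stopping for efficacy; the look fractions and alpha spending below are illustrative choices, not the boundaries used in the paper.

```python
# Simulate many small two-group experiments with one interim look and count how
# many experimental units are consumed on average relative to the planned total.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
d, n_per_group, n_sims = 1.0, 18, 5000
look_fractions = (0.5, 1.0)           # interim look at half the sample (assumed)
alpha_levels = (0.005, 0.048)         # rough alpha spending at the two looks (assumed)

units_used = []
for _ in range(n_sims):
    a = rng.normal(d, 1, n_per_group)   # treatment group
    b = rng.normal(0, 1, n_per_group)   # control group
    for frac, alpha in zip(look_fractions, alpha_levels):
        k = int(frac * n_per_group)
        p = stats.ttest_ind(a[:k], b[:k]).pvalue
        if p < alpha:                   # stop early for efficacy
            units_used.append(2 * k)
            break
    else:
        units_used.append(2 * n_per_group)

print("average fraction of planned units used:",
      np.mean(units_used) / (2 * n_per_group))
```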
Reynders, Truus; Heuninckx, Karina; Verellen, Dirk; Storme, Guy; De Ridder, Mark
2014-01-01
Background. Breast conserving surgery followed by whole breast irradiation is widely accepted as standard of care for early breast cancer. Addition of a boost dose to the initial tumor area further reduces local recurrences. We investigated the dosimetric benefits of a simultaneously integrated boost (SIB) compared to a sequential boost to hypofractionate the boost volume, while maintaining normofractionation on the breast. Methods. For 10 patients 4 treatment plans were deployed, 1 with a sequential photon boost, and 3 with different SIB techniques: on a conventional linear accelerator, helical TomoTherapy, and static TomoDirect. Dosimetric comparison was performed. Results. PTV-coverage was good in all techniques. Conformity was better with all SIB techniques compared to sequential boost (P = 0.0001). There was less dose spilling to the ipsilateral breast outside the PTVboost (P = 0.04). The dose to the organs at risk (OAR) was not influenced by SIB compared to sequential boost. Helical TomoTherapy showed a higher mean dose to the contralateral breast, but less than 5 Gy for each patient. Conclusions. SIB showed less dose spilling within the breast and equal dose to OAR compared to sequential boost. Both helical TomoTherapy and the conventional technique delivered acceptable dosimetry. SIB seems a safe alternative and can be implemented in clinical routine. PMID:25162031
[Sequential degradation of p-cresol by photochemical and biological methods].
Karetnikova, E A; Chaĭkovskaia, O N; Sokolova, I V; Nikitina, L I
2008-01-01
Sequential photo- and biodegradation of p-cresol was studied using a mercury lamp, as well as KrCl and XeCl excilamps. Preirradiation of p-cresol at a concentration of 10(-4) M did not affect the rate of its subsequent biodegradation. An increase in the concentration of p-cresol to 10(-3) M and in the duration of preliminary UV irradiation inhibited subsequent biodegradation. Biodegradation of p-cresol was accompanied by the formation of a product with a fluorescence maximum at 365 nm (λex 280 nm), and photodegradation yielded a compound fluorescing at 400 nm (λex 330 nm). Sequential UV irradiation and biodegradation led to the appearance of bands in the fluorescence spectra that were ascribed to p-cresol and its photolysis products. It was shown that sequential use of biological and photochemical degradation results in degradation not only of the initial toxicant but also of the metabolites formed during its biodegradation.
Iterative non-sequential protein structural alignment.
Salem, Saeed; Zaki, Mohammed J; Bystroff, Christopher
2009-06-01
Structural similarity between proteins gives us insights into their evolutionary relationships when there is low sequence similarity. In this paper, we present a novel approach called SNAP for non-sequential pairwise structural alignment. Starting from an initial alignment, our approach iterates over a two-step process consisting of a superposition step and an alignment step, until convergence. We propose a novel greedy algorithm to construct both sequential and non-sequential alignments. The quality of SNAP alignments was assessed by comparing against the manually curated reference alignments in the challenging SISY and RIPC datasets. Moreover, when applied to a dataset of 4410 protein pairs selected from the CATH database, SNAP produced longer alignments with lower rmsd than several state-of-the-art alignment methods. Classification of folds using SNAP alignments was both highly sensitive and highly selective. The SNAP software along with the datasets are available online at http://www.cs.rpi.edu/~zaki/software/SNAP.
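As an illustration of the superposition half of such an iteration, the sketch below performs a Kabsch least-squares superposition of one coordinate set onto another and reports the resulting rmsd; the coordinates are synthetic and the alignment step that SNAP alternates with is not reproduced.

```python
# Kabsch superposition: find the rotation/translation that best fits P onto Q.
import numpy as np

def kabsch_superpose(P, Q):
    """Superpose P (N x 3) onto Q (N x 3); return fitted coordinates and rmsd."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    P_fit = Pc @ R.T + Q.mean(0)
    return P_fit, np.sqrt(((P_fit - Q) ** 2).sum(1).mean())

rng = np.random.default_rng(0)
Q = rng.normal(size=(60, 3))                        # "reference" CA coordinates (synthetic)
theta = 0.6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
P = Q @ Rz.T + np.array([5.0, -2.0, 1.0]) + rng.normal(scale=0.1, size=Q.shape)

_, rmsd = kabsch_superpose(P, Q)
print(f"rmsd after superposition: {rmsd:.3f}")
```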
Multiple independent identification decisions: a method of calibrating eyewitness identifications.
Pryke, Sean; Lindsay, R C L; Dysart, Jennifer E; Dupuis, Paul
2004-02-01
Two experiments (N = 147 and N = 90) explored the use of multiple independent lineups to identify a target seen live. In Experiment 1, simultaneous face, body, and sequential voice lineups were used. In Experiment 2, sequential face, body, voice, and clothing lineups were used. Both studies demonstrated that multiple identifications (by the same witness) from independent lineups of different features are highly diagnostic of suspect guilt (G. L. Wells & R. C. L. Lindsay, 1980). The number of suspect and foil selections from multiple independent lineups provides a powerful method of calibrating the accuracy of eyewitness identification. Implications for use of current methods are discussed. ((c) 2004 APA, all rights reserved)
Veiga, Helena Perrut; Bianchini, Esther Mandelbaum Gonçalves
2012-01-01
To perform an integrative review of studies on liquid sequential swallowing, characterizing the methodology of the studies and the most important findings in young and elderly adults. The review covered literature written in English and Portuguese in the PubMed, LILACS, SciELO and MEDLINE databases from the past twenty years, available in full text, using the following search terms: sequential swallowing, swallowing, dysphagia, cup, straw, in various combinations. Research articles with a methodological focus on the characterization of liquid sequential swallowing by young and/or elderly adults were included, regardless of health condition, excluding studies involving only the esophageal phase. The following research indicators were applied: objectives, number and gender of participants, age group, amount of liquid offered, intake instruction, utensil used, methods, and main findings. Eighteen studies met the established criteria. The articles were categorized according to sample characterization and the methodology regarding intake volume, utensil used, and types of exams. Most studies investigated only healthy individuals with no swallowing complaints. Subjects were given different instructions as to the intake of the full volume: in the usual manner, continually, or as rapidly as possible. The findings on the characterization of sequential swallowing were varied and were described in accordance with the objectives of each study. Great variability was found in the methodology employed to characterize sequential swallowing. Some findings are not comparable, sequential swallowing is not examined in most swallowing protocols, and there is no consensus on the influence of the utensil.
Signorelli, Mauro; Lissoni, Andrea Alberto; De Ponti, Elena; Grassi, Tommaso; Ponti, Serena
2015-01-01
Objective: Evaluation of the impact of sequential chemoradiotherapy in high risk endometrial cancer (EC). Methods: Two hundred fifty-four women with stage IB grade 3, II and III EC (2009 FIGO staging) were included in this retrospective study. Results: Stage I, II, and III was 24%, 28.7%, and 47.3%, respectively. Grade 3 tumor was 53.2% and 71.3% had deep myometrial invasion. One hundred sixty-five women (65%) underwent pelvic (+/- aortic) lymphadenectomy and 58 (22.8%) had nodal metastases. Ninety-eight women (38.6%) underwent radiotherapy, 59 (23.2%) chemotherapy, 42 (16.5%) sequential chemoradiotherapy, and 55 (21.7%) were only observed. After a median follow-up of 101 months, 78 women (30.7%) relapsed and 91 women (35.8%) died. Sequential chemoradiotherapy improved survival rates in women who did not undergo nodal evaluation (disease-free survival [DFS], p=0.040; overall survival [OS], p=0.024) or pelvic (+/- aortic) lymphadenectomy (DFS, p=0.008; OS, p=0.021). Sequential chemoradiotherapy improved both DFS (p=0.015) and OS (p=0.014) in stage III, while only a trend was found for DFS (p=0.210) and OS (p=0.102) in stage I-II EC. In the multivariate analysis, only age (≤65 years) and sequential chemoradiotherapy were statistically related to the prognosis. Conclusion: Sequential chemoradiotherapy improves survival rates in high risk EC compared with chemotherapy or radiotherapy alone, in particular in stage III. PMID:26197768
van der Hoeven, Niels V; Lodestijn, Sophie; Nanninga, Stephanie; van Montfrans, Gert A; van den Born, Bert-Jan H
2013-11-01
There are currently few recommendations on how to assess inter-arm blood pressure (BP) differences. The authors compared simultaneous with sequential measurement on mean BP, inter-arm BP differences, and within-visit reproducibility in 240 patients stratified according to age (<50 or ≥60 years) and BP (<140/90 mm Hg or ≥140/90 mm Hg). Three simultaneous and three sequential BP measurements were taken in each patient. Starting measurement type and starting arm for sequential measurements were randomized. Mean BP and inter-arm BP differences of the first pair and reproducibility of inter-arm BP differences of the first and second pair were compared between both methods. Mean systolic BP was 1.3±7.5 mm Hg lower during sequential compared with simultaneous measurement (P<.01). However, the first sequential measurement was on average higher than the second, suggesting an order effect. Absolute systolic inter-arm BP differences were smaller on simultaneous (6.2±6.7/3.3±3.5 mm Hg) compared with sequential BP measurement (7.8±7.3/4.6±5.6 mm Hg, P<.01 for both). Within-visit reproducibility was identical (both r=0.60). Simultaneous measurement of BP at both arms reduces order effects and results in smaller inter-arm BP differences, thereby potentially reducing unnecessary referral and diagnostic procedures. ©2013 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mock, D.M.; DuBois, D.B.
1986-03-01
Interest in accurate measurement of biotin concentrations in plasma and urine has been stimulated by recent advances in the understanding of biotin-responsive inborn errors of metabolism and by several reports describing acquired biotin deficiency during parenteral alimentation. This paper presents a biotin assay utilizing radiolabeled avidin in a sequential, solid-phase method; the assay has increased sensitivity compared to previous methods (greater than or equal to 10 fmol/tube), correlates with expected trends in biotin concentrations in blood and urine in a rat model of biotin deficiency, and can utilize commercially available radiolabeled avidin.
Facile Determination of Sodium Ion and Osmolarity in Artificial Tears by Sequential DNAzymes.
Kim, Eun Hye; Lee, Eun-Song; Lee, Dong Yun; Kim, Young-Pil
2017-12-07
Despite the high relevance of tear osmolarity to eye abnormalities, numerous methods for detecting tear osmolarity rely upon expensive osmometers. We report a reliable method for simply determining sodium ion-based osmolarity in artificial tears using sequential DNAzymes. When a sodium ion-specific DNAzyme and a peroxidase-like DNAzyme were used as the sensing and detecting probes, respectively, the concentration of Na⁺ in artificial tears could be measured by absorbance or fluorescence intensity, which was highly correlated with osmolarity over the diagnostic range (R² > 0.98). Our approach is useful for studying eye diseases in relation to osmolarity.
Sequential Superresolution Imaging of Multiple Targets Using a Single Fluorophore
Lidke, Diane S.; Lidke, Keith A.
2015-01-01
Fluorescence superresolution (SR) microscopy, or fluorescence nanoscopy, provides nanometer scale detail of cellular structures and allows for imaging of biological processes at the molecular level. Specific SR imaging methods, such as localization-based imaging, rely on stochastic transitions between on (fluorescent) and off (dark) states of fluorophores. Imaging multiple cellular structures using multi-color imaging is complicated and limited by the differing properties of various organic dyes including their fluorescent state duty cycle, photons per switching event, number of fluorescent cycles before irreversible photobleaching, and overall sensitivity to buffer conditions. In addition, multiple color imaging requires consideration of multiple optical paths or chromatic aberration that can lead to differential aberrations that are important at the nanometer scale. Here, we report a method for sequential labeling and imaging that allows for SR imaging of multiple targets using a single fluorophore with negligible cross-talk between images. Using brightfield image correlation to register and overlay multiple image acquisitions with ~10 nm overlay precision in the x-y imaging plane, we have exploited the optimal properties of AlexaFluor647 for dSTORM to image four distinct cellular proteins. We also visualize the changes in co-localization of the epidermal growth factor (EGF) receptor and clathrin upon EGF addition that are consistent with clathrin-mediated endocytosis. These results are the first to demonstrate sequential SR (s-SR) imaging using direct stochastic reconstruction microscopy (dSTORM), and this method for sequential imaging can be applied to any superresolution technique. PMID:25860558
Collaborative Filtering Based on Sequential Extraction of User-Item Clusters
NASA Astrophysics Data System (ADS)
Honda, Katsuhiro; Notsu, Akira; Ichihashi, Hidetomo
Collaborative filtering is a computational realization of “word-of-mouth” in a network community, in which the items preferred by “neighbors” are recommended. This paper proposes a new item-selection model for extracting user-item clusters from rectangular relation matrices, in which mutual relations between users and items are denoted in an alternative process of “liking or not”. A technique for sequential co-cluster extraction from rectangular relational data is given by combining the structural balancing-based user-item clustering method with a sequential fuzzy cluster extraction approach. The technique is then applied to the collaborative filtering problem, in which some items may be shared by several user clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shusharina, N; Khan, F; Sharp, G
Purpose: To determine the dose level and timing of the boost in locally advanced lung cancer patients with confirmed tumor recurrence by comparing the impact of different dose escalation strategies on the therapeutic ratio. Methods: We selected eighteen patients with advanced NSCLC and confirmed recurrence. For each patient, a base IMRT plan to 60 Gy prescribed to the PTV was created. We then compared three dose escalation strategies: a uniform escalation to the original PTV, and an escalation to a PET-defined target planned sequentially and concurrently. The PET-defined targets were delineated as biologically-weighed regions on a pre-treatment 18F-FDG PET. The maximal achievable dose, without violating the OAR constraints, was identified for each boosting method. The EUD for the target, spinal cord, combined lung, and esophagus was compared for each plan. Results: The average prescribed dose was 70.4±13.9 Gy for the uniform boost, 88.5±15.9 Gy for the sequential boost and 89.1±16.5 Gy for the concurrent boost. The size of the boost planning volume was 12.8% (range: 1.4 – 27.9%) of the PTV. The most prescription-limiting dose constraint was the V70 of the esophagus. The EUD within the target increased by 10.6 Gy for the uniform boost, by 31.4 Gy for the sequential boost and by 38.2 Gy for the concurrent boost. The EUD for the OARs increased by the following amounts: spinal cord, 3.1 Gy for the uniform boost, 2.8 Gy for the sequential boost, 5.8 Gy for the concurrent boost; combined lung, 1.6 Gy for uniform, 1.1 Gy for sequential, 2.8 Gy for concurrent; esophagus, 4.2 Gy for uniform, 1.3 Gy for sequential, 5.6 Gy for concurrent. Conclusion: Dose escalation to a biologically-weighed gross tumor volume defined on a pre-treatment 18F-FDG PET may provide an improved therapeutic ratio without breaching predefined OAR constraints. The sequential boost provides better sparing of OARs than the concurrent boost.
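The plans above are compared through the equivalent uniform dose. Below is a small sketch of a generalized EUD (gEUD) calculation of that kind; the voxel dose arrays and the volume-effect parameter a are illustrative assumptions, not plan data from the study.

```python
# gEUD = (mean(d_i^a))^(1/a); large positive a for serial organs, negative a for targets.
import numpy as np

def geud(dose_voxels, a):
    d = np.asarray(dose_voxels, dtype=float)
    return (np.mean(d ** a)) ** (1.0 / a)

# Hypothetical target dose distributions (Gy) for a uniform and a concurrent boost.
uniform_boost = np.full(1000, 70.4)
concurrent_boost = np.concatenate([np.full(870, 60.0), np.full(130, 89.1)])

for name, plan in [("uniform", uniform_boost), ("concurrent", concurrent_boost)]:
    print(name, "target gEUD:", round(geud(plan, a=-10), 1))   # a < 0 for targets
print("esophagus-like gEUD:", round(geud(np.linspace(0, 65, 500), a=8), 1))
```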
Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method
2015-01-05
rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an... (repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis, legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes...
Sequential design of discrete linear quadratic regulators via optimal root-locus techniques
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Yates, Robert E.; Ganesan, Sekar
1989-01-01
A sequential method employing classical root-locus techniques has been developed in order to determine the quadratic weighting matrices and discrete linear quadratic regulators of multivariable control systems. At each recursive step, an intermediate unity rank state-weighting matrix that contains some invariant eigenvectors of that open-loop matrix is assigned, and an intermediate characteristic equation of the closed-loop system containing the invariant eigenvalues is created.
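The sequential root-locus construction of the weighting matrices is not reproduced here; as a point of reference, the sketch below computes a standard discrete linear quadratic regulator gain by solving the discrete algebraic Riccati equation for an assumed plant and assumed weights.

```python
# Standard discrete LQR: K = (R + B' P B)^{-1} B' P A with P from the discrete ARE.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # simple double-integrator-like plant (assumed)
B = np.array([[0.005],
              [0.1]])
Q = np.diag([1.0, 0.1])             # state weighting (assumed)
R = np.array([[0.01]])              # control weighting (assumed)

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # optimal feedback gain
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print("LQR gain:", K)
print("closed-loop eigenvalues:", closed_loop_eigs)  # all inside the unit circle
```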
Challenges in predicting climate change impacts on pome fruit phenology
NASA Astrophysics Data System (ADS)
Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E. W. R.
2014-08-01
Climate projection data were applied to two commonly used pome fruit flowering models to investigate potential differences in predicted full bloom timing. The two methods, fixed thermal time and sequential chill-growth, produced different results for seven apple and pear varieties at two Australian locations. The fixed thermal time model predicted incremental advancement of full bloom, while results were mixed from the sequential chill-growth model. To further investigate how the sequential chill-growth model reacts under climate perturbed conditions, four simulations were created to represent a wider range of species physiological requirements. These were applied to five Australian locations covering varied climates. Lengthening of the chill period and contraction of the growth period was common to most results. The relative dominance of the chill or growth component tended to predict whether full bloom advanced, remained similar or was delayed with climate warming. The simplistic structure of the fixed thermal time model and the exclusion of winter chill conditions in this method indicate it is unlikely to be suitable for projection analyses. The sequential chill-growth model includes greater complexity; however, reservations in using this model for impact analyses remain. The results demonstrate that appropriate representation of physiological processes is essential to adequately predict changes to full bloom under climate perturbed conditions with greater model development needed.
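As an illustration of the simpler of the two models discussed above, here is a minimal fixed thermal time (growing degree day) sketch for predicting full bloom; the base temperature, thermal target, start date, and temperature series are all assumptions, not the calibrated values used for the Australian varieties.

```python
# Accumulate degree days above a base temperature until a fixed thermal target is reached.
import numpy as np

rng = np.random.default_rng(3)
# Synthetic daily mean temperatures (degrees C) for 150 days after an assumed start date.
daily_mean_temp = 10 + 8 * np.sin(np.linspace(-1.5, 1.5, 150)) + rng.normal(0, 2, 150)

T_BASE, GDD_TARGET = 4.0, 250.0        # base temperature and degree-day target (assumed)
gdd = np.cumsum(np.clip(daily_mean_temp - T_BASE, 0, None))
bloom_day = int(np.argmax(gdd >= GDD_TARGET))   # first day the target is met
print("predicted full bloom on day", bloom_day, "after the fixed start date")
```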
TDRSS-user orbit determination using batch least-squares and sequential methods
NASA Astrophysics Data System (ADS)
Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, Mina V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.
1993-02-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), and operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the January 17-23, 1991, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were less than 40 meters after the filter had reached steady state.
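A toy illustration of the batch versus sequential distinction discussed above, on a one-dimensional constant-velocity track rather than the actual GTDS or RTOD/E orbit determination systems: a batch least-squares fit over all observations alongside a sequential (Kalman-type) filter that processes them one at a time.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0.0, 50.0, 1.0)
x_true = 2.0 + 0.8 * t                          # true position: x0 + v*t
z = x_true + rng.normal(0, 1.0, t.size)         # noisy range-like measurements

# Batch least squares: solve for [x0, v] using every observation at once.
H = np.column_stack([np.ones_like(t), t])
x0_b, v_b = np.linalg.lstsq(H, z, rcond=None)[0]

# Sequential estimate of the same two parameters, one measurement at a time.
x_hat = np.array([0.0, 0.0])
P = np.eye(2) * 100.0
R = 1.0
for ti, zi in zip(t, z):
    h = np.array([1.0, ti])
    K = P @ h / (h @ P @ h + R)
    x_hat = x_hat + K * (zi - h @ x_hat)
    P = (np.eye(2) - np.outer(K, h)) @ P

print("batch estimate     :", x0_b, v_b)
print("sequential estimate:", x_hat)   # converges to the batch solution for this linear case
```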
Accurately controlled sequential self-folding structures by polystyrene film
NASA Astrophysics Data System (ADS)
Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse
2017-08-01
Four-dimensional (4D) printing overcomes the traditional fabrication limitations by designing heterogeneous materials to enable the printed structures evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing of self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature pre-strained polystyrene films shrink along the XY plane. In our process silver ink traces printed on the film are used to provide heat stimuli by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and angle lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are achieved to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way by using silver ink printed on polystyrene films to 4D print self-folding structures for electrically induced sequential folding with angular control.
Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.
1991-01-01
The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview is presented of RTOD/E capabilities and the results are presented of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOS/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and the Goddard Trajectory Determination System (GTDS) was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.
Reuse of imputed data in microarray analysis increases imputation efficiency
Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su
2004-01-01
Background: The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analyses require a complete data set. A few imputation methods for DNA microarray data have been introduced, but their efficiency was low and the validity of the imputed values had not been fully checked. Results: We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. It imputes the missing values sequentially, starting from the gene having the fewest missing values, and uses the imputed values for the later imputation. Although it uses imputed values, the new method is greatly improved in accuracy and computational complexity over the conventional KNN-based method and other methods based on maximum likelihood estimation. The performance of SKNN was particularly high relative to other imputation methods for data with high missing rates and a large number of experiments. Applying Expectation Maximization (EM) to the SKNN method improved the accuracy, but increased the computational time in proportion to the number of iterations. The Multiple Imputation (MI) method, which is well known but had not previously been applied to microarray data, showed similarly high accuracy to the SKNN method, with a slightly higher dependency on the type of data set. Conclusions: Sequential reuse of imputed data in KNN-based imputation greatly increases the efficiency of imputation. The SKNN method should be practically useful for saving the data of microarray experiments that have high amounts of missing entries. The SKNN method generates reliable imputed values which can be used for further cluster-based analysis of microarray data. PMID:15504240
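A hedged sketch of the sequential reuse idea described above: genes are imputed in order of increasing missingness, and already-imputed genes are allowed to act as neighbors for later genes. The neighbor count, unweighted averaging, and synthetic data are simplifications of the published SKNN procedure.

```python
import numpy as np

def sknn_impute(X, k=3):
    """X: genes x experiments with np.nan marking missing entries."""
    X = X.copy()
    order = np.argsort(np.isnan(X).sum(axis=1))       # fewest missing values first
    complete = [g for g in order if not np.isnan(X[g]).any()]
    for g in order:
        miss = np.isnan(X[g])
        if not miss.any():
            continue
        obs = ~miss
        # Distance to candidate neighbor genes over the observed experiments only.
        d = np.array([np.linalg.norm(X[g, obs] - X[n, obs]) for n in complete])
        nbrs = [complete[i] for i in np.argsort(d)[:k]]
        X[g, miss] = np.mean([X[n, miss] for n in nbrs], axis=0)
        complete.append(g)                            # reuse the imputed gene later
    return X

rng = np.random.default_rng(0)
data = rng.normal(size=(50, 8))
data[rng.random(data.shape) < 0.1] = np.nan           # ~10% missing entries
print(np.isnan(sknn_impute(data)).sum(), "missing values remain")
```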
Interpreting and Reporting Radiological Water-Quality Data
McCurdy, David E.; Garbarino, John R.; Mullin, Ann H.
2008-01-01
This document provides information to U.S. Geological Survey (USGS) Water Science Centers on interpreting and reporting radiological results for samples of environmental matrices, most notably water. The information provided is intended to be broadly useful throughout the United States, but it is recommended that scientists who work at sites containing radioactive hazardous wastes need to consult additional sources for more detailed information. The document is largely based on recognized national standards and guidance documents for radioanalytical sample processing, most notably the Multi-Agency Radiological Laboratory Analytical Protocols Manual (MARLAP), and on documents published by the U.S. Environmental Protection Agency and the American National Standards Institute. It does not include discussion of standard USGS practices including field quality-control sample analysis, interpretive report policies, and related issues, all of which shall always be included in any effort by the Water Science Centers. The use of 'shall' in this report signifies a policy requirement of the USGS Office of Water Quality.
A new bomb-combustion system for tritium extraction.
Marsh, Richard I; Croudace, Ian W; Warwick, Phillip E; Cooper, Natasha; St-Amant, Nadereh
2017-01-01
Quantitative extraction of tritium from a sample matrix is critical to efficient measurement of this low-energy pure beta emitter. Oxidative pyrolysis using a tube furnace (Pyrolyser) has been adopted as an industry-standard approach for the liberation of tritium (Warwick et al. in Anal Chim Acta 676:93-102, 2010); however, pyrolysis of organic-rich materials can be problematic. In practice, the mass of organic-rich sample combusted is typically limited to <1 g to minimise the possibility of incomplete combustion. This can have an impact both on the limit of detection that can be achieved and on how representative the subsample is of the bulk material, particularly in the case of heterogeneous soft waste. Raddec International Ltd (Southampton, UK), in conjunction with GAU-Radioanalytical, has developed a new high-capacity oxygen combustion bomb (the Hyperbaric Oxidiser; HBO2) to address this challenge. The system is capable of quantitatively combusting samples of 20-30 g under an excess of oxygen, facilitating rapid extraction of total tritium from a wide range of sample types.
Analysis of filter tuning techniques for sequential orbit determination
NASA Technical Reports Server (NTRS)
Lee, T.; Yee, C.; Oza, D.
1995-01-01
This paper examines filter tuning techniques for a sequential orbit determination (OD) covariance analysis. Recently, there has been a renewed interest in sequential OD, primarily due to the successful flight qualification of the Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) using Doppler data extracted onboard the Extreme Ultraviolet Explorer (EUVE) spacecraft. TONS computes highly accurate orbit solutions onboard the spacecraft in realtime using a sequential filter. As the result of the successful TONS-EUVE flight qualification experiment, the Earth Observing System (EOS) AM-1 Project has selected TONS as the prime navigation system. In addition, sequential OD methods can be used successfully for ground OD. Whether data are processed onboard or on the ground, a sequential OD procedure is generally favored over a batch technique when a realtime automated OD system is desired. Recently, OD covariance analyses were performed for the TONS-EUVE and TONS-EOS missions using the sequential processing options of the Orbit Determination Error Analysis System (ODEAS). ODEAS is the primary covariance analysis system used by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). The results of these analyses revealed a high sensitivity of the OD solutions to the state process noise filter tuning parameters. The covariance analysis results show that the state estimate error contributions from measurement-related error sources, especially those due to the random noise and satellite-to-satellite ionospheric refraction correction errors, increase rapidly as the state process noise increases. These results prompted an in-depth investigation of the role of the filter tuning parameters in sequential OD covariance analysis. This paper analyzes how the spacecraft state estimate errors due to dynamic and measurement-related error sources are affected by the process noise level used. This information is then used to establish guidelines for determining optimal filter tuning parameters in a given sequential OD scenario for both covariance analysis and actual OD. Comparisons are also made with corresponding definitive OD results available from the TONS-EUVE analysis.
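To make the tuning sensitivity concrete, the sketch below sweeps the process noise level of a small constant-velocity Kalman filter and reports the steady-state position error; the dynamics, noise levels, and sweep values are toy assumptions, not the TONS or ODEAS configuration.

```python
import numpy as np

def run_filter(q, rng, n=400):
    dt, r = 1.0, 2.0
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])  # process noise model
    x_true = np.array([0.0, 1.0])
    x_hat, P = np.zeros(2), np.eye(2) * 10.0
    errs = []
    for _ in range(n):
        x_true = F @ x_true + rng.normal(0, 0.1, 2)              # mild real disturbance
        z = H @ x_true + rng.normal(0, np.sqrt(r))
        x_hat, P = F @ x_hat, F @ P @ F.T + Q                    # predict
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x_hat = x_hat + (K * (z - H @ x_hat)).ravel()            # update
        P = (np.eye(2) - K @ H) @ P
        errs.append(abs(x_true[0] - x_hat[0]))
    return np.mean(errs[100:])                                   # steady-state error

rng = np.random.default_rng(5)
for q in (1e-6, 1e-3, 1e-1, 1.0):
    print(f"process noise q = {q:g}: mean position error = {run_filter(q, rng):.3f}")
```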
Simulations of 6-DOF Motion with a Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)
2003-01-01
Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed and compared with flight-test telemetry and photographic-derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.
Ultra-Low Level Plutonium Isotopes in the NIST SRM 4355A (Peruvian Soil-1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inn, Kenneth G.; LaRosa, Jerome; Nour, Svetlana
2009-05-31
For more than 20 years, countries and their agencies which monitor discharge sites and storage facilities have relied on the National Institute of Standards and Technology (NIST) Standard Reference Material (SRM) 4355 Peruvian Soil reference material. Its low fallout contamination makes it an ideal soil blank for measurements associated with terrestrial pathway-to-man studies. Presently, SRM 4355 is out of stock, and a new batch of the Peruvian soil is currently under development as the future NIST SRM 4355A. Both the environmental radioanalytical and mass spectrometry communities will benefit from this SRM. The former must assess their laboratory contamination and measurement detection limits by measurement of blank sample material; the Peruvian Soil is so low in anthropogenic radionuclides that it serves as a virtual blank. Mass spectrometric laboratories, on the other hand, have high-sensitivity instruments capable of quantitative isotopic measurements at the low plutonium levels of SRM 4355 (the first Peruvian Soil SRM), which provided the mass spectrometric community with the calibration, quality control, and testing material needed for methods development and legal defensibility. The quantification of the ultra-low plutonium content in SRM 4355A was a considerable challenge for the mass spectrometric laboratories. Careful blank control and correction, handling of isobaric interferences, instrument stability, peak assessment, and detection assessment were necessary. Furthermore, a systematic statistical evaluation of the measurement results and considerable discussion with the mass spectrometry metrologists were needed to derive the certified values and uncertainties. SRM 4355A will provide the mass spectrometric community with the quality control and testing material needed for higher-sensitivity methods development and legal defensibility.
dos Santos, Luciana B O; Infante, Carlos M C; Masini, Jorge C
2010-03-01
This work describes the development and optimization of a sequential injection method to automate the determination of paraquat by square-wave voltammetry employing a hanging mercury drop electrode. Automation by sequential injection enhanced the sampling throughput, improving the sensitivity and precision of the measurements as a consequence of the highly reproducible and efficient conditions of mass transport of the analyte toward the electrode surface. For instance, 212 analyses can be made per hour if the sample/standard solution is prepared off-line and the sequential injection system is used just to inject the solution towards the flow cell. In-line sample conditioning reduces the sampling frequency to 44 h(-1). Experiments were performed in 0.10 M NaCl, which was the carrier solution, using a frequency of 200 Hz, a pulse height of 25 mV, a potential step of 2 mV, and a flow rate of 100 µL s(-1). For a concentration range between 0.010 and 0.25 mg L(-1), the current (i(p), µA) read at the potential corresponding to the peak maximum fitted the following linear equation with the paraquat concentration (mg L(-1)): i(p) = (-20.5 ± 0.3)C (paraquat) - (0.02 ± 0.03). The limits of detection and quantification were 2.0 and 7.0 µg L(-1), respectively. The accuracy of the method was evaluated by recovery studies using spiked water samples that were also analyzed by molecular absorption spectrophotometry after reduction of paraquat with sodium dithionite in an alkaline medium. No evidence of statistically significant differences between the two methods was observed at the 95% confidence level.
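As a small worked example of the calibration reported above, the sketch below inverts the linear equation to turn a measured peak current into a paraquat concentration; the measured current value is invented for illustration.

```python
# Invert i(p) = slope * C + intercept to recover the concentration C (mg/L).
slope = -20.5          # uA per (mg/L), from the calibration given above
intercept = -0.02      # uA

def paraquat_conc(i_peak_uA):
    return (i_peak_uA - intercept) / slope

measured_current = -2.5                      # uA (hypothetical reading)
c = paraquat_conc(measured_current)
print(f"estimated paraquat concentration: {c:.3f} mg/L")
print("above the quantification limit (7.0 ug/L):", c > 0.007)
```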
Shi, Ruijia; Xu, Cunshuan
2011-06-01
The study of rat proteins is an indispensable task in experimental medicine and drug development. The function of a rat protein is closely related to its subcellular location. Based on this concept, we construct a benchmark rat protein dataset and develop a combined approach for predicting the subcellular localization of rat proteins. From the protein primary sequence, multiple sequential features are obtained by means of discrete Fourier analysis, a position conservation scoring function and the increment of diversity, and these sequential features are used as input parameters of a support vector machine. By the jackknife test, the overall success rate of prediction is 95.6% on the rat protein dataset. When our method is applied to the apoptosis protein dataset and the Gram-negative bacterial protein dataset with the jackknife test, the overall success rates are 89.9% and 96.4%, respectively. These results indicate that our proposed method is quite promising and may play a complementary role to the existing predictors in this area.
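A hedged sketch of one ingredient of the approach above: turning a residue sequence into fixed-length features via a discrete Fourier transform of a numeric encoding and feeding them to a support vector machine. The amino-acid scale, padding length, and toy labels are assumptions, and the position conservation and increment-of-diversity features are omitted.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical hydrophobicity-like scale over the 20 standard amino acids.
SCALE = {aa: v for aa, v in zip("ACDEFGHIKLMNPQRSTVWY", np.linspace(-1, 1, 20))}

def fourier_features(seq, n_coeff=10):
    x = np.array([SCALE.get(a, 0.0) for a in seq])
    spec = np.abs(np.fft.rfft(x, n=256))          # zero-pad to a common length
    return spec[:n_coeff] / (np.linalg.norm(spec[:n_coeff]) + 1e-9)

rng = np.random.default_rng(0)
aas = list("ACDEFGHIKLMNPQRSTVWY")
seqs = ["".join(rng.choice(aas, size=rng.integers(80, 200))) for _ in range(60)]
labels = rng.integers(0, 2, size=60)              # placeholder localization labels

X = np.vstack([fourier_features(s) for s in seqs])
clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
print("training accuracy on the toy set:", clf.score(X, labels))
```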
Sequential structural and optical evolution of MoS2 by chemical synthesis and exfoliation
NASA Astrophysics Data System (ADS)
Kim, Ju Hwan; Kim, Jungkil; Oh, Si Duck; Kim, Sung; Choi, Suk-Ho
2015-06-01
Various types of MoS2 structures were successfully obtained by using economical and facile sequential synthesis and exfoliation methods. Spherically-shaped lumps of multilayer (ML) MoS2 were prepared by using a conventional hydrothermal method and were subsequently first exfoliated in hydrazine, while being kept in an autoclave, to be unrolled and separated into five-to-six-layer MoS2 pieces of several hundred nm in size. The MoS2 MLs were then exfoliated a second time in sodium naphthalenide under an Ar ambient to finally produce bilayer MoS2 crystals of ~100 nm. The sequential exfoliation processes downsize MoS2 laterally and reduce its number of layers. The three types of MoS2 allotropes exhibit particular optical properties corresponding to their structural differences. These results suggest that two-dimensional MoS2 crystals can be prepared by employing only chemical techniques, without starting from high-pressure-synthesized bulk MoS2 crystals.
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also used for performance evaluation.
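A minimal bootstrap particle filter sketch for a bearings-only problem, assuming a fixed observer, no clutter, and a target that is always present; the paper's track initiation logic and data association model are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T, dt, sigma_b = 40, 1.0, 0.05                    # steps, step size, bearing noise (rad)
x_true = np.array([50.0, 30.0, -0.5, 0.3])        # px, py, vx, vy (synthetic target)
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])

N = 2000
particles = np.column_stack([rng.uniform(20, 80, N), rng.uniform(10, 60, N),
                             rng.normal(0, 1, (N, 2))])
weights = np.full(N, 1.0 / N)

for _ in range(T):
    x_true = F @ x_true
    z = np.arctan2(x_true[1], x_true[0]) + rng.normal(0, sigma_b)  # bearing from origin
    particles = particles @ F.T + rng.normal(0, 0.05, particles.shape)  # propagate
    pred = np.arctan2(particles[:, 1], particles[:, 0])
    weights *= np.exp(-0.5 * ((z - pred) / sigma_b) ** 2)          # reweight by likelihood
    weights /= weights.sum()
    idx = rng.choice(N, N, p=weights)             # resample every step (simplification)
    particles, weights = particles[idx], np.full(N, 1.0 / N)

est = particles[:, :2].mean(axis=0)
print("true position:", x_true[:2].round(2))
print("particle mean:", est.round(2))   # bearing is matched; range is weakly observable
                                        # from a fixed observer without a manoeuvre
```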
Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process
NASA Astrophysics Data System (ADS)
Nakanishi, W.; Fuse, T.; Ishikawa, T.
2015-05-01
This paper aims at estimating the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations these parameters may change according to the observation conditions and the difficulty of predicting the person's position. Thus, in this paper we formulate an adaptive parameter estimation using a general state space model. First we explain how to formulate human tracking in a general state space model with its components. Then, referring to previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. Results showed that the sequential parameter estimation was successful and the estimates were consistent with observation conditions such as occlusions.
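A small sketch of the Bhattacharyya coefficient that the observation model above is built around, computed between two normalized appearance histograms; the histogram data are synthetic stand-ins for appearance templates of a tracked person.

```python
import numpy as np

def bhattacharyya(p, q):
    p = p / p.sum()
    q = q / q.sum()
    return np.sum(np.sqrt(p * q))          # 1.0 for identical distributions

rng = np.random.default_rng(2)
template = rng.random(32)                  # appearance histogram of the tracked person
same_person = np.clip(template + rng.normal(0, 0.02, 32), 0, None)
other_person = rng.random(32)

print("same person  :", round(bhattacharyya(template, same_person), 3))
print("other person :", round(bhattacharyya(template, other_person), 3))
```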
Phan, Duy The; Tan, Chung-Sung
2014-09-01
An innovative method for the pretreatment of sugarcane bagasse using a sequential combination of supercritical CO2 (scCO2) and alkaline hydrogen peroxide (H2O2) at mild conditions is proposed. This method was found to be superior to individual pretreatment with scCO2, ultrasound, or H2O2 and to the sequential combination of scCO2 and ultrasound with regard to the yield of cellulose and hemicellulose; almost twice the yield was observed. Pretreatment with scCO2 alone yielded a higher amount of cellulose and hemicellulose but also of acid-insoluble lignin. Pretreatment with ultrasound or H2O2 could partly depolymerize lignin but could not separate cellulose from lignin. The analysis of the liquid products of enzymatic hydrolysis by HPLC and the characterization of the solid residues by SEM revealed strong synergistic effects in the sequential combination of scCO2 and H2O2. Copyright © 2014 Elsevier Ltd. All rights reserved.
Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.
Bonawitz, Elizabeth; Denison, Stephanie; Gopnik, Alison; Griffiths, Thomas L
2014-11-01
People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
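A minimal sketch of the Win-Stay, Lose-Sample idea for a discrete hypothesis space is given below: the learner keeps its current hypothesis whenever the new observation is (probabilistically) judged consistent with it, and otherwise samples a fresh hypothesis from the current posterior. The stay-or-lose test used here (a coin flip weighted by the hypothesis' likelihood of the datum) is an illustrative assumption, not the authors' exact specification.

```python
import numpy as np

def win_stay_lose_sample(data, hypotheses, prior, likelihood, rng=None):
    """Track one working hypothesis while updating the full posterior.
    `likelihood(h, x)` should return P(x | h) in [0, 1]; the learner resamples
    from the current posterior only when the held hypothesis 'loses' on x."""
    rng = rng if rng is not None else np.random.default_rng()
    posterior = np.array(prior, dtype=float)
    posterior /= posterior.sum()
    current = rng.choice(len(hypotheses), p=posterior)
    history = []
    for x in data:
        # Bayesian update of the full posterior (used only for resampling).
        posterior *= np.array([likelihood(h, x) for h in hypotheses])
        posterior /= posterior.sum()
        # Lose-Sample: if the current hypothesis fails to predict x, resample.
        if rng.random() > likelihood(hypotheses[current], x):
            current = rng.choice(len(hypotheses), p=posterior)
        history.append(hypotheses[current])
    return history
```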
Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida
2016-01-01
This study involves adaptation of bulk or sequential technique to load multiple flavonoids in a single phytosome, which can be termed as “flavonosome”. Three widely established and therapeutically valuable flavonoids, such as quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract and were commercially obtained and incorporated in a single flavonosome (QKA–phosphatidylcholine) through four different methods of synthesis – bulk (M1) and serialized (M2) co-sonication and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method based on screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug–carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity evaluation against human hepatoma cell line (HepaRG). Furthermore, entrapment and loading efficiency of flavonoids in the optimal flavonosome have been identified. Among the four synthesis methods, sequential loading technique has been optimized as the best method for the synthesis of QKA–phosphatidylcholine flavonosome, which revealed an average diameter of 375.93±33.61 nm, with a zeta potential of −39.07±3.55 mV, and the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depicts the release kinetic behavior of the flavonoids from the carrier. The QKA-loaded flavonosome had no indication of toxicity toward human hepatoma cell line as shown by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide result, wherein even at the higher concentration of 200 µg/mL, the flavonosomes exert >85% of cell viability. These results suggest that sequential loading technique may be a promising nanodrug delivery system for loading multiflavonoids in a single entity with sustained activity as an antioxidant, hepatoprotective, and hepatosupplement candidate. PMID:27555765
A fast and accurate online sequential learning algorithm for feedforward networks.
Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N
2006-11-01
In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm builds on the ideas of the ELM of Huang et al., developed for batch learning, which has been shown to be extremely fast while achieving better generalization performance than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from the regression, classification, and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
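The core of OS-ELM is a recursive least-squares update of the output weights as data arrive; the sketch below shows this for sigmoid additive hidden nodes. The class layout, the uniform initialization ranges, and the requirement that the boot block contain at least as many samples as hidden nodes are illustrative assumptions consistent with, but not copied from, the published algorithm.

```python
import numpy as np

class OSELM:
    """Online Sequential Extreme Learning Machine (sketch, sigmoid additive nodes)."""
    def __init__(self, n_inputs, n_hidden, n_outputs, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.W = rng.uniform(-1, 1, (n_inputs, n_hidden))   # random input weights (fixed)
        self.b = rng.uniform(-1, 1, n_hidden)                # random biases (fixed)
        self.beta = np.zeros((n_hidden, n_outputs))          # output weights (learned)
        self.P = None                                        # RLS inverse-covariance term

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid hidden activations

    def init_phase(self, X0, T0):
        """Boot phase: batch solve on an initial block with >= n_hidden samples."""
        H = self._hidden(X0)
        self.P = np.linalg.inv(H.T @ H)
        self.beta = self.P @ H.T @ T0

    def partial_fit(self, X, T):
        """Sequential phase: recursive least-squares update for one new chunk."""
        H = self._hidden(X)
        K = np.eye(H.shape[0]) + H @ self.P @ H.T
        self.P -= self.P @ H.T @ np.linalg.solve(K, H @ self.P)
        self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```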
Cherry, Kevin M; Peplinski, Brandon; Kim, Lauren; Wang, Shijun; Lu, Le; Zhang, Weidong; Liu, Jianfei; Wei, Zhuoshi; Summers, Ronald M
2015-01-01
Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filtering tracking) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making, which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (2.7% for the baseline versus 75.2%, p < 0.001). This method also showed statistically significantly improved recall rate compared to a 2-cue baseline method using fewer vessel cues (30.7% for the baseline versus 67.7%, p < 0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse. Published by Elsevier B.V.
Method for sequentially processing a multi-level interconnect circuit in a vacuum chamber
NASA Technical Reports Server (NTRS)
Routh, D. E.; Sharma, G. C. (Inventor)
1982-01-01
The processing of wafer devices to form multilevel interconnects for microelectronic circuits is described. The method is directed to performing the sequential steps of etching the via, removing the photoresist pattern, back sputtering the entire wafer surface, and depositing the next layer of interconnect material under common vacuum conditions without exposure to atmospheric conditions. Apparatus for performing the method includes a vacuum system having a vacuum chamber in which wafers are processed on rotating turntables. The vacuum chamber is provided with an RF sputtering system and a DC magnetron sputtering system. A gas inlet is provided in the chamber for the introduction of various gases to the vacuum chamber and the creation of various gas plasmas during the sputtering steps.
Learning style and teaching method preferences of Saudi students of physical therapy
Al Maghraby, Mohamed A.; Alshami, Ali M.
2013-01-01
Context: To the researchers’ knowledge, there are no published studies that have investigated the learning styles and preferred teaching methods of physical therapy students in Saudi Arabia. Aim: The study was conducted to determine the learning styles and preferred teaching methods of Saudi physical therapy students. Settings and Design: A cross-sectional study design. Materials and Methods: Fifty-three Saudis studying physical therapy (21 males and 32 females) participated in the study. The principal researcher gave an introductory lecture to explain the different learning styles and common teaching methods. Upon completion of the lecture, questionnaires were distributed, and were collected on completion. Statistical Analysis Used: Percentages were calculated for the learning styles and teaching methods. Pearson’s correlations were performed to investigate the relationship between them. Results: More than 45 (85%) of the students rated hands-on training as the most preferred teaching method. Approximately 30 (57%) students rated the following teaching methods as the most preferred methods: “Advanced organizers,” “demonstrations,” and “multimedia activities.” Although 31 (59%) students rated the concrete-sequential learning style the most preferred, these students demonstrated mixed styles on the other style dimensions: Abstract-sequential, abstract-random, and concrete-random. Conclusions: The predominant concrete-sequential learning style is consistent with the most preferred teaching method (hands-on training). The high percentage of physical therapy students whose responses were indicative of mixed learning styles suggests that they can accommodate multiple teaching methods. It is recommended that educators consider the diverse learning styles of the students and utilize a variety of teaching methods in order to promote an optimal learning environment for the students. PMID:24672278
Bahnasy, Mahmoud F; Lucy, Charles A
2012-12-07
A sequential surfactant bilayer/diblock copolymer coating was previously developed for the separation of proteins. The coating is formed by flushing the capillary with the cationic surfactant dioctadecyldimethylammonium bromide (DODAB) followed by the neutral polymer poly-oxyethylene (POE) stearate. Herein we describe the method development and optimization of capillary isoelectric focusing (cIEF) separations based on the developed sequential coating. Electroosmotic flow can be tuned by varying the POE chain length, which allows optimization of resolution and analysis time. DODAB/POE 40 stearate can be used to perform single-step cIEF, while both DODAB/POE 40 and DODAB/POE 100 stearate allow two-step cIEF methodologies. A set of peptide markers is used to assess the coating performance. The sequential coating has been applied successfully to cIEF separations using different capillary lengths and inner diameters. A linear pH gradient is established only in the two-step cIEF methodology using 2.5% (v/v) pH 3-10 carrier ampholyte. Hemoglobin A(0) and S variants are successfully resolved on DODAB/POE 40 stearate sequentially coated capillaries. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nystrøm, G. M.; Ottosen, L. M.; Villumsen, A.
2003-05-01
In this work, sequential extraction is performed on harbour sediment in order to evaluate the electrodialytic remediation potential of harbour sediments. Sequential extraction was performed on a sample of Norwegian harbour sediment, both on the original sediment and after the sediment was treated with acid. The results of the sequential extraction show that 75% of Zn and Pb and about 50% of Cu are found in the most mobile phases in the original sediment, and more than 90% of Zn and Pb and 75% of Cu are found in the most mobile phases in the acid-treated sediment. Electrodialytic remediation experiments were also performed. The method uses a low direct current as the cleaning agent, removing the heavy metals towards the anode or cathode according to their charge in the electric field. The electrodialytic experiments show that up to 50% Cu, 85% Zn and 60% Pb can be removed after 20 days. There is thus potential for higher removal with some changes in the experimental set-up and a longer remediation time. The experiments show that sequential extraction can be used to predict the electrodialytic remediation potential of harbour sediments.
Tai, Yiping; McBride, Murray B; Li, Zhian
2013-03-30
In the present study, we evaluated a commonly employed modified Bureau Communautaire de Référence (BCR test) 3-step sequential extraction procedure for its ability to distinguish forms of solid-phase Pb in soils with different sources and histories of contamination. When the modified BCR test was applied to mineral soils spiked with three forms of Pb (pyromorphite, hydrocerussite and nitrate salt), the added Pb was highly susceptible to dissolution in the operationally-defined "reducible" or "oxide" fraction regardless of form. When three different materials (mineral soil, organic soil and goethite) were spiked with soluble Pb nitrate, the BCR sequential extraction profiles revealed that soil organic matter was capable of retaining Pb in more stable and acid-resistant forms than silicate clay minerals or goethite. However, the BCR sequential extraction for field-collected soils with known and different sources of Pb contamination was not sufficiently discriminatory in the dissolution of soil Pb phases to allow soil Pb forms to be "fingerprinted" by this method. It is concluded that standard sequential extraction procedures are probably not very useful in predicting lability and bioavailability of Pb in contaminated soils. Copyright © 2013 Elsevier B.V. All rights reserved.
Learning Sequential Composition Control.
Najafi, Esmaeil; Babuska, Robert; Lopes, Gabriel A D
2016-11-01
Sequential composition is an effective supervisory control method for addressing control problems in nonlinear dynamical systems. It executes a set of controllers sequentially to achieve a control specification that cannot be realized by a single controller. As these controllers are designed offline, sequential composition cannot address unmodeled situations that might occur during runtime. This paper proposes a learning approach to augment the standard sequential composition framework by using online learning to handle unforeseen situations. New controllers are acquired via learning and added to the existing supervisory control structure. In the proposed setting, learning experiments are restricted to take place within the domain of attraction (DOA) of the existing controllers. This guarantees that the learning process is safe (i.e., the closed loop system is always stable). In addition, the DOA of the new learned controller is approximated after each learning trial. This keeps the learning process short as learning is terminated as soon as the DOA of the learned controller is sufficiently large. The proposed approach has been implemented on two nonlinear systems: 1) a nonlinear mass-damper system and 2) an inverted pendulum. The results show that in both cases a new controller can be rapidly learned and added to the supervisory control structure.
Harari, Colin M.; Magagna, Michelle; Bedoya, Mariajose; Lee, Fred T.; Lubner, Meghan G.; Hinshaw, J. Louis; Ziemlewicz, Timothy
2016-01-01
Purpose To compare microwave ablation zones created by using sequential or simultaneous power delivery in ex vivo and in vivo liver tissue. Materials and Methods All procedures were approved by the institutional animal care and use committee. Microwave ablations were performed in both ex vivo and in vivo liver models with a 2.45-GHz system capable of powering up to three antennas simultaneously. Two- and three-antenna arrays were evaluated in each model. Sequential and simultaneous ablations were created by delivering power (50 W ex vivo, 65 W in vivo) for 5 minutes per antenna (10 and 15 minutes total ablation time for sequential ablations, 5 minutes for simultaneous ablations). Thirty-two ablations were performed in ex vivo bovine livers (eight per group) and 28 in the livers of eight swine in vivo (seven per group). Ablation zone size and circularity metrics were determined from ablations excised postmortem. Mixed effects modeling was used to evaluate the influence of power delivery, number of antennas, and tissue type. Results On average, ablations created by using the simultaneous power delivery technique were larger than those with the sequential technique (P < .05). Simultaneous ablations were also more circular than sequential ablations (P = .0001). Larger and more circular ablations were achieved with three antennas compared with two antennas (P < .05). Ablations were generally smaller in vivo compared with ex vivo. Conclusion The use of multiple antennas and simultaneous power delivery creates larger, more confluent ablations with greater temperatures than those created with sequential power delivery. © RSNA, 2015 PMID:26133361
Radiation detection method and system using the sequential probability ratio test
Nelson, Karl E [Livermore, CA]; Valentine, John D [Redwood City, CA]; Beauchamp, Brock R [San Ramon, CA]
2007-07-17
A method and system using the Sequential Probability Ratio Test (SPRT) to enhance the detection of an elevated level of radiation by determining whether a set of observations is consistent with a specified model within given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
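A minimal illustration of the SPRT decision rule for count data follows: the log-likelihood ratio between a background-rate hypothesis and an elevated-rate hypothesis is accumulated over successive samples and compared against two thresholds derived from the target error rates. The Poisson counting model and the particular error rates are assumptions chosen for illustration, not parameters of the patented system.

```python
import math

def sprt_radiation(counts, bkg_rate, elevated_rate, alpha=0.001, beta=0.01):
    """Sequential Probability Ratio Test on a stream of Poisson counts per interval.
    Returns ('alarm' | 'background' | 'undecided', number_of_samples_used)."""
    upper = math.log((1 - beta) / alpha)     # cross upward  -> accept elevated source
    lower = math.log(beta / (1 - alpha))     # cross downward -> accept background only
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        # Log-likelihood ratio increment for one Poisson observation k.
        llr += k * math.log(elevated_rate / bkg_rate) - (elevated_rate - bkg_rate)
        if llr >= upper:
            return "alarm", n
        if llr <= lower:
            return "background", n
    return "undecided", len(counts)

# Example with illustrative rates: background of 5 counts/s versus a source adding 3 counts/s.
print(sprt_radiation([6, 9, 8, 11, 7, 10], bkg_rate=5.0, elevated_rate=8.0, alpha=0.01, beta=0.05))
```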
Che, W W; Frey, H Christopher; Lau, Alexis K H
2016-08-16
A sequential measurement method is demonstrated for quantifying the variability in exposure concentration during public transportation. This method was applied in Hong Kong by measuring PM2.5 and CO concentrations along a route connecting 13 transportation-related microenvironments within 3-4 h. The study design takes into account ventilation, proximity to local sources, area-wide air quality, and meteorological conditions. Portable instruments were compacted into a backpack to facilitate measurement under crowded transportation conditions and to quantify personal exposure by sampling at nose level. The route included stops next to three roadside monitors to enable comparison of fixed site and exposure concentrations. PM2.5 exposure concentrations were correlated with the roadside monitors, despite differences in averaging time, detection method, and sampling location. Although highly correlated in temporal trend, PM2.5 concentrations varied significantly among microenvironments, with mean concentration ratios versus roadside monitor ranging from 0.5 for MTR train to 1.3 for bus terminal. Measured inter-run variability provides insight regarding the sample size needed to discriminate between microenvironments with increased statistical significance. The study results illustrate the utility of sequential measurement of microenvironments and policy-relevant insights for exposure mitigation and management.
Wang, Chenyu; Liu, Wenwen; Tan, Manqing; Sun, Hongbo; Yu, Yude
2017-07-01
Cellular heterogeneity represents a fundamental principle of cell biology for which a readily available single-cell research tool is urgently required. Here, we present a novel method combining cell-sized well arrays with sequential inkjet printing. Briefly, K562 cells in phosphate-buffered saline were captured at high efficiency (74.5%) in a cell-sized well as a "primary droplet" and sealed using fluorinated oil. Then, piezoelectric inkjet printing technology was adapted to precisely inject the cell lysis buffer and the fluorogenic substrate, fluorescein-di-β-D-galactopyranoside, as a "secondary droplet" to penetrate the sealing oil and fuse with the "primary droplet." We thereby successfully measured the intracellular β-galactosidase activity of K562 cells at the single-cell level. Our method provides, for the first time, the ability to simultaneously accommodate a high occupancy rate of single cells and sequential addition of reagents while retaining an open structure. We believe that the feasibility and flexibility of our method will enhance its use as a universal single-cell research tool as well as accelerate the adoption of inkjet printing in the study of cellular heterogeneity.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been a significant and challenging task that strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
CACTI: free, open-source software for the sequential coding of behavioral interactions.
Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.
Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S
2014-06-01
Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirley, C.; Pohlmann, K.; Andricevic, R.
1996-09-01
Geological and geophysical data are used with the sequential indicator simulation algorithm of Gomez-Hernandez and Srivastava to produce multiple, equiprobable, three-dimensional maps of informal hydrostratigraphic units at the Frenchman Flat Corrective Action Unit, Nevada Test Site. The upper 50 percent of the Tertiary volcanic lithostratigraphic column comprises the study volume. Semivariograms are modeled from indicator-transformed geophysical tool signals. Each equiprobable study volume is subdivided into discrete classes using the ISIM3D implementation of the sequential indicator simulation algorithm. Hydraulic conductivity is assigned within each class using the sequential Gaussian simulation method of Deutsch and Journel. The resulting maps show the contiguity of high and low hydraulic conductivity regions.
Chisvert, A; Salvador, A; Pascual-Martí, M C; March, J G
2001-04-01
A spectrophotometric method for the determination of the widely used UV filter oxybenzone is proposed. The method is based on the complexation reaction between oxybenzone and Ni(II) in ammoniacal medium. The stoichiometry of the reaction, established by the Job method, was 1:1. Reaction conditions were studied and the experimental parameters were optimized for both flow injection (FI) and sequential injection (SI) determinations, for comparative purposes. Sunscreen formulations containing oxybenzone were analyzed by the proposed methods and the results compared with those obtained by HPLC. The data show that both FI and SI procedures provide accurate and precise results. The ruggedness, sensitivity and LOD are adequate for the analysis requirements. The sample throughput of FI is three-fold higher than that of SI analysis. SI consumes less reagent than FI.
Fang, Yili; Yin, Weizhao; Jiang, Yanbin; Ge, Hengjun; Li, Ping; Wu, Jinhua
2018-05-01
In this study, a sequential Fe0/H2O2 reaction followed by a biological process was employed as a low-cost depth treatment method to remove recalcitrant compounds from coal-chemical engineering wastewater after regular biological treatment. First, chemical oxygen demand (COD) and color removal efficiencies of 66 and 63% were achieved in the Fe0/H2O2 reaction at an initial pH of 6.8, 25 mmol L-1 of H2O2, and 2 g L-1 of Fe0. According to gas chromatography-mass spectrometry (GC-MS) and gas chromatography-flame ionization detection (GC-FID) analyses, the recalcitrant compounds were effectively decomposed into short-chain organic acids such as acetic, propionic, and butyric acids. Although these acids were resistant to the Fe0/H2O2 reaction, they were effectively eliminated in the sequential air lift reactor (ALR) at a hydraulic retention time (HRT) of 2 h, resulting in a further decrease of COD from 120 to 51 mg L-1 and of color from 70 to 38 times. The overall sequential process achieved total COD and color removal efficiencies of 85 and 79% at the original pH of 6.8, with a ferric ion concentration below 0.8 mg L-1 after the Fe0/H2O2 reaction. Because pH adjustment and disposal of iron-containing sludge could be avoided, a low operational cost of 0.35 $ m-3 was achieved. These results indicate that the sequential process is a promising and cost-effective method for the depth treatment of coal-chemical engineering wastewaters to satisfy discharge requirements.
Sequentially reweighted TV minimization for CT metal artifact reduction.
Zhang, Xiaomeng; Xing, Lei
2013-07-01
Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
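To make the reweighting idea concrete, the sketch below runs a few outer passes of smoothed, sequentially reweighted TV descent on a 2-D image: after each pass the weights are recomputed from the current image gradients, so that strong gradients are penalized differently on the next pass. The smoothed-gradient surrogate, the periodic boundary handling, the fixed step size, and the omission of the projection onto the data-fidelity constraint are simplifications relative to the algorithm described above.

```python
import numpy as np

def reweighted_tv_smooth(img, n_outer=3, n_inner=50, step=0.1, eps=1e-3):
    """Sequentially reweighted (smoothed, isotropic) TV minimization sketch.
    Each outer pass recomputes weights w = 1 / (|grad u| + eps) from the current
    solution, then takes a few gradient-descent steps on the weighted TV energy.
    Data-fidelity constraints (projection onto the measurement set) are omitted."""
    u = img.astype(float).copy()
    for _ in range(n_outer):
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        w = 1.0 / (np.sqrt(gx ** 2 + gy ** 2) + eps)   # reweighting from current gradients
        for _ in range(n_inner):
            gx = np.diff(u, axis=1, append=u[:, -1:])
            gy = np.diff(u, axis=0, append=u[-1:, :])
            mag = np.sqrt(gx ** 2 + gy ** 2) + eps
            px, py = w * gx / mag, w * gy / mag
            # Descend the weighted TV energy: its gradient is -div(w * grad u / |grad u|).
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u += step * div
    return u
```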
Schneider, Francine; de Vries, Hein; van Osch, Liesbeth ADM; van Nierop, Peter WM; Kremers, Stef PJ
2012-01-01
Background Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). Objectives The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Methods Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Results Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P < .001; CI = 1.02-1.05; simultaneous condition: OR = 1.04; P < .001; CI = 1.02-1.05) and an unhealthy lifestyle (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P < .001; CI = 0.42-0.58). In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout. When respondents failed to adhere to at least 2 of the guidelines, those receiving the simultaneous intervention were more inclined to drop out than were those receiving the sequential intervention. Conclusion Possible reasons for the higher dropout rate in our simultaneous intervention may be the amount of time required and information overload. Strategies to optimize program completion as well as continued use of computer-tailored interventions should be studied. Trial Registration Dutch Trial Register NTR2168 PMID:22403770
2013-05-01
and diazepam with and without pretreatment with pyridostigmine bromide. The 24 hr median lethal dose (MLD) of VM was determined using a sequential stage approach. The efficacy of medical...with and without pyridostigmine bromide (PB) pretreatment against lethal intoxication with VM, VR or VX. Methods Animals: Adult male Hartley
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, Scott D.; Liezers, Martin; Antolick, Kathryn C.
2013-06-13
In this study, we investigated several porous chromatographic materials as synthetic substrates for preparing surrogate nuclear explosion debris particles. The resulting synthetic debris materials are of interest for use in developing analytical methods. Eighteen metals, including some of forensic interest, were loaded onto materials by immersing them in metal solutions (556 mg/L of each metal) to fill the pores, applying gentle heat (110°C) to drive off water, and then treating them at high temperatures (up to 800°C) in air to form less soluble metal species. High-boiling-point metals were uniformly loaded on spherical controlled-pore glass to emulate early fallout, whereas low-boiling-point metals were loaded on core-shell silica to represent coated particles formed later in the nuclear fallout-formation process. Analytical studies were applied to characterize solubility, material balance, and formation of recalcitrant species. Dissolution experiments indicated loading was 1.5 to 3 times higher than expected from the pore volume alone, a result attributed to surface coating. Analysis of load solutions before and after filling the material pores revealed that most metals were passively loaded; that is, solutions filled the pores without active metal discrimination. However, niobium and tin concentrations were lower in solutions after pore filling, and were found in elevated concentrations in the final products, indicating some metals were selectively loaded. High-temperature treatments caused reduced solubility of several metal species, and loss of some metals (rhenium and tellurium) because volatile species were formed. Sample preparation reproducibility was high (the inter-batch relative standard deviation was 7.8%, and the intra-batch relative standard deviation was 0.84%) indicating that this material is suitable for use as a working standard for analytical methods development. We anticipate future standardized radionuclide-loaded materials will find use in radioanalytical methods development and/or serve as a starting material for the synthesis of more complex forms of nuclear explosion debris (e.g., Trinitite).
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin
Applications of data assimilation techniques have been widely used to improve upon the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters" provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
Sequential analysis in neonatal research-systematic review.
Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne
2018-05-01
As more new drugs are discovered, traditional designs reach their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review, on the US National Library of Medicine and Excerpta Medica databases, of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and the median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in Neonatology. They might potentially be able to reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs reach their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns were infrequently used and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).
Scaccianoce, Giuseppe; Hassan, Cesare; Panarese, Alba; Piglionica, Donato; Morini, Sergio; Zullo, Angelo
2006-01-01
BACKGROUND Helicobacter pylori eradication rates achieved by standard seven-day triple therapies are decreasing in several countries, while a novel 10-day sequential regimen has achieved a very high success rate. A longer 10-day triple therapy, similar to the sequential regimen, was tested to see whether it could achieve a better infection cure rate. METHODS Patients with nonulcer dyspepsia and H pylori infection were randomly assigned to one of the following three therapies: esomeprazole 20 mg, clarithromycin 500 mg and amoxycillin 1 g for seven days or 10 days, or a 10-day sequential regimen including esomeprazole 20 mg plus amoxycillin 1 g for five days and esomeprazole 20 mg, clarithromycin 500 mg and tinidazole 500 mg for the remaining five days. All drugs were given twice daily. H pylori eradication was checked four to six weeks after treatment by using a 13C-urea breath test. RESULTS Overall, 213 patients were enrolled. H pylori eradication was achieved in 75.7% and 77.9%, in 81.7% and 84.1%, and in 94.4% and 97.1% of patients following seven-day or 10-day triple therapy and the 10-day sequential regimen, at intention-to-treat and per protocol analyses, respectively. The eradication rate following the sequential regimen was higher than either seven-day (P=0.002) or 10-day triple therapy (P=0.02), while no significant difference emerged between the latter two regimens (P=0.6). CONCLUSIONS The 10-day sequential regimen was significantly more effective than both triple regimens, while 10-day triple therapy failed to significantly increase the H pylori eradication rate achieved by the standard seven-day regimen. PMID:16482238
Hemodynamic analysis of sequential graft from right coronary system to left coronary system.
Wang, Wenxin; Mao, Boyan; Wang, Haoran; Geng, Xueying; Zhao, Xi; Zhang, Huixia; Xie, Jinsheng; Zhao, Zhou; Lian, Bo; Liu, Youjun
2016-12-28
Sequential and single grafting are two surgical procedures of coronary artery bypass grafting. However, it remains unclear whether a sequential graft can be used between the right and left coronary artery systems. The purpose of this paper is to clarify whether the right coronary artery system can be anastomosed to the left coronary system with a sequential graft. A patient-specific 3D model was first reconstructed based on coronary computed tomography angiography (CCTA) images. Two different grafts, the normal multi-graft (Model 1) and the novel multi-graft (Model 2), were then implemented on this patient-specific model using virtual surgery techniques. In Model 1, the single graft was anastomosed to the right coronary artery (RCA) and the sequential graft was used to anastomose the left anterior descending (LAD) and left circumflex artery (LCX). In Model 2, the single graft was anastomosed to the LAD and the sequential graft was used to anastomose the RCA and LCX. A zero-dimensional/three-dimensional (0D/3D) coupling method was used to realize the multi-scale simulation of both the pre-operative and the two post-operative models. Flow rates in the coronary arteries and grafts were obtained. Hemodynamic parameters, including wall shear stress (WSS) and oscillatory shear index (OSI), were also calculated. The area of low WSS and OSI in Model 1 was much smaller than that in Model 2. Model 1 shows favorable hemodynamic modifications which may enhance the long-term patency of grafts. The anterior segments of a sequential graft have better long-term patency than the posterior segments. Given a rational spatial position of the heart vessels, the last anastomosis of a sequential graft should be connected to the main branch.
Cressman, Erik N K; Shenoi, Mithun M; Edelman, Theresa L; Geeslin, Matthew G; Hennings, Leah J; Zhang, Yan; Iaizzo, Paul A; Bischof, John C
2012-01-01
To investigate simultaneous and sequential injection thermochemical ablation in a porcine model, and compare them to sham and acid-only ablation. This IACUC-approved study involved 11 pigs in an acute setting. Ultrasound was used to guide placement of a thermocouple probe and coaxial device designed for thermochemical ablation. Solutions of 10 M acetic acid and NaOH were used in the study. Four injections per pig were performed in identical order at a total rate of 4 mL/min: saline sham, simultaneous, sequential, and acid only. Volume and sphericity of zones of coagulation were measured. Fixed specimens were examined by H&E stain. Average coagulation volumes were 11.2 mL (simultaneous), 19.0 mL (sequential) and 4.4 mL (acid). The highest temperature, 81.3°C, was obtained with simultaneous injection. Average temperatures were 61.1°C (simultaneous), 47.7°C (sequential) and 39.5°C (acid only). Sphericity coefficients (0.83-0.89) had no statistically significant difference among conditions. Thermochemical ablation produced substantial volumes of coagulated tissues relative to the amounts of reagents injected, considerably greater than acid alone in either technique employed. The largest volumes were obtained with sequential injection, yet this came at a price in one case of cardiac arrest. Simultaneous injection yielded the highest recorded temperatures and may be tolerated as well as or better than acid injection alone. Although this pilot study did not show a clear advantage for either sequential or simultaneous methods, the results indicate that thermochemical ablation is attractive for further investigation with regard to both safety and efficacy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merkel, K.D.; Brown, M.L.; Dewanjee, M.K.
We prospectively compared sequential technetium-gallium imaging with indium-labeled-leukocyte imaging in fifty patients with suspected low-grade musculoskeletal sepsis. Adequate images and follow-up examinations were obtained for forty-two patients. The presence or absence of low-grade sepsis was confirmed by histological and bacteriological examinations of tissue specimens taken at surgery in thirty of the forty-two patients. In these thirty patients, the sensitivity of sequential Tc-Ga imaging was 48 per cent, the specificity was 86 per cent, and the accuracy was 57 per cent, whereas the sensitivity of the indium-labeled-leukocyte technique was 83 per cent, the specificity was 86 per cent, and the accuracy was 83 per cent. When the additional twelve patients for whom surgery was deemed unnecessary were considered, the sensitivity of sequential Tc-Ga imaging was 50 per cent, the specificity was 78 per cent, and the accuracy was 62 per cent, as compared with a sensitivity of 83 per cent, a specificity of 94 per cent, and an accuracy of 88 per cent with the indium-labeled-leukocyte method. In patients with a prosthesis the indium-labeled-leukocyte image was 94 per cent accurate, compared with 75 per cent accuracy for sequential Tc-Ga imaging. Statistical analysis of these data demonstrated that the indium-labeled-leukocyte technique was superior to sequential Tc-Ga imaging in detecting areas of low-grade musculoskeletal sepsis.
Achieving Integration in Mixed Methods Designs—Principles and Practices
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-01-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835
de Oliveira, Fabio Santos; Korn, Mauro
2006-01-15
A sensitive SIA method was developed for sulphate determination in automotive fuel ethanol. The method was based on the reaction of sulphate with barium-dimethylsulphonazo(III), leading to a decrease in the magnitude of the analytical signal monitored at 665 nm. Alcohol fuel samples were first combusted to avoid matrix effects in the sulphate determinations. Binary sampling and stop-flow strategies were used to increase the sensitivity of the method. The optimization of the analytical parameters was performed by the response surface method using Box-Behnken and central composite designs. The proposed sequential flow procedure permits the determination of up to 10.0 mg SO(4)(2-) l(-1) with R.S.D. <2.5% and a limit of detection of 0.27 mg l(-1). The method has been successfully applied to sulphate determination in automotive fuel alcohol and the results agreed with the reference volumetric method. Under the optimized conditions, the SIA system processed 27 samples per hour.
Hattori, Yoshiyuki; Arai, Shohei; Okamoto, Ryou; Hamada, Megumi; Kawano, Kumi; Yonemochi, Etsuo
2014-12-10
In this study, we developed a novel method for siRNA transfer to the liver based on sequential intravenous injection of an anionic polymer and a cationic liposome/cholesterol-modified siRNA complex (cationic lipoplex). When the cationic lipoplex was intravenously injected into mice, the accumulation of siRNA was mainly observed in the lungs. In contrast, when the cationic lipoplex was intravenously injected 1 min after intravenous injection of poly-L-glutamic acid (PGA) or chondroitin sulfate C (CS), siRNA accumulated in the liver. Regarding suppression of gene expression in vivo, apolipoprotein B (ApoB) mRNA in the liver and low-density-lipoprotein (LDL) and very low-density-lipoprotein (VLDL) cholesterol levels in serum were reduced at 48 h after a single sequential injection of PGA or CS plus cationic lipoplex of cholesterol-modified ApoB siRNA. Furthermore, sequential injections of PGA plus cationic lipoplex of cholesterol-modified luciferase siRNA reduced luciferase activity in tumor xenografts bearing liver metastasis of human breast tumor MCF-7-Luc. These findings suggest that sequential injection of an anionic polymer and a cationic siRNA lipoplex might provide a systemic vector for siRNA delivery to the liver. Copyright © 2014 Elsevier B.V. All rights reserved.
Sequential growth for lifetime extension in biomimetic polypyrrole actuator systems
NASA Astrophysics Data System (ADS)
Sarrazin, J. C.; Mascaro, Stephen A.
2015-04-01
Electroactive polymers (EAPs) show promise for use in actuation and manipulation devices due to their low electrical activation requirements, biocompatibility, and mechanical performance. One of the main drawbacks of EAP actuators is a decrease in performance over extended periods of operation caused by over-oxidation of the polymer and general polymer degradation. Synthesizing the EAP material polypyrrole with an embedded metal helix allows for sequential growth of the polymer during operation. The helical metal electrode acts as a scaffold to support the polymer and directs the three-dimensional change in volume of the polymer along the axis of the helix during oxidative and reductive cycling. The metal helix also provides a working metal electrode through the entire length of the polymer actuator to distribute charge for actuation, as well as for sequential growth steps during the lifetime of operation of the polymer. This work demonstrates that the method of sequential growth can be utilized after extended periods of use to partially restore the electrical and mechanical performance of polypyrrole actuators. Since the actuation must be temporarily stopped to allow a sequential growth cycle to be performed and reverse some of the polymer degradation, these actuator systems more closely mimic natural muscle in their analogous maintenance and repair.
Gaudrain, Etienne; Carlyon, Robert P
2013-01-01
Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.
Gross, Megan; Buac, Milijana; Kaushanskaya, Margarita
2014-01-01
Purpose This study examined the effects of conceptual scoring on the performance of simultaneous and sequential bilinguals on standardized receptive and expressive vocabulary measures in English and Spanish. Method Participants included 40 English-speaking monolingual children, 39 simultaneous Spanish-English bilingual children, and 19 sequential bilinguals, ages 5–7. The children completed standardized receptive and expressive vocabulary measures in English and also in Spanish for bilinguals. After the standardized administration, bilinguals were given the opportunity to respond to missed items in their other language to obtain a conceptual score. Results Controlling for group differences in socioeconomic status (SES), both simultaneous and sequential bilinguals scored significantly below monolinguals on single-language measures of English receptive and expressive vocabulary. Conceptual scoring removed the significant difference between monolinguals and simultaneous bilinguals in the receptive modality, but not in the expressive modality; differences remained between monolinguals and sequential bilinguals in both modalities. However, in both bilingual groups conceptual scoring increased the proportion of children with vocabulary scores within the average range. Conclusions Conceptual scoring does not fully ameliorate the bias inherent in single-language standardized vocabulary measures for bilinguals, but the procedures employed here may assist in ruling out vocabulary deficits, particularly in typically-developing simultaneous bilingual children. PMID:24811415
NASA Astrophysics Data System (ADS)
Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan
2014-12-01
Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in clinical diagnosis for various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements under the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in estimation accuracy. This method could be used to monitor tissue parameters in real time in an imaging setup.
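To make the estimator concrete for readers unfamiliar with Wiener estimation, the following minimal Python sketch builds a plain (unweighted) linear Wiener estimation matrix from calibration data; the forward model, channel count, and all variable names are invented for illustration and do not reproduce the authors' sequential weighting scheme.

```python
import numpy as np

def wiener_matrix(X_cal, Y_cal):
    """Wiener estimation matrix W such that x ≈ W @ y.

    X_cal: (n_samples, n_params)   known tissue parameters (calibration set)
    Y_cal: (n_samples, n_channels) corresponding color measurements
    W = Cxy @ pinv(Cyy), with correlation matrices estimated from calibration data.
    """
    Cxy = X_cal.T @ Y_cal
    Cyy = Y_cal.T @ Y_cal
    return Cxy @ np.linalg.pinv(Cyy)

# Toy usage: recover 2 parameters from 6-channel color measurements (hypothetical model A)
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 2))
X_cal = rng.uniform(0.0, 1.0, size=(200, 2))
Y_cal = X_cal @ A.T + 0.01 * rng.normal(size=(200, 6))
W = wiener_matrix(X_cal, Y_cal)
x_est = W @ Y_cal[0]          # estimate for the first calibration sample
print(x_est, X_cal[0])
```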
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.
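As a rough illustration of the sequential unconstrained minimization idea only, the sketch below uses an exterior quadratic penalty with an increasing penalty parameter rather than the report's linear extended interior penalty function; the toy objective and constraint are placeholders, not the pressure-vessel problem.

```python
import numpy as np
from scipy.optimize import minimize

def sumt(f, g_list, x0, r0=1.0, growth=10.0, cycles=6):
    """Sequential unconstrained minimization: repeatedly minimize the penalized
    objective f(x) + r * sum(max(0, g_i(x))^2), increasing r each cycle.
    Constraints are written as g_i(x) <= 0."""
    x, r = np.asarray(x0, dtype=float), r0
    for _ in range(cycles):
        def phi(x, r=r):
            viol = np.array([max(0.0, g(x)) for g in g_list])
            return f(x) + r * np.sum(viol ** 2)
        x = minimize(phi, x, method="Nelder-Mead").x
        r *= growth
    return x

# Toy example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1 (optimum near [0.5, 0.5])
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - (x[0] + x[1])
print(sumt(f, [g], x0=[2.0, 2.0]))
```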
Mining local climate data to assess spatiotemporal dengue fever epidemic patterns in French Guiana
Flamand, Claude; Fabregue, Mickael; Bringay, Sandra; Ardillon, Vanessa; Quénel, Philippe; Desenclos, Jean-Claude; Teisseire, Maguelonne
2014-01-01
Objective To identify local meteorological drivers of dengue fever in French Guiana, we applied an original data mining method to the available epidemiological and climatic data. Through this work, we also assessed the contribution of the data mining method to the understanding of factors associated with the dissemination of infectious diseases and their spatiotemporal spread. Methods We applied contextual sequential pattern extraction techniques to epidemiological and meteorological data to identify the most significant climatic factors for dengue fever, and we investigated the relevance of the extracted patterns for the early warning of dengue outbreaks in French Guiana. Results The maximum temperature, minimum relative humidity, global brilliance, and cumulative rainfall were identified as determinants of dengue outbreaks, and the precise intervals of their values and variations were quantified according to the epidemiologic context. The strongest significant correlations were observed between dengue incidence and meteorological drivers after a 4–6-week lag. Discussion We demonstrated the use of contextual sequential patterns to better understand the determinants of the spatiotemporal spread of dengue fever in French Guiana. Future work should integrate additional variables and explore the notion of neighborhood for extracting sequential patterns. Conclusions Dengue fever remains a major public health issue in French Guiana. The development of new methods to identify such specific characteristics becomes crucial in order to better understand and control spatiotemporal transmission. PMID:24549761
Astolfi, Maria Luisa; Di Filippo, Patrizia; Gentili, Alessandra; Canepari, Silvia
2017-11-01
We describe the optimization and validation of a sequential extractive method for the determination of the polycyclic aromatic hydrocarbons (PAHs) and elements (Al, As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, Se, V and Zn) that are chemically fractionated into bio-accessible and mineralized residual fractions on a single particulate matter filter. The extraction is performed by automatic accelerated solvent extraction (ASE); samples are sequentially treated with dichloromethane/acetone (4:1) for PAHs extraction and acetate buffer (0.01M; pH 4.5) for elements extraction (bio-accessible fraction). The remaining solid sample is then collected and subjected to acid digestion with HNO3:H2O2 (2:1) to determine the mineralized residual element fraction. We also describe a homemade ASE cell that reduces the blank values for most elements; in this cell, the steel frit was replaced by a Teflon pierced disk and a Teflon cylinder was used as the filler. The performance of the proposed method was evaluated in terms of recovery from standard reference material (SRM 1648 and SRM 1649a) and repeatability. The equivalence between the new ASE method and conventional methods was verified for PAHs and for bio-accessible and mineralized residual fractions of elements on PM10 twin filters. Copyright © 2017 Elsevier B.V. All rights reserved.
A sequential quadratic programming algorithm using an incomplete solution of the subproblem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, W.; Prieto, F.J.
1993-05-01
We analyze sequential quadratic programming (SQP) methods for solving nonlinear constrained optimization problems; these methods are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically, we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.
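The abstract concerns the analysis of SQP variants rather than an implementation, but for orientation, a standard SQP solve can be reproduced with SciPy's SLSQP routine; the toy objective and constraint below are assumptions chosen only to show the call pattern.

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock objective with one nonlinear inequality constraint, solved by SLSQP,
# SciPy's sequential quadratic programming implementation.
obj = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
con = {"type": "ineq", "fun": lambda x: 1.5 - x[0] ** 2 - x[1] ** 2}  # x0^2 + x1^2 <= 1.5

res = minimize(obj, x0=np.zeros(2), method="SLSQP", constraints=[con])
print(res.x, res.fun)
```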
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
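A minimal sketch of a single sequential-linear-programming step is given below: linearize the objective and constraints at the current design, impose move limits, and solve the resulting LP with scipy.optimize.linprog. The gradients, move limit, and toy constraint are illustrative assumptions; the eigenvalue-sensitivity formulation of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def slp_step(grad_f, g_vals, grad_g, x, move=0.1):
    """One SLP step: minimize grad_f . dx subject to the linearized constraints
    g_i + grad_g_i . dx <= 0 and the move limits |dx_j| <= move."""
    res = linprog(c=grad_f,
                  A_ub=np.atleast_2d(grad_g),
                  b_ub=-np.asarray(g_vals, dtype=float),
                  bounds=[(-move, move)] * len(x),
                  method="highs")
    return x + res.x

# Toy usage: minimize x0 + x1 subject to 1 - x0*x1 <= 0, linearized at x = (1, 2)
x = np.array([1.0, 2.0])
x_new = slp_step(grad_f=np.array([1.0, 1.0]),
                 g_vals=[1.0 - x[0] * x[1]],
                 grad_g=[[-x[1], -x[0]]],
                 x=x)
print(x_new)
```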
NASA Astrophysics Data System (ADS)
Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik
2018-05-01
Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that a substantial amount of artificial measurement noise is present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate the method with numerical examples and, in particular, show promising results for a challenging Wiener-Hammerstein benchmark problem.
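The following toy Python sketch captures only the central ingredient under strong simplifying assumptions: a bootstrap particle filter whose artificial measurement-noise level is decreased over a sequence of likelihood evaluations. The linear-Gaussian toy model, the fixed noise schedule, and all parameter values are invented; the authors' adaptive SMC sampler is not reproduced.

```python
import numpy as np

def pf_loglik(y, theta, sigma_meas, n_part=500, rng=None):
    """Bootstrap particle filter estimate of log p(y | theta) for the toy model
    x_t = theta * x_{t-1} + N(0, 0.5^2),  y_t = x_t + N(0, sigma_meas^2)."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(size=n_part)
    loglik = 0.0
    for yt in y:
        x = theta * x + rng.normal(scale=0.5, size=n_part)                  # propagate
        w = np.exp(-0.5 * ((yt - x) / sigma_meas) ** 2) / (sigma_meas * np.sqrt(2 * np.pi))
        loglik += np.log(w.mean() + 1e-300)                                 # likelihood increment
        w /= w.sum()
        x = x[rng.choice(n_part, size=n_part, p=w)]                         # resample
    return loglik

# Sequentially shrink the artificial measurement noise towards the original,
# nearly noise-free problem, re-estimating the likelihood at each level.
y = np.cumsum(np.random.default_rng(1).normal(scale=0.5, size=50))          # toy observations
for sigma in [1.0, 0.5, 0.25, 0.1]:
    print(sigma, round(pf_loglik(y, theta=1.0, sigma_meas=sigma), 2))
```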
del Río, Vanessa; Larrechi, M Soledad; Callao, M Pilar
2010-06-15
A new concept of flow titration is proposed and demonstrated for the determination of total acidity in plant oils and biodiesel. We use sequential injection analysis (SIA) with a diode array spectrophotometric detector linked to chemometric tools such as multivariate curve resolution-alternating least squares (MCR-ALS). This system is based on the evolution of the basic species of an acid-base indicator, alizarine, when it comes into contact with a sample that contains free fatty acids. The gradual pH change in the reactor coil due to diffusion and reaction phenomena allows the sequential appearance of both species of the indicator in the detector coil, recording a data matrix for each sample. The SIA-MCR-ALS method helps to reduce the amounts of sample, reagents and time consumed. Each determination consumes 0.413ml of sample, 0.250ml of indicator and 3ml of carrier (ethanol) and generates 3.333ml of waste. The analysis frequency is high (12 samples h(-1) including all steps, i.e., cleaning, preparing and analysing). The reagents used are in common laboratory use, and it is not necessary to use reagents of precisely known concentration. The method was applied to determine acidity in plant oil and biodiesel samples. Results obtained by the proposed method compare well with those obtained by the official European Community method, which is time consuming and uses large amounts of organic solvents.
NASA DOE POD NDE Capabilities Data Book
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2015-01-01
This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC, Nondestructive Evaluation (NDE) Capabilities Data Book, 3rd ed., NTIAC DB-97-02. DOEPOD is designed as a decision support system for validating that an inspection system, personnel, and protocol demonstrate 0.90 POD with 95% confidence at the critical flaw size, a90/95. The test methodology used in DOEPOD is based on the field of statistical sequential analysis founded by Abraham Wald. Sequential analysis is a method of statistical inference whose characteristic feature is that the number of observations required by the procedure is not determined in advance of the experiment. The decision to terminate the experiment depends, at each stage, on the results of the observations previously made. A merit of the sequential method, as applied to testing statistical hypotheses, is that test procedures can be constructed which require, on average, a substantially smaller number of observations than equally reliable test procedures based on a predetermined number of observations.
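For readers unfamiliar with Wald's sequential analysis, a minimal sketch of the sequential probability ratio test for a binomial detection rate is given below; the hypothesized POD values, error rates, and toy hit/miss data are illustrative assumptions, not the DOEPOD procedure itself.

```python
import math

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli detection rate.
    H0: p = p0 (POD below requirement), H1: p = p1 (POD meets requirement).
    Returns the decision and the number of observations actually used."""
    upper = math.log((1.0 - beta) / alpha)    # cross -> accept H1
    lower = math.log(beta / (1.0 - alpha))    # cross -> accept H0
    llr = 0.0
    for n, hit in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if hit else math.log((1.0 - p1) / (1.0 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue sampling", len(observations)

# Toy data: 15 consecutive detections; the test stops as soon as a boundary is crossed
print(sprt([1] * 15, p0=0.70, p1=0.90))
```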
Automated single-slide staining device
NASA Technical Reports Server (NTRS)
Wilkins, J. R.; Mills, S. M. (Inventor)
1977-01-01
A simple apparatus and method is disclosed for making individual single Gram stains on bacteria inoculated slides to assist in classifying bacteria in the laboratory as Gram-positive or Gram-negative. The apparatus involves positioning a single inoculated slide in a stationary position and thereafter automatically and sequentially flooding the slide with increments of a primary stain, a mordant, a decolorizer, a counterstain and a wash solution in a sequential manner without the individual lab technician touching the slide and with minimum danger of contamination thereof from other slides.
Some sequential, distribution-free pattern classification procedures with applications
NASA Technical Reports Server (NTRS)
Poage, J. L.
1971-01-01
Some sequential, distribution-free pattern classification techniques are presented. The decision problem to which the proposed classification methods are applied is that of discriminating between two kinds of electroencephalogram responses recorded from a human subject: spontaneous EEG and EEG driven by a stroboscopic light stimulus at the alpha frequency. The classification procedures proposed make use of the theory of order statistics. Estimates of the probabilities of misclassification are given. The procedures were tested on Gaussian samples and the EEG responses.
Understanding Human Motion Skill with Peak Timing Synergy
NASA Astrophysics Data System (ADS)
Ueno, Ken; Furukawa, Koichi
The careful observation of motion phenomena is important in understanding skillful human motion. However, this is a difficult task due to the complexities in timing when dealing with the skillful control of anatomical structures. To investigate the dexterity of human motion, we decided to concentrate on timing with respect to motion, and we have proposed a method to extract the peak timing synergy from multivariate motion data. The peak timing synergy is defined as a frequent ordered graph with time stamps, whose nodes consist of turning points in motion waveforms. The proposed algorithm, PRESTO, automatically extracts the peak timing synergy. PRESTO comprises the following 3 processes: (1) detecting peak sequences with polygonal approximation; (2) generating peak-event sequences; and (3) finding frequent peak-event sequences using a sequential pattern mining method, generalized sequential patterns (GSP). Here, we measured right arm motion during the task of cello bowing and prepared a data set of right shoulder and arm motion. We successfully extracted the peak timing synergy from the cello bowing data set using the PRESTO algorithm; the synergy captured both skills common among cellists and individual skill differences. To evaluate the sequential pattern mining algorithm GSP in PRESTO, we compared the peak timing synergy obtained with the GSP algorithm to that obtained with the filtering-by-reciprocal-voting (FRV) algorithm, a non-time-series method. We found that the support was 95-100% with GSP versus 83-96% with FRV, and that GSP reproduced human motion better than FRV. Therefore, we show that the sequential pattern mining approach is more effective for extracting the peak timing synergy than a non-time-series analysis approach.
Zhang, Lihua; Chen, Xianzhong; Chen, Zhen; Wang, Zezheng; Jiang, Shan; Li, Li; Pötter, Markus; Shen, Wei; Fan, You
2016-11-01
The diploid yeast Candida tropicalis, which can utilize n-alkane as a carbon and energy source, is an attractive strain for both physiological studies and practical applications. However, it presents some characteristics, such as rare codon usage, difficulty in sequential gene disruption, and inefficiency in foreign gene expression, that hamper strain improvement through genetic engineering. In this work, we present a simple and effective method for sequential gene disruption in C. tropicalis based on the use of an auxotrophic mutant host defective in orotidine monophosphate decarboxylase (URA3). The disruption cassette, which consists of a functional yeast URA3 gene flanked by a 0.3 kb gene disruption auxiliary sequence (gda) direct repeat derived from downstream or upstream of the URA3 gene and of homologous arms of the target gene, was constructed and introduced into the yeast genome by integrative transformation. Stable integrants were isolated by selection for Ura⁺ and identified by PCR and sequencing. The important feature of this construct, which makes it very attractive, is that recombination between the flanking direct gda repeats occurs at a high frequency (10⁻⁸) during mitosis. After excision of the URA3 marker, only one copy of the gda sequence remains at the recombinant locus. Thus, the resulting ura3 strain can be used again to disrupt a second allelic gene in a similar manner. In addition to this effective sequential gene disruption method, a codon-optimized green fluorescent protein-encoding gene (GFP) was functionally expressed in C. tropicalis. Thus, we propose a simple and reliable method to improve C. tropicalis by genetic manipulation.
Towards efficient multi-scale methods for monitoring sugarcane aphid infestations in sorghum
USDA-ARS?s Scientific Manuscript database
We discuss approaches and issues involved with developing optimal monitoring methods for sugarcane aphid infestations (SCA) in grain sorghum. We discuss development of sequential sampling methods that allow for estimation of the number of aphids per sample unit, and statistical decision making rela...
Mixed-Methods Research Methodologies
ERIC Educational Resources Information Center
Terrell, Steven R.
2012-01-01
Mixed-Method studies have emerged from the paradigm wars between qualitative and quantitative research approaches to become a widely used mode of inquiry. Depending on choices made across four dimensions, mixed-methods can provide an investigator with many design choices which involve a range of sequential and concurrent strategies. Defining…
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
Race and Older Mothers’ Differentiation: A Sequential Quantitative and Qualitative Analysis
Sechrist, Jori; Suitor, J. Jill; Riffin, Catherine; Taylor-Watson, Kadari; Pillemer, Karl
2011-01-01
The goal of this paper is to demonstrate a process by which qualitative and quantitative approaches are combined to reveal patterns in the data that are unlikely to be detected and confirmed by either method alone. Specifically, we take a sequential approach to combining qualitative and quantitative data to explore race differences in how mothers differentiate among their adult children. We began with a standard multivariate analysis examining race differences in mothers’ differentiation among their adult children regarding emotional closeness and confiding. Finding no race differences in this analysis, we conducted an in-depth comparison of the Black and White mothers’ narratives to determine whether there were underlying patterns that we had been unable to detect in our first analysis. Using this method, we found that Black mothers were substantially more likely than White mothers to emphasize interpersonal relationships within the family when describing differences among their children. In our final step, we developed a measure of familism based on the qualitative data and conducted a multivariate analysis to confirm the patterns revealed by the in-depth comparison of the mother’s narratives. We conclude that using such a sequential mixed methods approach to data analysis has the potential to shed new light on complex family relations. PMID:21967639
Development of a syringe pump assisted dynamic headspace sampling technique for needle trap device.
Eom, In-Yong; Niri, Vadoud H; Pawliszyn, Janusz
2008-07-04
This paper describes a new approach that combines needle trap devices (NTDs) with a dynamic headspace sampling technique (purge and trap) using a bidirectional syringe pump. The needle trap device is a 22-G stainless steel needle 3.5-in. long packed with divinylbenzene sorbent particles. The same sized needle, without packing, was used for purging purposes. We chose an aqueous mixture of benzene, toluene, ethylbenzene, and p-xylene (BTEX) and developed a sequential purge and trap (SPNT) method, in which sampling (trapping) and purging cycles were performed sequentially by the use of syringe pump with different distribution channels. In this technique, a certain volume (1 mL) of headspace was sequentially sampled using the needle trap; afterwards, the same volume of air was purged into the solution at a high flow rate. The proposed technique showed an effective extraction compared to the continuous purge and trap technique, with a minimal dilution effect. Method evaluation was also performed by obtaining the calibration graphs for aqueous BTEX solutions in the concentration range of 1-250 ng/mL. The developed technique was compared to the headspace solid-phase microextraction method for the analysis of aqueous BTEX samples. Detection limits as low as 1 ng/mL were obtained for BTEX by NTD-SPNT.
Manganese speciation of laboratory-generated welding fumes
Andrews, Ronnee N.; Keane, Michael; Hanley, Kevin W.; Feng, H. Amy; Ashley, Kevin
2015-01-01
The objective of this laboratory study was to identify and measure manganese (Mn) fractions in chamber-generated welding fumes (WF) and to evaluate and compare the results from a sequential extraction procedure for Mn fractions with that of an acid digestion procedure for measurement of total, elemental Mn. To prepare Mn-containing particulate matter from representative welding processes, a welding system was operated in short circuit gas metal arc welding (GMAW) mode using both stainless steel (SS) and mild carbon steel (MCS) and also with flux cored arc welding (FCAW) and shielded metal arc welding (SMAW) using MCS. Generated WF samples were collected onto polycarbonate filters before homogenization, weighing and storage in scintillation vials. The extraction procedure consisted of four sequential steps to measure various Mn fractions based upon selective solubility: (1) soluble Mn dissolved in 0.01 M ammonium acetate; (2) Mn (0,II) dissolved in 25 % (v/v) acetic acid; (3) Mn (III,IV) dissolved in 0.5% (w/v) hydroxylamine hydrochloride in 25% (v/v) acetic acid; and (4) insoluble Mn extracted with concentrated hydrochloric and nitric acids. After sample treatment, the four fractions were analyzed for Mn by inductively coupled plasma-atomic emission spectroscopy (ICP-AES). WF from GMAW and FCAW showed similar distributions of Mn species, with the largest concentrations of Mn detected in the Mn (0,II) and insoluble Mn fractions. On the other hand, the majority of the Mn content of SMAW fume was detected as Mn (III,IV). Although the concentration of Mn measured from summation of the four sequential steps was statistically significantly different from that measured from the hot block dissolution method for total Mn, the difference is small enough to be of no practical importance for industrial hygiene air samples, and either method may be used for Mn measurement. The sequential extraction method provides valuable information about the oxidation state of Mn in samples and allows for comparison to results from previous work and from total Mn dissolution methods. PMID:26345630
On mining complex sequential data by means of FCA and pattern structures
NASA Astrophysics Data System (ADS)
Buzmakov, Aleksey; Egho, Elias; Jay, Nicolas; Kuznetsov, Sergei O.; Napoli, Amedeo; Raïssi, Chedy
2016-02-01
Nowadays data-sets are available in very complex and heterogeneous ways. Mining of such data collections is essential to support many real-world applications ranging from healthcare to marketing. In this work, we focus on the analysis of "complex" sequential data by means of interesting sequential patterns. We approach the problem using the elegant mathematical framework of formal concept analysis and its extension based on "pattern structures". Pattern structures are used for mining complex data (such as sequences or graphs) and are based on a subsumption operation, which in our case is defined with respect to the partial order on sequences. We show how pattern structures along with projections (i.e. a data reduction of sequential structures) are able to enumerate more meaningful patterns and increase the computing efficiency of the approach. Finally, we show the applicability of the presented method for discovering and analysing interesting patient patterns from a French healthcare data-set on cancer. The quantitative and qualitative results (with annotations and analysis from a physician) are reported in this use-case which is the main motivation for this work.
Sequential ASE extraction of alkylphenols from sediments: Occurrence and environmental implications.
Gong, Jian; Xu, Lei; Yang, Yu; Chen, Di-Yun; Ran, Yong
2011-08-30
The occurrence of alkylphenols (APs) including nonylphenol (NP) and octylphenol (OP) in the riverine sediments from the Pearl River Delta (PRD), South China was investigated and compared by Soxhlet extraction (S-APs) with dichloromethane and by sequential accelerated solvent extraction (ASE) (A-APs) with 1:6 toluene/methanol, respectively. Concentrations of OP and NP range from <1 to 463ng/g dw and 31-21,885ng/g dw, respectively, demonstrating that the contamination level of APs in the PRD is one of the highest in the world. Moreover, the A-AP contents are highly significantly correlated with the S-AP contents and are, on average, 1.5 times higher. For the two sequential ASE extractions, APs in the first extract account for 82.2-99.2% of the total content of the two extractions. The correlation analysis shows that S-APs and A-APs are both significantly associated with the contents of total organic carbon (TOC), suggesting that the variable extraction efficiency of these two methods is related to the presence of condensed organic matter in the sediments. Copyright © 2011 Elsevier B.V. All rights reserved.
Treatment of mites folliculitis with an ornidazole-based sequential therapy
Luo, Yang; Sun, Yu-Jiao; Zhang, Li; Luan, Xiu-Li
2016-01-01
Abstract Objective: Treatment of Demodex infestations is often inadequate and associated with low effective rate. We sought to evaluate the efficacy of an ornidazole-based sequential therapy for mites folliculitis treatment. Methods: Two-hundred patients with mites folliculitis were sequentially treated with either an ornidazole- or metronidazole-based regimen. Sebum cutaneum was extruded from the sebaceous glands of each patient's nose and the presence of Demodex mites were examined by light microscopy. The clinical manifestations of relapse of mites folliculitis were recorded and the subjects were followed up at 2, 4, 8, and 12 weeks post-treatment. Results: Patients treated with the ornidazole-based regimen showed an overall effective rate of 94.0%. Additionally, at the 2, 4, 8, and 12-week follow-up, these patients had significantly lower rates of Demodex mite relapse and new lesion occurrence compared with patients treated with the metronidazole-based regimen (P < 0.05). Conclusion: Sequential therapy using ornidazole, betamethasone, and recombinant bovine basic fibroblast growth factor (rbFGF) gel is highly effective for treating mites folliculitis. PMID:27399141
Forecasting daily streamflow using online sequential extreme learning machines
NASA Astrophysics Data System (ADS)
Lima, Aranildo R.; Cannon, Alex J.; Hsieh, William W.
2016-06-01
While nonlinear machine learning methods have been widely used in environmental forecasting, in situations where new data arrive continually the need for frequent model updates can become cumbersome and computationally costly. To alleviate this problem, an online sequential learning algorithm for single-hidden-layer feedforward neural networks - the online sequential extreme learning machine (OSELM) - can be updated inexpensively and automatically as new data arrive (and the new data can then be discarded). OSELM was applied to forecast daily streamflow at two small watersheds in British Columbia, Canada, at lead times of 1-3 days. Predictors used were weather forecast data generated by the NOAA Global Ensemble Forecasting System (GEFS), and local hydro-meteorological observations. OSELM forecasts were tested with daily, monthly or yearly model updates. More frequent updating gave smaller forecast errors, including errors for data above the 90th percentile. Larger datasets used in the initial training of OSELM helped to find better parameters (number of hidden nodes) for the model, yielding better predictions. With online sequential multiple linear regression (OSMLR) as the benchmark, we concluded that OSELM is an attractive approach as it easily outperformed OSMLR in forecast accuracy.
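A compact numpy sketch of the OS-ELM recursion (a fixed random hidden layer plus a recursive least-squares update of the output weights) is given below; the network size, activation, regularization constant, and the synthetic streaming task are assumptions for illustration and do not reflect the study's configuration.

```python
import numpy as np

class OSELM:
    """Minimal online sequential extreme learning machine (single hidden layer)."""

    def __init__(self, n_in, n_hidden, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)           # fixed random biases

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)          # hidden-layer activations

    def fit_initial(self, X0, y0):
        H = self._h(X0)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y0

    def update(self, X, y):
        """Recursive least-squares update when a new chunk of data arrives."""
        H = self._h(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta

# Toy streaming regression: initial batch, then periodic updates with new data chunks
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(300, 3))
y = np.sin(X.sum(axis=1))
model = OSELM(n_in=3, n_hidden=40)
model.fit_initial(X[:100], y[:100])
for i in range(100, 300, 20):
    model.update(X[i:i + 20], y[i:i + 20])
print(np.mean((model.predict(X) - y) ** 2))
```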
CACTI: Free, Open-Source Software for the Sequential Coding of Behavioral Interactions
Glynn, Lisa H.; Hallgren, Kevin A.; Houck, Jon M.; Moyers, Theresa B.
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery. PMID:22815713
NASA Astrophysics Data System (ADS)
Wang, Dong; Tse, Peter W.
2015-05-01
Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Secondly, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
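The paper's estimator is a state-space model whose parameters are learned by sequential Monte Carlo; as a much cruder stand-in that only illustrates the idea of extrapolating a smoothed degradation index to an alert threshold, here is a hedged sketch in which the window length, threshold, and synthetic wear index are arbitrary assumptions.

```python
import numpy as np

def remaining_useful_life(index, threshold, window=5, dt=1.0):
    """Crude RUL estimate: smooth the degradation index with a moving average,
    fit a local linear trend, and extrapolate to the alert threshold."""
    smoothed = np.convolve(index, np.ones(window) / window, mode="valid")
    t = np.arange(len(smoothed)) * dt
    slope, intercept = np.polyfit(t[-window:], smoothed[-window:], 1)
    if slope <= 0:
        return np.inf                                   # no degradation trend detected
    t_cross = (threshold - intercept) / slope           # time at which index hits threshold
    return max(0.0, t_cross - t[-1])

# Synthetic, steadily increasing wear index with a little noise
wear = np.linspace(0.1, 0.6, 30) + 0.02 * np.random.default_rng(2).normal(size=30)
print(remaining_useful_life(wear, threshold=1.0))
```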
A method was developed to simulate the human gastrointestinal environment and to estimate bioavailability of arsenic in contaminated soil and solid media. In this in vitro gastrointestinal (IVG) method, arsenic is sequentially extracted from contaminated soil with ...
Cornaglia, Antonia Icaro; Faga, Angela; Scevola, Silvia
2014-01-01
Abstract Objective: An experimental study was conducted to assess the effectiveness and safety of an innovative quadripolar variable electrode configuration radiofrequency device with objective measurements in an ex vivo and in vivo human experimental model. Background data: Nonablative radiofrequency applications are well-established anti-ageing procedures for cosmetic skin tightening. Methods: The study was performed in two steps: ex vivo and in vivo assessments. In the ex vivo assessments the radiofrequency applications were performed on human full-thickness skin and subcutaneous tissue specimens harvested during surgery for body contouring. In the in vivo assessments the applications were performed on two volunteer patients scheduled for body contouring surgery at the end of the study. The assessment methods were: clinical examination and medical photography, temperature measurement with thermal imaging scan, and light microscopy histological examination. Results: The ex vivo assessments allowed for identification of the effective safety range for human application. The in vivo assessments allowed for demonstration of the biological effects of sequential radiofrequency applications. After a course of radiofrequency applications, the collagen fibers underwent an immediate heat-induced rearrangement and were partially denaturated and progressively metabolized by the macrophages. An overall thickening and spatial rearrangement was appreciated both in the collagen and elastic fibers, the latter displaying a juvenile reticular pattern. A late onset in the macrophage activation after sequential radiofrequency applications was appreciated. Conclusions: Our data confirm the effectiveness of sequential radiofrequency applications in obtaining attenuation of the skin wrinkles by an overall skin tightening. PMID:25244081
The bandwidth of consolidation into visual short-term memory (VSTM) depends on the visual feature
Miller, James R.; Becker, Mark W.; Liu, Taosheng
2014-01-01
We investigated the nature of the bandwidth limit in the consolidation of visual information into visual short-term memory. In the first two experiments, we examined whether previous results showing differential consolidation bandwidth for color and orientation resulted from methodological differences by testing the consolidation of color information with methods used in prior orientation experiments. We briefly presented two color patches with masks, either sequentially or simultaneously, followed by a location cue indicating the target. Participants identified the target color via button-press (Experiment 1) or by clicking a location on a color wheel (Experiment 2). Although these methods have previously demonstrated that two orientations are consolidated in a strictly serial fashion, here we found equivalent performance in the sequential and simultaneous conditions, suggesting that two colors can be consolidated in parallel. To investigate whether this difference resulted from different consolidation mechanisms or a common mechanism with different features consuming different amounts of bandwidth, Experiment 3 presented a color patch and an oriented grating either sequentially or simultaneously. We found a lower performance in the simultaneous than the sequential condition, with orientation showing a larger impairment than color. These results suggest that consolidation of both features share common mechanisms. However, it seems that color requires less information to be encoded than orientation. As a result two colors can be consolidated in parallel without exceeding the bandwidth limit, whereas two orientations or an orientation and a color exceed the bandwidth and appear to be consolidated serially. PMID:25317065
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.
1993-01-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.
Wenz, Holger; Maros, Máté E.; Meyer, Mathias; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O.; Flohr, Thomas; Leidecker, Christianne; Groden, Christoph; Scharf, Johann; Henzler, Thomas
2015-01-01
Objectives To prospectively intra-individually compare the image quality of 3rd-generation Dual-Source-CT (DSCT) spiral cranial CT (cCT) to sequential 4-slice Multi-Slice-CT (MSCT) while maintaining identical intra-individual radiation dose levels. Methods 35 patients, who had a non-contrast-enhanced sequential cCT examination on a 4-slice MDCT within the past 12 months, underwent a spiral cCT scan on a 3rd-generation DSCT. A CTDIvol identical to that of the initial 4-slice MDCT examination was applied. Data were reconstructed using filtered back projection (FBP) and a 3rd-generation iterative reconstruction (IR) algorithm at 5 different IR strength levels. Two neuroradiologists independently evaluated subjective image quality using a 4-point Likert scale, and objective image quality was assessed in white matter and the caudate nucleus, with signal-to-noise ratios (SNR) subsequently calculated. Results Subjective image quality of all spiral cCT datasets was rated significantly higher compared to the 4-slice MDCT sequential acquisitions (p<0.05). Mean SNR was significantly higher in all spiral compared to sequential cCT datasets, with a mean SNR improvement of 61.65% (p < 0.0024, the Bonferroni-corrected threshold for α = 0.05). Subjective image quality improved with increasing IR levels. Conclusion Combination of 3rd-generation DSCT spiral cCT with an advanced model IR technique significantly improves subjective and objective image quality compared to a standard sequential cCT acquisition acquired at identical dose levels. PMID:26288186
Sequential Voluntary Cough and Aspiration or Aspiration Risk in Parkinson’s Disease
Hegland, Karen Wheeler; Okun, Michael S.; Troche, Michelle S.
2015-01-01
Background Disordered swallowing, or dysphagia, is almost always present to some degree in people with Parkinson’s disease (PD), either causing aspiration or greatly increasing the risk for aspiration during swallowing. This likely contributes to aspiration pneumonia, a leading cause of death in this patient population. Effective airway protection is dependent upon multiple behaviors, including cough and swallowing. Single voluntary cough function is disordered in people with PD and dysphagia. However, the appropriate response to aspirate material is more than one cough, or sequential cough. The goal of this study was to examine voluntary sequential coughing in people with PD, with and without dysphagia. Methods Forty adults diagnosed with idiopathic PD produced two trials of sequential voluntary cough. The cough airflows were obtained using pneumotachograph and facemask and subsequently digitized and recorded. All participants received a modified barium swallow study as part of their clinical care, and the worst penetration–aspiration score observed was used to determine whether the patient had dysphagia. Results There were significant differences in the compression phase duration, peak expiratory flow rates, and amount of air expired of the sequential cough produced by participants with and without dysphagia. Conclusions The presence of dysphagia in people with PD is associated with disordered cough function. Sequential cough, which is important in removing aspirate material from large- and smaller-diameter airways, is also impaired in people with PD and dysphagia compared with those without dysphagia. There may be common neuroanatomical substrates for cough and swallowing impairment in PD leading to the co-occurrence of these dysfunctions. PMID:24792231
Research on parallel algorithm for sequential pattern mining
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao
2008-03-01
Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data of sequential pattern mining have the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics together. Based on the traits mentioned above and combining parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets applying frequent-concept and search-space partition theory, and the second task is to construct frequent sequences using a depth-first search method at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces the access time and improves the mining efficiency. Based on a random data generation procedure and different information structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
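To make the notion of sequential-pattern support concrete, a tiny serial Python sketch is shown below that counts the support of ordered item pairs in a toy sequence database; it is only an illustration of the underlying subsequence test and is neither the SPP nor the AprioriAll implementation.

```python
def is_subsequence(pattern, sequence):
    """True if `pattern` occurs in `sequence` with order preserved (not necessarily contiguous)."""
    it = iter(sequence)
    return all(item in it for item in pattern)

def frequent_2_sequences(database, min_support):
    """Return ordered pairs (a, b) whose support (number of sequences containing
    the pair as a subsequence) reaches min_support."""
    items = {item for seq in database for item in seq}
    result = {}
    for a in items:
        for b in items:
            support = sum(is_subsequence((a, b), seq) for seq in database)
            if support >= min_support:
                result[(a, b)] = support
    return result

# Toy sequence database of customer "purchases"
db = [["a", "b", "c"], ["a", "c"], ["b", "a", "c"], ["a", "b"]]
print(frequent_2_sequences(db, min_support=3))   # {('a', 'c'): 3}
```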
The possibility of application of spiral brain computed tomography to traumatic brain injury.
Lim, Daesung; Lee, Soo Hoon; Kim, Dong Hoon; Choi, Dae Seub; Hong, Hoon Pyo; Kang, Changwoo; Jeong, Jin Hee; Kim, Seong Chun; Kang, Tae-Sin
2014-09-01
Spiral computed tomography (CT), with its advantages of low radiation dose, shorter scan time, and multidimensional reconstruction, is accepted as an essential diagnostic method for evaluating the degree of injury in severe trauma patients and for establishing therapeutic plans. However, conventional sequential CT is preferred over spiral CT for the evaluation of traumatic brain injury (TBI) due to image noise and artifact. We aimed to compare the diagnostic power of spiral facial CT for TBI to that of conventional sequential brain CT. We retrospectively evaluated the images of 315 trauma patients who underwent both brain CT and facial CT simultaneously. Hemorrhagic traumatic brain injuries such as epidural hemorrhage, subdural hemorrhage, subarachnoid hemorrhage, and contusional hemorrhage were evaluated in both image sets. Statistics were performed using Cohen's κ to compare the agreement between the two imaging modalities, and the sensitivity, specificity, positive predictive value, and negative predictive value of spiral facial CT were calculated relative to conventional sequential brain CT. Almost perfect agreement was noted regarding hemorrhagic traumatic brain injuries between spiral facial CT and conventional sequential brain CT (Cohen's κ coefficient, 0.912). Relative to conventional sequential brain CT, the sensitivity, specificity, positive predictive value, and negative predictive value of spiral facial CT were 92.2%, 98.1%, 95.9%, and 96.3%, respectively. In TBI, the diagnostic power of spiral facial CT was equal to that of conventional sequential brain CT. Therefore, expanded spiral facial CT covering the whole frontal lobe could be used to evaluate TBI in the future. Copyright © 2014 Elsevier Inc. All rights reserved.
2011-01-01
Background Surveys of doctors are an important data collection method in health services research. Ways to improve response rates, minimise survey response bias and item non-response, within a given budget, have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors. Methods A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types: general practitioners, specialists, specialists-in-training, and hospital non-specialists, and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or, sequential mixed mode (online followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode, doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost. Results The online mode had a response rate of 12.95%, compared with 19.7% for the simultaneous mixed mode and 20.7% for the sequential mixed mode. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate compared to the simultaneous mixed mode, and a 7.7 percentage point lower response rate compared to the sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to those of the population. However, the online mode had a higher rate of item non-response compared to both mixed modes. The total cost of the online survey was 38% lower than the simultaneous mixed mode and 22% lower than the sequential mixed mode. The cost of the sequential mixed mode was 14% lower than the simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although exhibiting some evidence of response bias. Conclusions Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population. PMID:21888678
Engineering of machine tool's high-precision electric drives
NASA Astrophysics Data System (ADS)
Khayatov, E. S.; Korzhavin, M. E.; Naumovich, N. I.
2018-03-01
The article shows that, in mechanisms with numerical program control, high process quality can be achieved only in systems that adjust the position of the working element with high accuracy, and this requires an expansion of the torque regulation range. In particular, the use of synchronous reactive machines with independent excitation control makes it possible to substantially increase the torque overload in the sequential excitation circuit. Using mathematical and physical modeling methods, it is shown that in an electric drive with a synchronous reactive machine with independent excitation in a sequential excitation circuit, the torque regulation range can be significantly expanded; this is achieved by the effect of sequential excitation, which makes it possible to compensate for the transverse reaction of the armature.
Ultra-Wide Band Non-reciprocity through Sequentially-Switched Delay Lines.
Biedka, Mathew M; Zhu, Rui; Xu, Qiang Mark; Wang, Yuanxun Ethan
2017-01-06
Achieving non-reciprocity through unconventional methods without the use of magnetic material has recently become a subject of great interest. Towards this goal, a time-switching strategy known as the Sequentially-Switched Delay Line (SSDL) is proposed. The essential SSDL configuration consists of six transmission lines of equal length, along with five switches. Each switch is turned on and off sequentially to distribute and route the propagating electromagnetic wave, allowing for simultaneous transmission and reception of signals through the device. Preliminary experimental results with commercial off-the-shelf parts are presented, demonstrating non-reciprocal behavior with greater than 40 dB isolation from 200 kHz to 200 MHz. The theory and experimental results demonstrate that the SSDL concept may lead to future on-chip circulators over multiple octaves of frequency.
Ultra-Wide Band Non-reciprocity through Sequentially-Switched Delay Lines
Biedka, Mathew M.; Zhu, Rui; Xu, Qiang Mark; Wang, Yuanxun Ethan
2017-01-01
Achieving non-reciprocity through unconventional methods without the use of magnetic material has recently become a subject of great interest. Towards this goal, a time-switching strategy known as the Sequentially-Switched Delay Line (SSDL) is proposed. The essential SSDL configuration consists of six transmission lines of equal length, along with five switches. Each switch is turned on and off sequentially to distribute and route the propagating electromagnetic wave, allowing for simultaneous transmission and reception of signals through the device. Preliminary experimental results with commercial off-the-shelf parts are presented, demonstrating non-reciprocal behavior with greater than 40 dB isolation from 200 kHz to 200 MHz. The theory and experimental results demonstrate that the SSDL concept may lead to future on-chip circulators over multiple octaves of frequency. PMID:28059132
Hassan, Wafaa S; Elmasry, Manal S; Elsayed, Heba M; Zidan, Dalia W
2018-09-05
In accordance with International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guidelines, six novel, simple and precise sequential spectrophotometric methods were developed and validated for the simultaneous analysis of Ribavirin (RIB), Sofosbuvir (SOF), and Daclatasvir (DAC) in their mixture without prior separation steps. These drugs are co-administered for the treatment of hepatitis C virus (HCV) infection. HCV is the cause of hepatitis C and of some cancers in humans, such as liver cancer (hepatocellular carcinoma) and lymphomas. These techniques consist of several sequential steps using zero, ratio and/or derivative spectra. DAC was first determined through direct spectrophotometry at 313.7 nm without any interference from the other two drugs, while RIB and SOF can be determined after ratio subtraction through five methods: the ratio difference spectrophotometric method, the successive derivative ratio method, constant center, the isoabsorptive method at 238.8 nm, and mean centering of the ratio spectra (MCR) at 224 nm and 258 nm for RIB and SOF, respectively. The calibration curves are linear over the concentration ranges of 6-42, 10-70 and 4-16 μg/mL for RIB, SOF, and DAC, respectively. The methods were successfully applied to a commercial pharmaceutical preparation of the drugs, spiked human urine, and spiked human plasma. The methods are simple, were developed for the simultaneous determination of binary and ternary mixtures, and enhance the signal-to-noise ratio. They have been successfully applied to the simultaneous analysis of RIB, SOF, and DAC in laboratory-prepared mixtures. The obtained results are statistically compared with those obtained by the official or reported methods, showing no significant difference with respect to accuracy and precision at p = 0.05. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.
2016-09-01
Particle filter methods have been widely used to solve inverse problems with sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, the measurements, and the parameters. In this paper the main focus is the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs, and small intestine, and a tumor loaded with iron oxide nanoparticles. The results indicate that excellent agreement between estimated and exact values is obtained.
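The joint state and parameter estimation described above can be illustrated with a minimal bootstrap particle filter in which each particle carries the state together with a static parameter. The sketch below is an assumption-laden toy example (scalar random-walk dynamics, made-up noise levels and parameter theta), not the authors' radiofrequency hyperthermia model.

```python
# Minimal sketch (not the authors' hyperthermia model): a bootstrap particle
# filter whose particles are augmented with a static parameter, so that states
# and parameters are estimated jointly from noisy measurements.  The scalar
# dynamics, noise levels, and "theta" are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=50, theta=0.8, q=0.1, r=0.2):
    """Generate synthetic data from x_t = theta*x_{t-1} + w_t, y_t = x_t + v_t."""
    x, xs, ys = 0.0, [], []
    for _ in range(T):
        x = theta * x + q * rng.standard_normal()
        xs.append(x)
        ys.append(x + r * rng.standard_normal())
    return np.array(xs), np.array(ys)

def particle_filter(ys, n_particles=2000, q=0.1, r=0.2):
    """Joint state/parameter estimation with an augmented particle (x, theta)."""
    x = rng.standard_normal(n_particles)        # state particles
    theta = rng.uniform(0.0, 1.0, n_particles)  # static-parameter particles
    x_est, theta_est = [], []
    for y in ys:
        # propagate each state with its own parameter value
        x = theta * x + q * rng.standard_normal(n_particles)
        # weight by the measurement likelihood and normalize
        w = np.exp(-0.5 * ((y - x) / r) ** 2)
        w /= w.sum()
        # resample states and parameters together (multinomial resampling)
        idx = rng.choice(n_particles, n_particles, p=w)
        x, theta = x[idx], theta[idx]
        x_est.append(x.mean())
        theta_est.append(theta.mean())
    return np.array(x_est), np.array(theta_est)

xs, ys = simulate()
x_est, theta_est = particle_filter(ys)
print("final estimate of theta:", theta_est[-1])
```

In practice, carrying a static parameter through resampling alone leads to sample impoverishment, so implementations typically add parameter jittering or artificial dynamics; the toy above omits this for brevity.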
Sequential deconvolution from wave-front sensing using bivariate simplex splines
NASA Astrophysics Data System (ADS)
Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai
2015-05-01
Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on the simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: a bivariate simplex splines based average-slopes measurement model is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; then, the distorted wave-front is uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution processing. Simulated experiments at different turbulence strengths show that our method achieves superior image restoration and noise rejection, especially when extracting multidirectional phase derivatives.
Online sequential Monte Carlo smoother for partially observed diffusion processes
NASA Astrophysics Data System (ADS)
Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain
2018-12-01
This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, and with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.
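For orientation, the standard forward-only sequential Monte Carlo recursion for a smoothed additive functional S_t = sum over s of h_s(X_{s-1}, X_s) can be written as below. The notation is generic rather than taken from the paper; the paper's contribution is precisely to replace the transition density q, which is unknown for partially observed diffusions, with an unbiased estimator while keeping the per-step cost growing linearly in the number of particles.

```latex
% Generic forward-only SMC recursion for a smoothed additive functional
% (schematic; q is the transition density, which the paper replaces with an
% unbiased estimator for partially observed diffusions).
\tau_t^{i} = \sum_{j=1}^{N}
  \frac{\omega_{t-1}^{j}\, q\big(\xi_{t-1}^{j}, \xi_t^{i}\big)}
       {\sum_{l=1}^{N} \omega_{t-1}^{l}\, q\big(\xi_{t-1}^{l}, \xi_t^{i}\big)}
  \Big( \tau_{t-1}^{j} + h_t\big(\xi_{t-1}^{j}, \xi_t^{i}\big) \Big),
\qquad
\widehat{S}_t = \sum_{i=1}^{N} \omega_t^{i}\, \tau_t^{i}
```

Here the ξ_t^i are the particles and the ω_t^i their normalized weights; evaluated naively, the recursion costs O(N²) per observation, whereas the paper's online, sampling-based approximation targets cost linear in N.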
Transport of North Pacific 137Cs labeled waters to the south-eastern Atlantic Ocean
NASA Astrophysics Data System (ADS)
Sanchez-Cabeza, J. A.; Levy, I.; Gastaud, J.; Eriksson, M.; Osvath, I.; Aoyama, M.; Povinec, P. P.; Komura, K.
2011-04-01
During the reoccupation of the WOCE transect A10 at 30°S by the BEAGLE2003 cruise, the SHOTS project partners collected a large number of samples for the analysis of isotopic tracers. 137Cs was mostly deposited on the ocean surface during the late 1950s and early 1960s, after the atmospheric detonation of large nuclear devices, which mostly occurred in the Northern Hemisphere. The development of advanced radioanalytical and counting techniques made it possible to obtain, for the first time in this region, a zonal section of 137Cs water concentrations, where little information existed before, thus constituting an important benchmark for further studies. 137Cs concentrations in the upper waters (0-1000 m) of the south-eastern Atlantic Ocean are similar to those observed in the south-western Indian Ocean, suggesting transport of 137Cs labeled waters by the Agulhas Current to the Benguela Current region. In contrast, bomb radiocarbon data do not show this feature, indicating the usefulness of 137Cs as a radiotracer of water mass transport from the Indian to the South Atlantic Ocean.
Nuclear Forensics and Attribution: A National Laboratory Perspective
NASA Astrophysics Data System (ADS)
Hall, Howard L.
2008-04-01
Current capabilities in technical nuclear forensics - the extraction of information from nuclear and/or radiological materials to support the attribution of a nuclear incident to material sources, transit routes, and ultimately perpetrator identity - derive largely from three sources: nuclear weapons testing and surveillance programs of the Cold War, advances in analytical chemistry and materials characterization techniques, and abilities to perform ``conventional'' forensics (e.g., fingerprints) on radiologically contaminated items. Leveraging that scientific infrastructure has provided a baseline capability to the nation, but we are only beginning to explore the scientific challenges that stand between today's capabilities and tomorrow's requirements. These scientific challenges include radically rethinking radioanalytical chemistry approaches, developing rapidly deployable sampling and analysis systems for field applications, and improving analytical instrumentation. Coupled with the ability to measure a signature faster or more exquisitely, we must also develop the ability to interpret those signatures for meaning. This requires understanding of the physics and chemistry of nuclear materials processes well beyond our current level - especially since we are unlikely to ever have direct access to all potential sources of nuclear threat materials.
Dave, Hreem; Phoenix, Vidya; Becker, Edmund R.; Lambert, Scott R.
2015-01-01
OBJECTIVES To compare the incidence of adverse events, visual outcomes and economic costs of sequential versus simultaneous bilateral cataract surgery for infants with congenital cataracts. METHODS We retrospectively reviewed the incidence of adverse events, visual outcomes and medical payments associated with simultaneous versus sequential bilateral cataract surgery for infants with congenital cataracts who underwent cataract surgery when 6 months of age or younger at our institution. RESULTS Records were available for 10 children who underwent sequential surgery at a mean age of 49 days for the first eye and 17 children who underwent simultaneous surgery at a mean age of 68 days (p=.25). We found a similar incidence of adverse events between the two treatment groups. Intraoperative or postoperative complications occurred in 14 eyes. The most common postoperative complication was glaucoma. No eyes developed endophthalmitis. The mean absolute interocular difference in logMAR visual acuities between the two treatment groups was 0.47±0.76 for the sequential group and 0.44±0.40 for the simultaneous group (p=.92). Hospital, drugs, supplies and professional payments were on average 21.9% lower per patient in the simultaneous group. CONCLUSIONS Simultaneous bilateral cataract surgery for infants with congenital cataracts was associated with a 21.9% reduction in medical payments and no discernible difference in the incidence of adverse events or visual outcome. PMID:20697007
Ochiai, Nobuo; Tsunokawa, Jun; Sasamoto, Kikuo; Hoffmann, Andreas
2014-12-05
A novel multi-volatile method (MVM) using sequential dynamic headspace (DHS) sampling for analysis of aroma compounds in aqueous sample was developed. The MVM consists of three different DHS method parameters sets including choice of the replaceable adsorbent trap. The first DHS sampling at 25 °C using a carbon-based adsorbent trap targets very volatile solutes with high vapor pressure (>20 kPa). The second DHS sampling at 25 °C using the same type of carbon-based adsorbent trap targets volatile solutes with moderate vapor pressure (1-20 kPa). The third DHS sampling using a Tenax TA trap at 80 °C targets solutes with low vapor pressure (<1 kPa) and/or hydrophilic characteristics. After the 3 sequential DHS samplings using the same HS vial, the three traps are sequentially desorbed with thermal desorption in reverse order of the DHS sampling and the desorbed compounds are trapped and concentrated in a programmed temperature vaporizing (PTV) inlet and subsequently analyzed in a single GC-MS run. Recoveries of the 21 test aroma compounds for each DHS sampling and the combined MVM procedure were evaluated as a function of vapor pressure in the range of 0.000088-120 kPa. The MVM provided very good recoveries in the range of 91-111%. The method showed good linearity (r2>0.9910) and high sensitivity (limit of detection: 1.0-7.5 ng mL(-1)) even with MS scan mode. The feasibility and benefit of the method was demonstrated with analysis of a wide variety of aroma compounds in brewed coffee. Ten potent aroma compounds from top-note to base-note (acetaldehyde, 2,3-butanedione, 4-ethyl guaiacol, furaneol, guaiacol, 3-methyl butanal, 2,3-pentanedione, 2,3,5-trimethyl pyrazine, vanillin, and 4-vinyl guaiacol) could be identified together with an additional 72 aroma compounds. Thirty compounds including 9 potent aroma compounds were quantified in the range of 74-4300 ng mL(-1) (RSD<10%, n=5). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
New methods, algorithms, and software for rapid mapping of tree positions in coordinate forest plots
A. Dan Wilson
2000-01-01
The theories and methodologies for two new tree mapping methods, the Sequential-target method and the Plot-origin radial method, are described. The methods accommodate the use of any conventional distance measuring device and compass to collect horizontal distance and azimuth data between source or reference positions (origins) and target trees. Conversion equations...
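The conversion equations themselves are not reproduced in this record, but the underlying geometry is simply the mapping of a horizontal distance and compass azimuth measured from a known origin to plot coordinates. The snippet below is an illustrative sketch of that step only; the origin coordinates and the optional declination correction are assumptions for the example, not values from the methods described.

```python
# Illustrative distance/azimuth-to-coordinate conversion for plot mapping.
# The origin coordinates and declination correction are example assumptions.
import math

def target_position(x0, y0, distance_m, azimuth_deg, declination_deg=0.0):
    """Convert a horizontal distance and compass azimuth (clockwise from north)
    measured at an origin (x0, y0) into the target tree's plot coordinates."""
    az = math.radians(azimuth_deg + declination_deg)
    return x0 + distance_m * math.sin(az), y0 + distance_m * math.cos(az)

# Example: a tree 12.4 m from the plot origin at an azimuth of 135 degrees.
print(target_position(0.0, 0.0, 12.4, 135.0))
```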
Adaptation and Promotion of Emergency Medical Service Transportation for Climate Change
Pan, Chih-Long; Chiu, Chun-Wen; Wen, Jet-Chau
2014-01-01
The purpose of this study is to find a proper prehospital transportation scenario planning of an emergency medical service (EMS) system for possible burdensome casualties resulting from extreme climate events. This project focuses on one of the worst natural catastrophic events in Taiwan, the 88 Wind-caused Disasters, caused by the Typhoon Morakot; the case of the EMS transportation in the Xiaolin village is reviewed and analyzed. The sequential-conveyance method is designed to promote the efficiency of all the ambulance services related to transportation time and distance. Initially, a proposed mobile emergency medical center (MEMC) is constructed in a safe location near the area of the disaster. The ambulances are classified into 2 categories: the first-line ambulances, which reciprocate between the MEMC and the disaster area to save time and shorten the working distances and the second-line ambulances, which transfer patients in critical condition from the MEMC to the requested hospitals for further treatment. According to the results, the sequential-conveyance method is more efficient than the conventional method for EMS transportation in a mass-casualty incident (MCI). This method improves the time efficiency by 52.15% and the distance efficiency by 56.02%. This case study concentrates on Xiaolin, a mountain village, which was heavily destroyed by a devastating mudslide during the Typhoon Morakot. The sequential-conveyance method for the EMS transportation in this research is not only more advantageous but also more rational in adaptation to climate change. Therefore, the findings are also important to all the decision-making with respect to a promoted EMS transportation, especially in an MCI. PMID:25501065
NASA Astrophysics Data System (ADS)
Sahlstedt, Elina; Arppe, Laura
2017-04-01
The stable isotope compositions of bones, analysed either from the mineral phase (hydroxyapatite) or from the organic phase (mainly collagen), carry important climatological and ecological information and are therefore widely used in paleontological and archaeological research. For stable isotope analysis, both phases, hydroxyapatite and collagen, have their more or less well established separation and analytical techniques. Recent developments in IRMS and wet chemical extraction methods have facilitated the analysis of very small bone fractions (500 μg or less of starting material) for phosphate (PO4(3-)) oxygen isotope composition. However, the uniqueness and (pre-)historical value of each archaeological and paleontological find leave precious little material available for stable isotope analyses, encouraging further development of microanalytical methods for stable isotope analysis. Here we present the first results in developing extraction methods that combine collagen C- and N-isotope analyses with phosphate O-isotope analyses from a single bone sample fraction. We tested sequential extraction starting with dilute acid demineralization and collection of both the collagen and phosphate fractions, followed by a further purification step with H2O2 (phosphate fraction). First results show that bone sample separates as small as 2 mg may be analysed for their δ15N, δ13C and δ18OPO4 values. The method may be incorporated in detailed investigations of sequentially developing skeletal material such as teeth, potentially allowing for the investigation of interannual variability in climatological/environmental signals or of the early life history of an individual.
Adaptation and promotion of emergency medical service transportation for climate change.
Pan, Chih-Long; Chiu, Chun-Wen; Wen, Jet-Chau
2014-12-01
The purpose of this study is to find a proper prehospital transportation scenario planning of an emergency medical service (EMS) system for possible burdensome casualties resulting from extreme climate events. This project focuses on one of the worst natural catastrophic events in Taiwan, the 88 Wind-caused Disasters, caused by the Typhoon Morakot; the case of the EMS transportation in the Xiaolin village is reviewed and analyzed. The sequential-conveyance method is designed to promote the efficiency of all the ambulance services related to transportation time and distance. Initially, a proposed mobile emergency medical center (MEMC) is constructed in a safe location near the area of the disaster. The ambulances are classified into 2 categories: the first-line ambulances, which reciprocate between the MEMC and the disaster area to save time and shorten the working distances and the second-line ambulances, which transfer patients in critical condition from the MEMC to the requested hospitals for further treatment. According to the results, the sequential-conveyance method is more efficient than the conventional method for EMS transportation in a mass-casualty incident (MCI). This method improves the time efficiency by 52.15% and the distance efficiency by 56.02%. This case study concentrates on Xiaolin, a mountain village, which was heavily destroyed by a devastating mudslide during the Typhoon Morakot. The sequential-conveyance method for the EMS transportation in this research is not only more advantageous but also more rational in adaptation to climate change. Therefore, the findings are also important to all the decision-making with respect to a promoted EMS transportation, especially in an MCI.
Lemons, B; Khaing, H; Ward, A; Thakur, P
2018-06-01
A new sequential separation method for the determination of polonium and actinides (Pu, Am and U) in drinking water samples has been developed that can be used for emergency response or routine water analyses. For the first time, the application of a TEVA chromatography column in the sequential separation of polonium and plutonium has been studied. This method utilizes a rapid Fe3+ co-precipitation step to remove matrix interferences, followed by plutonium oxidation state adjustment to Pu4+ and an incubation period of ~1 h at 50-60 °C to allow Po2+ to oxidize to Po4+. The polonium and plutonium were then separated on a TEVA column, while separation of americium from uranium was performed on a TRU column. After separation, polonium was micro-precipitated with copper sulfide (CuS), while actinides were micro co-precipitated using neodymium fluoride (NdF3) for counting by alpha spectrometry. The method is simple, robust and can be performed quickly with excellent removal of interferences, high chemical recovery and very good alpha peak resolution. The efficiency and reliability of the procedures were tested by using spiked samples. The effect of several transition metals (Cu2+, Pb2+, Fe3+, Fe2+, and Ni2+) on the performance of this method was also assessed to evaluate potential matrix effects. Studies indicate that the presence of up to 25 mg of these cations in the samples had no adverse effect on the recovery or the resolution of polonium alpha peaks. Copyright © 2018 Elsevier Ltd. All rights reserved.
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM10-2.5
Code of Federal Regulations, 2010 CFR
2010-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM10-2.5
Code of Federal Regulations, 2011 CFR
2011-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM10-2.5.
Code of Federal Regulations, 2012 CFR
2012-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
Bansal, A.; Kapoor, R.; Singh, S. K.; Kumar, N.; Oinam, A. S.; Sharma, S. C.
2012-01-01
Aims: Dosimetric and radiobiological comparison of two radiation schedules in localized carcinoma of the prostate: standard Three-Dimensional Conformal Radiotherapy (3DCRT) followed by an Intensity Modulated Radiotherapy (IMRT) boost (sequential-IMRT) versus Simultaneous Integrated Boost IMRT (SIB-IMRT). Material and Methods: Thirty patients were enrolled. In all, the target consisted of PTV P + SV (prostate and seminal vesicles) and PTV LN (lymph nodes), where PTV refers to planning target volume, and the critical structures included the bladder, rectum and small bowel. All patients were treated with the sequential-IMRT plan, but for dosimetric comparison, a SIB-IMRT plan was also created. The prescription dose to PTV P + SV was 74 Gy in both strategies but with different dose per fraction; however, the dose to PTV LN was 50 Gy delivered in 25 fractions over 5 weeks for sequential-IMRT and 54 Gy delivered in 27 fractions over 5.5 weeks for SIB-IMRT. The treatment plans were compared in terms of dose–volume histograms. Also, the Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) obtained with the two plans were compared. Results: The volume of rectum receiving 70 Gy or more (V > 70 Gy) was reduced to 18.23% with SIB-IMRT from 22.81% with sequential-IMRT. SIB-IMRT reduced the mean doses to both bladder and rectum by 13% and 17%, respectively, as compared to sequential-IMRT. NTCP of 0.86 ± 0.75% and 0.01 ± 0.02% for the bladder, 5.87 ± 2.58% and 4.31 ± 2.61% for the rectum, and 8.83 ± 7.08% and 8.25 ± 7.98% for the bowel was seen with the sequential-IMRT and SIB-IMRT plans, respectively. Conclusions: For equal PTV coverage, SIB-IMRT markedly reduced doses to critical structures and should therefore be considered as the strategy for dose escalation. SIB-IMRT achieves a lower NTCP than sequential-IMRT. PMID:23204659
Rawat, Varun; Kumar, B Senthil; Sudalai, Arumugam
2013-06-14
A new sequential organocatalytic method for the synthesis of chiral 3-substituted (X = OH, NH2) tetrahydroquinoline derivatives (THQs) [ee up to 99%, yield up to 87%] based on α-aminooxylation or -amination followed by reductive cyclization of o-nitrohydrocinnamaldehydes has been described. This methodology has been efficiently demonstrated in the synthesis of two important bioactive molecules namely (-)-sumanirole (96% ee) and 1-[(S)-3-(dimethylamino)-3,4-dihydro-6,7-dimethoxy-quinolin-1(2H)-yl]propanone (92% ee).
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but the washing technique detected a significantly higher number of thrips larvae compared with visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed-precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's Power Law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
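As an illustration of how such minimum sample sizes follow from Taylor's Power Law, a common fixed-precision formulation sets the variance-mean relation s^2 = a*m^b and defines the precision D as the standard error divided by the mean, giving n = a*m^(b-2)/D^2. The coefficients a and b in the sketch below are hypothetical placeholders, not the values fitted in this study.

```python
# Fixed-precision sample size from Taylor's power law, s^2 = a * m**b.
# With precision D defined as SE/mean, n = a * m**(b - 2) / D**2.
# The coefficients a and b are hypothetical, not the study's fitted values.
def min_sample_size(mean_density, a, b, precision=0.25):
    return a * mean_density ** (b - 2) / precision ** 2

for m in (1, 5, 10):
    print(m, round(min_sample_size(m, a=2.0, b=1.4)))
```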
Meyer, Marjolaine D; Terry, Leon A
2008-08-27
Methods devised for oil extraction from avocado (Persea americana Mill.) mesocarp (e.g., Soxhlet) are usually lengthy and require operation at high temperature. Moreover, methods for extracting sugars from avocado tissue (e.g., 80% ethanol, v/v) do not allow for lipids to be easily measured from the same sample. This study describes a new simple method that enabled sequential extraction and subsequent quantification of both fatty acids and sugars from the same avocado mesocarp tissue sample. Freeze-dried mesocarp samples of avocado cv. Hass fruit of different ripening stages were extracted by homogenization with hexane and the oil extracts quantified for fatty acid composition by GC. The resulting filter residues were readily usable for sugar extraction with methanol (62.5%, v/v). For comparison, oil was also extracted using the standard Soxhlet technique and the resulting thimble residue extracted for sugars as before. An additional experiment was carried out whereby filter residues were also extracted using ethanol. Average oil yield using the Soxhlet technique was significantly (P < 0.05) higher than that obtained by homogenization with hexane, although the difference remained very slight, and fatty acid profiles of the oil extracts following both methods were very similar. Oil recovery improved with increasing ripeness of the fruit with minor differences observed in the fatty acid composition during postharvest ripening. After lipid removal, methanolic extraction was superior in recovering sucrose and perseitol as compared to 80% ethanol (v/v), whereas mannoheptulose recovery was not affected by solvent used. The method presented has the benefits of shorter extraction time, lower extraction temperature, and reduced amount of solvent and can be used for sequential extraction of fatty acids and sugars from the same sample.
[Using sequential indicator simulation method to define risk areas of soil heavy metals in farmland].
Yang, Hao; Song, Ying Qiang; Hu, Yue Ming; Chen, Fei Xiang; Zhang, Rui
2018-05-01
Heavy metals in soil have serious impacts on safety, the ecological environment and human health due to their toxicity and accumulation. It is necessary to efficiently identify risk areas of heavy metals in farmland soil, which is of great significance for environmental protection, pollution warning and farmland risk control. We collected 204 samples and analyzed the contents of seven heavy metals (Cu, Zn, Pb, Cd, Cr, As, Hg) in Zengcheng District of Guangzhou, China. To overcome problems with the data, including outliers and skewed distributions, and the smoothing effect of traditional kriging methods, we used the sequential indicator simulation method (SISIM) to define the spatial distribution of heavy metals, combined with the Hakanson index method to identify potential ecological risk areas of heavy metals in farmland. The results showed that: (1) With similar spatial prediction accuracy for soil heavy metals, SISIM reproduced local detail better than ordinary kriging in small-scale areas. Compared with indicator kriging, SISIM had a lower error rate (4.9%-17.1%) in the uncertainty evaluation of heavy-metal risk identification. SISIM showed less smoothing and was more suitable for simulating the spatial uncertainty of soil heavy metals and for risk identification. (2) There was no pollution in Zengcheng's farmland. Moderate potential ecological risk was found in the southern part of the study area due to industrial production, human activities, and river sediments. This study combined sequential indicator simulation with the Hakanson risk index method and effectively overcame the outlier information loss and smoothing effect of the traditional kriging method. It provides a new way to identify soil heavy metal risk areas in farmland under uneven sampling.
Yang, Qiang; Ma, Yanling; Zhao, Yongxue; She, Zhennan; Wang, Long; Li, Jie; Wang, Chunling; Deng, Yihui
2013-01-01
Background Sequential low-dose chemotherapy has received great attention for its unique advantages in attenuating multidrug resistance of tumor cells. Nevertheless, it runs the risk of producing new problems associated with the accelerated blood clearance phenomenon, especially with multiple injections of PEGylated liposomes. Methods Liposomes were labeled with the fluorescent phospholipid 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-(7-nitro-2-1,3-benzoxadiazol-4-yl) and loaded with epirubicin (EPI). The pharmacokinetic profile and biodistribution of the drug and liposome carrier following multiple injections were determined. Meanwhile, the antitumor effect of sequential low-dose chemotherapy was tested. To clarify this unexpected phenomenon, the production of polyethylene glycol (PEG)-specific immunoglobulin M (IgM), drug release, and residual complement activity experiments were conducted in serum. Results The first or sequential injections of PEGylated liposomes within a certain dose range induced the rapid clearance of subsequently injected PEGylated liposomal EPI. Of note, the clearance of EPI was two- to three-fold faster than that of the liposome itself, and a large amount of EPI was released from liposomes in the first 30 minutes in a manner directly dependent on complement activation. The therapeutic efficacy of liposomal EPI (0.75 mg EPI/kg body weight) following 10 days of sequential injections in S180 tumor-bearing mice was almost completely abolished between the sixth and tenth days of the sequential injections, even though the subsequently injected doses were doubled. The level of PEG-specific IgM in the blood increased rapidly, with a larger amount of complement being activated while the concentration of EPI in blood and tumor tissue was significantly reduced. Conclusion Our investigation implies that the accelerated blood clearance phenomenon and its accompanying rapid leakage and clearance of drug following sequential low-dose injections may reverse the unique pharmacokinetic–toxicity profile of liposomes, which deserves attention. Therefore, a more reasonable treatment regimen should be selected to lessen or even eliminate this phenomenon. PMID:23576868
Article and method for making an article
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacy, Benjamin Paul; Schick, David Edward; Kottilingam, Srikanth Chandrudu
An article and a method for making shaped cooling holes in an article are provided. The method includes the steps of depositing a metal alloy powder to form an initial layer including at least one aperture, melting the metal alloy powder with a focused energy source to transform the powder layer to a sheet of metal alloy, sequentially depositing an additional layer of the metal alloy powder to form a layer including at least one aperture corresponding to the at least one aperture in the initial layer, melting the additional layer of the metal alloy powder with the focused energy source to increase the sheet thickness, and repeating the steps of sequentially depositing and melting the additional layers of metal alloy powder until a structure including at least one aperture having a predetermined profile is obtained. The structure is attached to a substrate to make the article.
A Bayesian approach to tracking patients having changing pharmacokinetic parameters
NASA Technical Reports Server (NTRS)
Bayard, David S.; Jelliffe, Roger W.
2004-01-01
This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.
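A minimal sketch of the IMM cycle (mixing, model-matched Kalman filtering, mode-probability update, and combination) is given below for two scalar random-walk models that differ only in process noise, mimicking a parameter that is sometimes stable and sometimes changing. The dynamics, noise levels, and switching probabilities are illustrative assumptions, not the pharmacokinetic models used in the paper.

```python
# Minimal IMM sketch with two scalar models ("stable" vs. "changing"), each a
# Kalman filter differing only in process noise.  All values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

Q = np.array([1e-4, 1e-1])          # model-specific process variances
R = 0.25                            # measurement variance
P_trans = np.array([[0.97, 0.03],   # Markov switching matrix, p[j, i] = P(i | j)
                    [0.03, 0.97]])

def imm_step(x, P, mu, y):
    """One IMM cycle: mix, filter per model, update mode probabilities, combine."""
    # 1) mixing
    c_bar = P_trans.T @ mu                              # c_bar[i] = sum_j p[j,i] mu[j]
    mix = (P_trans * mu[:, None]) / c_bar[None, :]      # mix[j, i] = mu_{j|i}
    x0 = mix.T @ x                                      # mixed means per model
    P0 = np.array([np.sum(mix[:, i] * (P + (x - x0[i]) ** 2)) for i in range(2)])
    # 2) model-matched Kalman filtering (random-walk prediction, scalar update)
    like = np.empty(2)
    for i in range(2):
        xp, Pp = x0[i], P0[i] + Q[i]                    # predict
        S = Pp + R                                      # innovation variance
        K = Pp / S                                      # Kalman gain
        x[i], P[i] = xp + K * (y - xp), (1 - K) * Pp    # update
        like[i] = np.exp(-0.5 * (y - xp) ** 2 / S) / np.sqrt(2 * np.pi * S)
    # 3) mode-probability update and 4) combination
    mu = like * c_bar
    mu /= mu.sum()
    x_comb = mu @ x
    P_comb = np.sum(mu * (P + (x - x_comb) ** 2))
    return x, P, mu, x_comb, P_comb

# Synthetic data: the tracked parameter jumps halfway through the record.
truth = np.r_[np.full(40, 1.0), np.full(40, 2.0)]
ys = truth + np.sqrt(R) * rng.standard_normal(truth.size)

x, P, mu = np.zeros(2), np.ones(2), np.array([0.5, 0.5])
for y in ys:
    x, P, mu, x_comb, _ = imm_step(x, P, mu, y)
print("final combined estimate:", round(float(x_comb), 3))
```

The combined estimate automatically re-weights toward the high-process-noise model when the observed parameter jumps, which is the behavior exploited for acutely ill, unstable patients.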
Hu, B.X.; He, C.
2008-01-01
An iterative inverse method, the sequential self-calibration method, is developed for mapping the spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. © International Association for Mathematical Geology 2008.
Abraham, Joanna; Kannampallil, Thomas; Brenner, Corinne; Lopez, Karen D; Almoosa, Khalid F; Patel, Bela; Patel, Vimla L
2016-02-01
Effective communication during nurse handoffs is instrumental in ensuring safe and quality patient care. Much of the prior research on nurse handoffs has utilized retrospective methods such as interviews, surveys and questionnaires. While extremely useful, an in-depth understanding of the structure and content of conversations, and the inherent relationships within the content is paramount to designing effective nurse handoff interventions. In this paper, we present a methodological framework-Sequential Conversational Analysis (SCA)-a mixed-method approach that integrates qualitative conversational analysis with quantitative sequential pattern analysis. We describe the SCA approach and provide a detailed example as a proof of concept of its use for the analysis of nurse handoff communication in a medical intensive care unit. This novel approach allows us to characterize the conversational structure, clinical content, disruptions in the conversation, and the inherently phasic nature of nurse handoff communication. The characterization of communication patterns highlights the relationships underlying the verbal content of nurse handoffs with specific emphasis on: the interactive nature of conversation, relevance of role-based (incoming, outgoing) communication requirements, clinical content focus on critical patient-related events, and discussion of pending patient management tasks. We also discuss the applicability of the SCA approach as a method for providing in-depth understanding of the dynamics of communication in other settings and domains. Copyright © 2015 Elsevier Inc. All rights reserved.
Lucius, Aaron L.; Maluf, Nasib K.; Fischer, Christopher J.; Lohman, Timothy M.
2003-01-01
Helicase-catalyzed DNA unwinding is often studied using “all or none” assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using “n-step” sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the “kinetic step size”, m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using “n-step” sequential mechanisms has previously been limited by an inability to float the number of “unwinding steps”, n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, fss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain fss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation. PMID:14507688
Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M
2003-10-01
Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f(ss)(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f(ss)(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
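For the simplest special case, n identical irreversible steps each occurring with rate constant k_U and an overall amplitude A (ignoring the initiation and processivity terms treated in the full analysis), the all-or-none time course reduces to the regularized incomplete gamma function:

```latex
% Simplest n-step sequential scheme: n identical irreversible steps with rate
% constant k_U and overall amplitude A; initiation and processivity terms of
% the full model are omitted.
f_{ss}(t) = A\,\frac{\gamma(n, k_U t)}{\Gamma(n)}
          = A\left[1 - \frac{\Gamma(n, k_U t)}{\Gamma(n)}\right],
\qquad
\gamma(n, z) = \int_0^{z} u^{\,n-1} e^{-u}\, du
```

This makes explicit why the gamma and incomplete gamma functions appear in the analytic expressions, and why n and the kinetic step size can in principle be floated as continuous fitting parameters.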
How to Compress Sequential Memory Patterns into Periodic Oscillations: General Reduction Rules
Zhang, Kechen
2017-01-01
A neural network with symmetric reciprocal connections always admits a Lyapunov function, whose minima correspond to the memory states stored in the network. Networks with suitable asymmetric connections can store and retrieve a sequence of memory patterns, but the dynamics of these networks cannot be characterized as readily as that of the symmetric networks due to the lack of established general methods. Here, a reduction method is developed for a class of asymmetric attractor networks that store sequences of activity patterns as associative memories, as in a Hopfield network. The method projects the original activity pattern of the network to a low-dimensional space such that sequential memory retrievals in the original network correspond to periodic oscillations in the reduced system. The reduced system is self-contained and provides quantitative information about the stability and speed of sequential memory retrievals in the original network. The time evolution of the overlaps between the network state and the stored memory patterns can also be determined from extended reduced systems. The reduction procedure can be summarized by a few reduction rules, which are applied to several network models, including coupled networks and networks with time-delayed connections, and the analytical solutions of the reduced systems are confirmed by numerical simulations of the original networks. Finally, a local learning rule that provides an approximation to the connection weights involving the pseudoinverse is also presented. PMID:24877729
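As a concrete example of the kind of asymmetric connectivity the reduction rules are aimed at (the specific networks analyzed in the paper may differ), the textbook sequence-storing network adds to the symmetric Hopfield term an asymmetric term that links each stored pattern to its successor:

```latex
% Textbook sequence-storing connectivity: symmetric Hopfield term plus an
% asymmetric successor term; the networks reduced in the paper may use other
% specific forms.
W_{ij} = \frac{1}{N}\sum_{\mu=1}^{p} \xi_i^{\mu}\,\xi_j^{\mu}
       + \frac{\lambda}{N}\sum_{\mu=1}^{p-1} \xi_i^{\mu+1}\,\xi_j^{\mu}
```

Here the ξ^1 through ξ^p are the stored patterns and λ sets the strength of the sequential component; retrieval then cycles through the patterns, which is the periodic-oscillation picture the reduced system makes explicit.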
Five methods were used for the extraction of hexachlorobutadiene and chlorobenzenes from a contaminated estuarine sediment. The following extraction methods were used: Soxhlet extraction, sonication and solvent extraction, sequential solvent extraction, saponification and solv...
Protocol Analysis as a Tool in Function and Task Analysis
1999-10-01
Autocontingency. The use of log-linear and logistic regression methods to analyse sequential data seems appealing, and is strongly advocated by... collection and analysis of observational data. Behavior Research Methods, Instruments, and Computers, 23(3), 415-429. Patrick, J. D. (1991). Snob: A
2004-03-01
Narratives, phenomenologies, ethnographies, grounded theory, case studies; mixed methods: sequential, concurrent, transformative (Creswell)... ethnographies, grounded theory studies and case studies (Creswell, 2003:18). The methods used in qualitative study provide the framework for... Definition: Grounded theory provides a structured
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced-space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction in computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
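In generic reduced-space SQP notation (not the paper's exact formulation), with equality constraints c(x) = 0 coming from the condensed harmonic balance equations and Z_k a basis for the null space of the constraint Jacobian at the current iterate, the step is parameterized as:

```latex
% Generic reduced-space SQP step; notation is illustrative, not the paper's.
\nabla c(x_k)^{\mathsf T} Z_k = 0, \qquad
x_{k+1} = x_k + Y_k\, p_Y + Z_k\, p_Z
```

The range-space component p_Y restores feasibility of the linearized constraints and the null-space component p_Z solves the reduced quadratic subproblem; eliminating the equality constraints in this way is what leaves only the bound-constrained problem mentioned above.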
A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences.
Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen
2015-01-01
Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, most existing methods are heuristic algorithms that cannot reveal all OPSMs, since the problem is NP-complete. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequences (ACS) between every two row sequences, so that no deep OPSMs are missed. Then, an improved prefix-tree data structure was used to store and traverse the ACS, and the Apriori principle was employed to efficiently mine the frequent sequential patterns. Finally, experiments were conducted on gene and synthetic datasets. Results demonstrate the effectiveness and efficiency of the method.
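To make the ACS idea concrete, the toy sketch below (not the authors' optimized algorithm or prefix-tree structure) converts each row of a small, made-up expression matrix into the ordering of its column indices and then enumerates all common subsequences of two such orderings; these are exactly the column patterns on which the two rows agree, i.e. candidate OPSM patterns supported by both rows.

```python
# Toy illustration of the ACS idea (not the paper's optimized algorithm):
# each row of a data matrix induces an ordering of column indices, and the
# common subsequences of two such orderings are candidate OPSM column patterns
# supported by both rows.
from functools import lru_cache
import numpy as np

def all_common_subsequences(a, b):
    """Return all non-empty common subsequences of sequences a and b."""
    a, b = tuple(a), tuple(b)

    @lru_cache(maxsize=None)
    def suffixes(i, j):
        # All common subsequences (possibly empty) of a[i:] and b[j:].
        out = {()}
        for p in range(i, len(a)):
            for q in range(j, len(b)):
                if a[p] == b[q]:
                    out |= {(a[p],) + rest for rest in suffixes(p + 1, q + 1)}
        return frozenset(out)

    return {s for s in suffixes(0, 0) if s}

# Hypothetical 2 x 4 expression matrix; rows are converted to column orderings.
data = np.array([[4.1, 1.2, 3.3, 2.0],
                 [9.0, 0.5, 2.7, 8.1]])
orderings = [tuple(np.argsort(row)) for row in data]   # column indices, ascending
print(sorted(all_common_subsequences(*orderings), key=len, reverse=True))
```

The recursive enumeration is exponential in the worst case and is only meant to convey the definition; scalable mining is what the prefix-tree storage and Apriori pruning in the paper address.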
Brown, Peter; Pullan, Wayne; Yang, Yuedong; Zhou, Yaoqi
2016-02-01
The three dimensional tertiary structure of a protein at near atomic level resolution provides insight alluding to its function and evolution. As protein structure decides its functionality, similarity in structure usually implies similarity in function. As such, structure alignment techniques are often useful in the classifications of protein function. Given the rapidly growing rate of new, experimentally determined structures being made available from repositories such as the Protein Data Bank, fast and accurate computational structure comparison tools are required. This paper presents SPalignNS, a non-sequential protein structure alignment tool using a novel asymmetrical greedy search technique. The performance of SPalignNS was evaluated against existing sequential and non-sequential structure alignment methods by performing trials with commonly used datasets. These benchmark datasets used to gauge alignment accuracy include (i) 9538 pairwise alignments implied by the HOMSTRAD database of homologous proteins; (ii) a subset of 64 difficult alignments from set (i) that have low structure similarity; (iii) 199 pairwise alignments of proteins with similar structure but different topology; and (iv) a subset of 20 pairwise alignments from the RIPC set. SPalignNS is shown to achieve greater alignment accuracy (lower or comparable root-mean squared distance with increased structure overlap coverage) for all datasets, and the highest agreement with reference alignments from the challenging dataset (iv) above, when compared with both sequentially constrained alignments and other non-sequential alignments. SPalignNS was implemented in C++. The source code, binary executable, and a web server version is freely available at: http://sparks-lab.org yaoqi.zhou@griffith.edu.au. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Sequential Versus Concurrent Trastuzumab in Adjuvant Chemotherapy for Breast Cancer
Perez, Edith A.; Suman, Vera J.; Davidson, Nancy E.; Gralow, Julie R.; Kaufman, Peter A.; Visscher, Daniel W.; Chen, Beiyun; Ingle, James N.; Dakhil, Shaker R.; Zujewski, JoAnne; Moreno-Aspitia, Alvaro; Pisansky, Thomas M.; Jenkins, Robert B.
2011-01-01
Purpose NCCTG (North Central Cancer Treatment Group) N9831 is the only randomized phase III trial evaluating trastuzumab added sequentially or used concurrently with chemotherapy in resected stages I to III invasive human epidermal growth factor receptor 2–positive breast cancer. Patients and Methods Patients received doxorubicin and cyclophosphamide every 3 weeks for four cycles, followed by paclitaxel weekly for 12 weeks (arm A), paclitaxel plus sequential trastuzumab weekly for 52 weeks (arm B), or paclitaxel plus concurrent trastuzumab for 12 weeks followed by trastuzumab for 40 weeks (arm C). The primary end point was disease-free survival (DFS). Results Comparison of arm A (n = 1,087) and arm B (n = 1,097), with 6-year median follow-up and 390 events, revealed 5-year DFS rates of 71.8% and 80.1%, respectively. DFS was significantly increased with trastuzumab added sequentially to paclitaxel (log-rank P < .001; arm B/arm A hazard ratio [HR], 0.69; 95% CI, 0.57 to 0.85). Comparison of arm B (n = 954) and arm C (n = 949), with 6-year median follow-up and 313 events, revealed 5-year DFS rates of 80.1% and 84.4%, respectively. There was an increase in DFS with concurrent trastuzumab and paclitaxel relative to sequential administration (arm C/arm B HR, 0.77; 99.9% CI, 0.53 to 1.11), but the P value (.02) did not cross the prespecified O'Brien-Fleming boundary (.00116) for the interim analysis. Conclusion DFS was significantly improved with 52 weeks of trastuzumab added to adjuvant chemotherapy. On the basis of a positive risk-benefit ratio, we recommend that trastuzumab be incorporated into a concurrent regimen with taxane chemotherapy as an important standard-of-care treatment alternative to a sequential regimen. PMID:22042958
NASA Astrophysics Data System (ADS)
Adams, Daniel L.; Alpaugh, R. Katherine; Tsai, Susan; Tang, Cha-Mei; Stefansson, Steingrimur
2016-09-01
In tissue biopsies, formalin-fixed paraffin-embedded cancer blocks are micro-sectioned, producing multiple semi-identical specimens that are analyzed and subtyped proteomically and genomically with numerous biomarkers. In blood-based biopsies (BBBs), blood is purified for circulating tumor cells (CTCs) and clinical utility is typically limited to cell enumeration, as only 2-3 positive fluorescent markers and 1 negative marker can be used. As such, increasing the number of subtyping biomarkers on each individual CTC could dramatically enhance the clinical utility of BBBs, allowing in-depth interrogation of clinically relevant CTCs. We describe a simple and inexpensive method for quenching the specific fluors of fluorescently stained CTCs followed by sequential restaining with additional biomarkers. As proof of principle, a CTC panel, an immunosuppression panel and a stem cell panel were used to sequentially subtype individual fluorescently stained patient CTCs, suggesting a simple and universal technique to analyze multiple clinically applicable immunomarkers from BBBs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padbury, Richard P.; Jur, Jesse S., E-mail: jsjur@ncsu.edu
Previous research exploring inorganic materials nucleation behavior on polymers via atomic layer deposition indicates that hybrid organic–inorganic materials form within the subsurface of the polymer. This has inspired adaptations to the process, such as sequential vapor infiltration, which enhances the diffusion of organometallic precursors into the subsurface of the polymer to promote the formation of a hybrid organic–inorganic coating. This work highlights the fundamental difference in mass uptake behavior between atomic layer deposition and sequential vapor infiltration using in-situ methods. In particular, in-situ quartz crystal microgravimetry is used to compare the mass uptake behavior of trimethyl aluminum in poly(butylene terephthalate) and polyamide-6 polymer thin films. The importance of trimethyl aluminum diffusion into the polymer subsurface and the subsequent chemical reactions with polymer functional groups are discussed.
Sequential Injection Analysis for Optimization of Molecular Biology Reactions
Allen, Peter B.; Ellington, Andrew D.
2011-01-01
In order to automate the optimization of complex biochemical and molecular biology reactions, we developed a Sequential Injection Analysis (SIA) device and combined this with a Design of Experiment (DOE) algorithm. This combination of hardware and software automatically explores the parameter space of the reaction and provides continuous feedback for optimizing reaction conditions. As an example, we optimized the endonuclease digest of a fluorogenic substrate, and showed that the optimized reaction conditions also applied to the digest of the substrate outside of the device, and to the digest of a plasmid. The sequential technique quickly arrived at optimized reaction conditions with less reagent use than a batch process (such as a fluid handling robot exploring multiple reaction conditions in parallel) would have. The device and method should now be amenable to much more complex molecular biology reactions whose variable spaces are correspondingly larger. PMID:21338059
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are then used for CCV estimation. We assume that the extracted ATs include estimation errors, which we model as zero-mean white Gaussian noise with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator for the case in which the synchronization times between recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
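Under the stated assumptions, the activation-time model can be written as below; the notation is illustrative rather than taken from the paper, with s the slowness vector whose magnitude is the reciprocal of the conduction velocity and tau_k the unknown synchronization offset of the k-th sequential acquisition.

```latex
% Illustrative planar-wavefront activation-time model; notation is not taken
% from the paper.  s is the slowness vector (|s| = 1/CCV), tau_k the unknown
% synchronization offset of the k-th acquisition.
t_{k,i} = \tau_k + \mathbf{s}^{\mathsf T}
          \begin{pmatrix} x_{k,i} \\ y_{k,i} \end{pmatrix}
        + \varepsilon_{k,i},
\qquad
\mathbf{s} = \frac{1}{v}\begin{pmatrix}\cos\theta \\ \sin\theta\end{pmatrix},
\qquad
\varepsilon_{k,i} \sim \mathcal{N}\!\left(0, \sigma_{k,i}^{2}\right)
```

Maximizing the Gaussian likelihood then amounts to a weighted least-squares fit over s and the offsets tau_k, from which the propagation direction theta and the velocity v follow.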
Mixed Methods in CAM Research: A Systematic Review of Studies Published in 2012
Bishop, Felicity L.; Holmes, Michelle M.
2013-01-01
Background. Mixed methods research uses qualitative and quantitative methods together in a single study or a series of related studies. Objectives. To review the prevalence and quality of mixed methods studies in complementary medicine. Methods. All studies published in the top 10 integrative and complementary medicine journals in 2012 were screened. The quality of mixed methods studies was appraised using a published tool designed for mixed methods studies. Results. 4% of papers (95 out of 2349) reported mixed methods studies, 80 of which met criteria for applying the quality appraisal tool. The most popular formal mixed methods design was triangulation (used by 74% of studies), followed by embedded (14%), sequential explanatory (8%), and finally sequential exploratory (5%). Quantitative components were generally of higher quality than qualitative components; when quantitative components involved RCTs they were of particularly high quality. Common methodological limitations were identified. Most strikingly, none of the 80 mixed methods studies addressed the philosophical tensions inherent in mixing qualitative and quantitative methods. Conclusions and Implications. The quality of mixed methods research in CAM can be enhanced by addressing philosophical tensions and improving reporting of (a) analytic methods and reflexivity (in qualitative components) and (b) sampling and recruitment-related procedures (in all components). PMID:24454489
Zonnevijlle, E D; Somia, N N; Abadia, G P; Stremel, R W; Maldonado, C J; Werker, P M; Kon, M; Barker, J H
2000-09-01
Dynamic graciloplasty is used as a treatment modality for total urinary incontinence caused by a paralyzed sphincter. A problem with this application is undesirable fatigue of the muscle caused by continuous electrical stimulation. Therefore, the neosphincter must be trained via a rigorous regimen to transform it from a fatigue-prone state to a fatigue-resistant state. To avoid or shorten this training period, the application of sequential segmental neuromuscular stimulation (SSNS) was examined. This form of stimulation previously proved to be highly effective in acutely reducing fatigue caused by electrical stimulation. The contractile function and perfusion of gracilis muscles employed as neosphincters were compared between conventional single-channel continuous stimulation and multichannel sequential stimulation in 8 dogs. The sequentially stimulated neosphincter proved to have an endurance 2.9 times longer (as measured by halftime to fatigue) than with continuous stimulation and better blood perfusion during stimulation (both significant changes, p < 0.05). Clinically, this will not make training of the muscle obsolete, but SSNS could reduce the need for long and rigorous training protocols, making dynamic graciloplasty more attractive as a method of treating urinary or fecal incontinence.
Analyzing multicomponent receptive fields from neural responses to natural stimuli
Rowekamp, Ryan; Sharpee, Tatyana O
2011-01-01
The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916
Gao, Yan-Song; Su, Jing-Tan; Yan, Yong-Bin
2010-06-25
The non-cooperative or sequential events which occur during protein thermal denaturation are closely correlated with protein folding, stability, and physiological functions. In this research, the sequential events of human brain-type creatine kinase (hBBCK) thermal denaturation were studied by differential scanning calorimetry (DSC), CD, and intrinsic fluorescence spectroscopy. DSC experiments revealed that the thermal denaturation of hBBCK was calorimetrically irreversible. The existence of several endothermic peaks suggested that the denaturation involved stepwise conformational changes, which were further verified by the discrepancy in the transition curves obtained from various spectroscopic probes. During heating, the disruption of the active site structure occurred prior to the secondary and tertiary structural changes. The thermal unfolding and aggregation of hBBCK was found to occur through sequential events. This is quite different from the behavior of muscle-type CK (MMCK). The results herein suggest that BBCK and MMCK undergo quite dissimilar thermal unfolding pathways, although they are highly conserved in primary and tertiary structure. A minor difference in structure might endow the isoenzymes with dissimilar local structural stabilities, which further contribute to their isoenzyme-specific thermal stabilities.
Pan, Chang-Jiang; Hou, Yan-Hua; Zhang, Bin-Bin; Zhang, Lin-Cai
2014-01-01
This paper presents a simple method to sequentially immobilize poly(ethylene glycol) (PEG) and albumin on a titanium surface to enhance blood compatibility. Attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR) analysis indicated that PEG and albumin were successfully immobilized on the titanium surface. Water contact angle results showed a more hydrophilic surface after the immobilization. The immobilized PEG or albumin not only markedly prevents platelet adhesion and activation but also prolongs the activated partial thromboplastin time (APTT), leading to improved anticoagulation. Moreover, immobilization of albumin on the PEG-modified surface can further improve the anticoagulation. The approach in the present study provides an effective and efficient method to improve the anticoagulation of blood-contacting biomedical devices such as coronary stents.
When to Use What Research Design
ERIC Educational Resources Information Center
Vogt, W. Paul; Gardner, Dianne C.; Haeffele, Lynne M.
2012-01-01
Systematic, practical, and accessible, this is the first book to focus on finding the most defensible design for a particular research question. Thoughtful guidelines are provided for weighing the advantages and disadvantages of various methods, including qualitative, quantitative, and mixed methods designs. The book can be read sequentially or…
It's Deja Vu All over Again: Using Multiple-Spell Discrete-Time Survival Analysis.
ERIC Educational Resources Information Center
Willett, John B.; Singer, Judith D.
1995-01-01
The multiple-spell discrete-time survival analysis method is introduced and illustrated using longitudinal data on exit from and reentry into the teaching profession. The method is applicable to many educational problems involving the sequential occurrence of disparate events or episodes. (SLD)
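As a concrete illustration of the technique (a minimal single-spell sketch, not the authors' analysis; the multiple-spell extension adds a spell index and spell-specific predictors), discrete-time survival analysis can be fit as a logistic regression on a person-period data set. All data and variable names below are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Illustrative data: years until leaving teaching (duration), whether the
    # exit was observed (event=1) or censored (event=0), and one covariate.
    durations = np.array([3, 1, 4, 2, 4])
    events    = np.array([1, 1, 0, 1, 1])
    covariate = np.array([0.2, 1.5, -0.3, 0.8, 0.0])

    # Expand to a person-period data set: one row per person per year at risk.
    rows, ys = [], []
    for d, e, x in zip(durations, events, covariate):
        for t in range(1, d + 1):
            rows.append([t, x])
            ys.append(1 if (t == d and e == 1) else 0)
    rows, ys = np.array(rows, float), np.array(ys)

    # Discrete-time hazard model: logistic regression of the event indicator
    # on period (entered linearly here for brevity) and the covariate.
    model = LogisticRegression().fit(rows, ys)
    print(model.intercept_, model.coef_)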
The Complexities of Teachers' Commitment to Environmental Education: A Mixed Methods Approach
ERIC Educational Resources Information Center
Sosu, Edward M.; McWilliam, Angus; Gray, Donald S.
2008-01-01
This article argues that a mixed methods approach is useful in understanding the complexity that underlies teachers' commitment to environmental education. Using sequential and concurrent procedures, the authors demonstrate how different methodological approaches highlighted different aspects of teacher commitment. The quantitative survey examined…
Tfaily, Malak M; Chu, Rosalie K; Toyoda, Jason; Tolić, Nikola; Robinson, Errol W; Paša-Tolić, Ljiljana; Hess, Nancy J
2017-06-15
A vast number of organic compounds are present in soil organic matter (SOM) and play an important role in the terrestrial carbon cycle, facilitate interactions between organisms, and represent a sink for atmospheric CO2. The diversity of different SOM compounds and their molecular characteristics is a function of the organic source material and biogeochemical history. By understanding how SOM composition changes with sources and the processes by which it is biogeochemically altered in different terrestrial ecosystems, it may be possible to predict nutrient and carbon cycling, response to system perturbations, and the impact that climate change will have on SOM composition. In this study, a sequential chemical extraction procedure was developed to reveal the diversity of organic matter (OM) in different ecosystems and was compared to the previously published protocol using parallel solvent extraction (PSE). We compared six extraction methods using three sample types, peat soil, spruce forest soil and river sediment, so as to select the best method for extracting a representative fraction of organic matter from soils and sediments from a wide range of ecosystems. We estimated the extraction yield of dissolved organic carbon (DOC) by total organic carbon analysis, and measured the composition of extracted OM using high resolution mass spectrometry. This study showed that OM composition depends primarily on soil and sediment characteristics. Two sequential extraction protocols, progressing from polar to non-polar solvents, were found to provide the highest number and diversity of organic compounds extracted from the soil and sediments. Water (H2O) is the first solvent used for both protocols, followed by either co-extraction with a methanol-chloroform (MeOH-CHCl3) mixture, or acetonitrile (ACN) and CHCl3 sequentially. The sequential extraction protocol developed in this study offers improved sensitivity, and requires less sample compared to the PSE workflow where a new sample is used for each solvent type. Furthermore, a comparison of SOM composition from the different sample types revealed that our sequential protocol allows for ecosystem comparisons based on the diversity of compounds present, which in turn could provide new insights about source and processing of organic compounds in different soil and sediment types. Copyright © 2017 Elsevier B.V. All rights reserved.
Extreme Quantile Estimation in Binary Response Models
1990-03-01
ERIC Educational Resources Information Center
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
Simplified pupal surveys of Aedes aegypti (L.) for entomologic surveillance and dengue control.
Barrera, Roberto
2009-07-01
Pupal surveys of Aedes aegypti (L.) are useful indicators of risk for dengue transmission, although sample sizes for reliable estimations can be large. This study explores two methods for making pupal surveys more practical yet reliable and used data from 10 pupal surveys conducted in Puerto Rico during 2004-2008. The number of pupae per person for each sampling followed a negative binomial distribution, thus showing aggregation. One method found a common aggregation parameter (k) for the negative binomial distribution, a finding that enabled the application of a sequential sampling method requiring few samples to determine whether the number of pupae/person was above a vector density threshold for dengue transmission. A second approach used the finding that the mean number of pupae/person is correlated with the proportion of pupa-infested households and calculated equivalent threshold proportions of pupa-positive households. A sequential sampling program was also developed for this method to determine whether observed proportions of infested households were above threshold levels. These methods can be used to validate entomological thresholds for dengue transmission.
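To make the sequential-sampling idea concrete, here is a minimal Wald-type sequential sampling plan for negative-binomial pupal counts with a common aggregation parameter k, deciding whether the mean number of pupae per person lies above or below a transmission threshold. The means m0 and m1 bracketing the threshold, the value of k, the error rates, and the counts are all illustrative assumptions, not figures from the study.

    import numpy as np

    def nb_sprt(counts, m0, m1, k, alpha=0.05, beta=0.05):
        """Sequential test of H0: mean pupae/person = m0 vs H1: mean = m1,
        assuming negative binomial counts with common aggregation parameter k."""
        a = np.log(m1 * (k + m0) / (m0 * (k + m1)))   # per-pupa weight
        b = k * np.log((k + m0) / (k + m1))           # per-household weight
        upper = np.log((1 - beta) / alpha)            # decide "above threshold"
        lower = np.log(beta / (1 - alpha))            # decide "below threshold"
        llr = 0.0
        for n, x in enumerate(counts, start=1):
            llr += a * x + b
            if llr >= upper:
                return "above threshold", n
            if llr <= lower:
                return "below threshold", n
        return "continue sampling", len(counts)

    # Illustrative pupae counts from successive households; k and means assumed.
    print(nb_sprt([0, 2, 5, 1, 7, 4], m0=0.5, m1=2.0, k=0.3))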
Lee, Yu; Yu, Chanki; Lee, Sang Wook
2018-01-10
We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection with multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model. Nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.
A spatial scan statistic for multiple clusters.
Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie
2011-10-01
Spatial scan statistics are commonly used for geographical disease surveillance and cluster detection. When multiple clusters coexist in the study area, they become difficult to detect because of the clusters' shadowing effect on each other. The recently proposed sequential method showed better power for detecting the second, weaker cluster, but did not improve the ability to detect the first, stronger cluster, which is more important than the second. We propose a new extension of the spatial scan statistic which can be used to detect multiple clusters. By constructing two or more clusters in the alternative hypothesis, our proposed method accounts for other coexisting clusters in the detection and evaluation process. The performance of the proposed method is compared to the sequential method through an intensive simulation study, in which our proposed method shows better power in terms of both rejecting the null hypothesis and accurately detecting the coexisting clusters. In a real study of hand-foot-mouth disease data in Pingdu city, a true cluster town is successfully detected by our proposed method; this cluster could not be assessed as statistically significant by the standard method because of another cluster's shadowing effect. Copyright © 2011 Elsevier Inc. All rights reserved.
Identifying High-Rate Flows Based on Sequential Sampling
NASA Astrophysics Data System (ADS)
Zhang, Yu; Fang, Binxing; Luo, Hao
We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement and network security such as detection of distributed denial of service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possible millions of flows needs correspondingly large high speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost and processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on fixed sample size test (FSST) which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore the second novel method based on truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove the low-rate flows and identify the high-rate flows at the early stage which can reduce the memory cost and identification time respectively. According to the way to determine the parameters in TSPRT, two versions of TSPRT are proposed: TSPRT-M which is suitable when low memory cost is preferred and TSPRT-T which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement as compared to previously proposed methods.
Rosende, Maria; Savonina, Elena Yu; Fedotov, Petr S; Miró, Manuel; Cerdà, Víctor; Wennrich, Rainer
2009-09-15
Dynamic fractionation has been recognized as an appealing alternative to conventional equilibrium-based sequential extraction procedures (SEPs) for partitioning of trace elements (TE) in environmental solid samples. This paper reports the first attempt at harmonization of flow-through dynamic fractionation using two novel methods, the so-called sequential injection microcolumn (SIMC) extraction and rotating coiled column (RCC) extraction. In SIMC extraction, a column packed with the solid sample is clustered in a sequential injection system, while in RCC, the particulate matter is retained under the action of centrifugal forces. In both methods, the leachants are continuously pumped through the solid substrates by the use of either peristaltic or syringe pumps. A five-step SEP was selected for partitioning of Cu, Pb and Zn into water soluble/exchangeable, acid-soluble, easily reducible, easily oxidizable and moderately reducible fractions from 0.2 to 0.5 g samples at an extractant flow rate of 1.0 mL min(-1) prior to leachate analysis by inductively coupled plasma-atomic emission spectrometry. Similarities and discrepancies between both dynamic approaches were ascertained by fractionation of TE in certified reference materials, namely, SRM 2711 Montana Soil and GBW 07311 sediment, and two real soil samples as well. Notwithstanding the different extraction conditions set by both methods, similar trends of metal distribution were generally found. The most critical parameters for reliable assessment of mobilizable pools of TE in worst-case scenarios are the size distribution of sample particles, the density of particles, the content of organic matter and the concentration of major elements. For the reference materials and a soil rich in organic matter, extraction in the RCC results in slightly higher recoveries of environmentally relevant fractions of TE, whereas SIMC leaching is more effective for calcareous soils.
NASA Astrophysics Data System (ADS)
Murakami, H.; Chen, X.; Hahn, M. S.; Over, M. W.; Rockhold, M. L.; Vermeul, V.; Hammond, G. E.; Zachara, J. M.; Rubin, Y.
2010-12-01
Subsurface characterization for predicting groundwater flow and contaminant transport requires us to integrate large and diverse datasets in a consistent manner and to quantify the associated uncertainty. In this study, we sequentially assimilated multiple types of datasets for characterizing a three-dimensional heterogeneous hydraulic conductivity field at the Hanford 300 Area. The datasets included constant-rate injection tests, electromagnetic borehole flowmeter tests, lithology profiles and tracer tests. We used the method of anchored distributions (MAD), which is a modular-structured Bayesian geostatistical inversion method. MAD has two major advantages over other inversion methods. First, it can directly infer a joint distribution of parameters, which can be used as an input in stochastic simulations for prediction. In MAD, in addition to typical geostatistical structural parameters, the parameter vector includes multiple point values of the heterogeneous field, called anchors, which capture local trends and reduce uncertainty in the prediction. Second, MAD allows us to integrate the datasets sequentially in a Bayesian framework such that it updates the posterior distribution as each new dataset is included. The sequential assimilation can decrease the computational burden significantly. We applied MAD to assimilate different combinations of the datasets and then compared the inversion results. For the injection and tracer test assimilation, we calculated temporal moments of pressure build-up and breakthrough curves, respectively, to reduce the data dimension. The massively parallel flow and transport code PFLOTRAN was used for simulating the tracer test. For comparison, we used different metrics based on the breakthrough curves not used in the inversion, such as mean arrival time, peak concentration and early arrival time. This comparison is intended to yield the combined data worth, i.e., which combination of the datasets is most effective for a given metric, which will be useful for guiding further characterization efforts at the site as well as future characterization projects at other sites.
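A toy illustration of the sequential assimilation step (not the MAD inversion itself, and with invented forward models and observations): the posterior obtained after conditioning on one data summary becomes the prior for the next, here on a discretized one-parameter grid with Gaussian likelihoods.

    import numpy as np

    # Discretized prior over a single parameter (e.g., a log conductivity value).
    theta = np.linspace(-3, 3, 601)
    posterior = np.ones_like(theta) / len(theta)      # flat prior

    def assimilate(posterior, predict, observed, sigma):
        """One sequential Bayesian update: multiply by a Gaussian likelihood
        of the observed summary (e.g., a temporal moment) and renormalize."""
        like = np.exp(-0.5 * ((predict(theta) - observed) / sigma) ** 2)
        posterior = posterior * like
        return posterior / posterior.sum()

    # Hypothetical forward models mapping the parameter to data summaries.
    posterior = assimilate(posterior, lambda t: 2.0 * t, observed=1.0, sigma=0.5)  # injection test
    posterior = assimilate(posterior, lambda t: t ** 2, observed=0.3, sigma=0.2)   # tracer moment
    print(theta[np.argmax(posterior)])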
Lichtenhan, JT; Hartsock, J; Dornhoffer, JR; Donovan, KM; Salt, AN
2016-01-01
Background: Administering pharmaceuticals to the scala tympani of the inner ear is a common approach to study cochlear physiology and mechanics. We present here a novel method for in vivo drug delivery in a controlled manner to sealed ears. New method: Injections of ototoxic solutions were applied from a pipette sealed into a fenestra in the cochlear apex, progressively driving solutions along the length of scala tympani toward the cochlear aqueduct at the base. Drugs can be delivered rapidly or slowly. In this report we focus on slow delivery, in which the injection rate is automatically adjusted to account for the varying cross-sectional area of the scala tympani, therefore driving a solution front at a uniform rate. Results: Objective measurements originating from finely spaced, low- to high-characteristic cochlear frequency places were sequentially affected. Comparison with existing method(s): Controlled administration of pharmaceuticals into the cochlear apex overcomes a number of serious limitations of previously established methods such as cochlear perfusions with an injection pipette in the cochlear base: the drug concentration achieved is more precisely controlled, drug concentrations remain in scala tympani and are not rapidly washed out by cerebrospinal fluid flow, and the entire length of the cochlear spiral can be treated quickly or slowly with time. Conclusions: Controlled administration of solutions into the cochlear apex can be a powerful approach to sequentially affect objective measurements originating from finely spaced cochlear regions and allows, for the first time, the spatial origin of CAPs to be objectively defined. PMID:27506463
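A back-of-envelope sketch of the rate adjustment described above: if the solution front is to advance at a uniform speed v, the pump rate at each front position must equal v times the local scala tympani cross-sectional area, Q = v*A(x). The area profile and speed below are placeholder numbers, not measured values.

    import numpy as np

    # Hypothetical cross-sectional area of scala tympani (mm^2) vs distance from
    # the apex (mm); real profiles would come from anatomical measurements.
    distance_mm = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
    area_mm2    = np.array([0.15, 0.30, 0.55, 0.80, 1.10, 1.40])

    front_speed_mm_per_min = 1.5   # desired uniform speed of the solution front

    # Volume flow rate needed at each front position: Q = v * A(x).
    # 1 mm^3 = 1 microliter, so Q is in microliters per minute.
    flow_ul_per_min = front_speed_mm_per_min * area_mm2
    for x, q in zip(distance_mm, flow_ul_per_min):
        print(f"front at {x:4.1f} mm -> pump rate {q:5.2f} uL/min")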
Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks
Zhao, Rui; Yan, Ruqiang; Wang, Jinjiang; Mao, Kezhi
2017-01-01
In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM firstly uses CNN to extract local features that are robust and informative from the sequential input. Then, bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks (LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods. PMID:28146106
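A compact PyTorch sketch of the architecture described above (a 1-D convolution for local features, a bi-directional LSTM for past and future temporal context, stacked fully connected layers, and a linear regression output); the layer sizes, channel count and sequence length are illustrative choices, not the authors' configuration.

    import torch
    import torch.nn as nn

    class CBLSTM(nn.Module):
        # Layer sizes below are illustrative, not the published configuration.
        def __init__(self, n_channels=7, conv_out=32, lstm_hidden=64):
            super().__init__()
            # Local feature extraction from raw multichannel sensory input.
            self.conv = nn.Sequential(
                nn.Conv1d(n_channels, conv_out, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
            # Bi-directional LSTM encodes past and future temporal context.
            self.lstm = nn.LSTM(conv_out, lstm_hidden, batch_first=True,
                                bidirectional=True)
            # Stacked fully connected layers and a linear regression output.
            self.head = nn.Sequential(
                nn.Linear(2 * lstm_hidden, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, x):                 # x: (batch, time, channels)
            z = self.conv(x.transpose(1, 2))  # -> (batch, conv_out, time/2)
            z, _ = self.lstm(z.transpose(1, 2))
            return self.head(z[:, -1, :])     # predict tool wear from last step

    model = CBLSTM()
    dummy = torch.randn(8, 200, 7)            # 8 sequences, 200 steps, 7 sensors
    print(model(dummy).shape)                 # torch.Size([8, 1])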
Transportation forecasting : analysis and quantitative methods
DOT National Transportation Integrated Search
1983-01-01
This Record contains the following papers: Development of Survey Instruments Suitable for Determining Non-Home Activity Patterns; Sequential, History-Dependent Approach to Trip-Chaining Behavior; Identifying Time and History Dependencies of Activity ...
Gantry for medical particle therapy facility
Trbojevic, Dejan
2013-04-23
A particle therapy gantry for delivering a particle beam to a patient includes a beam tube having a curvature defining a particle beam path and a plurality of superconducting, variable field magnets sequentially arranged along the beam tube for guiding the particle beam along the particle path. In a method for delivering a particle beam to a patient through a gantry, a particle beam is guided by a plurality of variable field magnets sequentially arranged along a beam tube of the gantry and the beam is alternately focused and defocused with alternately arranged focusing and defocusing variable field magnets.
Gantry for medical particle therapy facility
Trbojevic, Dejan [Wading River, NY
2012-05-08
A particle therapy gantry for delivering a particle beam to a patient includes a beam tube having a curvature defining a particle beam path and a plurality of fixed field magnets sequentially arranged along the beam tube for guiding the particle beam along the particle path. In a method for delivering a particle beam to a patient through a gantry, a particle beam is guided by a plurality of fixed field magnets sequentially arranged along a beam tube of the gantry and the beam is alternately focused and defocused with alternately arranged focusing and defocusing fixed field magnets.
Shortreed, Susan M.; Moodie, Erica E. M.
2012-01-01
Summary Treatment of schizophrenia is notoriously difficult and typically requires personalized adaption of treatment due to lack of efficacy of treatment, poor adherence, or intolerable side effects. The Clinical Antipsychotic Trials in Intervention Effectiveness (CATIE) Schizophrenia Study is a sequential multiple assignment randomized trial comparing the typical antipsychotic medication, perphenazine, to several newer atypical antipsychotics. This paper describes the marginal structural modeling method for estimating optimal dynamic treatment regimes and applies the approach to the CATIE Schizophrenia Study. Missing data and valid estimation of confidence intervals are also addressed. PMID:23087488
Portfolio Development as a Three-Semester Process: The Value of Sequential Experience.
ERIC Educational Resources Information Center
Senne, Terry A.
This study examined how nine cohort teacher candidates from each of two physical education teacher education (PETE) programs developed teaching portfolios in three consecutive semesters of comparable courses: (1) elementary methods; (2) secondary methods; and (3) the student teaching internship. Studied were changes over time in teacher candidate…
Discovering the Sequential Structure of Thought
ERIC Educational Resources Information Center
Anderson, John R.; Fincham, Jon M.
2014-01-01
Multi-voxel pattern recognition techniques combined with Hidden Markov models can be used to discover the mental states that people go through in performing a task. The combined method identifies both the mental states and how their durations vary with experimental conditions. We apply this method to a task where participants solve novel…
Scalable Kernel Methods and Algorithms for General Sequence Analysis
ERIC Educational Resources Information Center
Kuksa, Pavel
2011-01-01
Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…
A sequential extraction and separation procedure that may be used in conjunction with a determinative method to differentiate mercury species that are present in soils and sediments. Provides information on both total mercury and various mercury species.
Support, Belonging, Motivation, and Engagement in the College Classroom: A Mixed Method Study
ERIC Educational Resources Information Center
Zumbrunn, Sharon; McKim, Courtney; Buhs, Eric; Hawley, Leslie R.
2014-01-01
This explanatory sequential mixed methods study examined how belonging perceptions, academic motivation, and engagement might mediate the relationship between academic contextual characteristics and achievement using structural equation modeling and qualitative follow-up interviews with college students from a large, Midwestern university. In the…
Teacher Perceptions of Principals' Leadership Qualities: A Mixed Methods Study
ERIC Educational Resources Information Center
Hauserman, Cal P.; Ivankova, Nataliya V.; Stick, Sheldon L.
2013-01-01
This mixed methods sequential explanatory study utilized the Multi-factor Leadership Questionnaire, responses to open-ended questions, and in-depth interviews to identify transformational leadership qualities that were present among principals in Alberta, Canada. The first quantitative phase consisted of a random sample of 135 schools (with…
1994-09-01
Methods. We arbitrarily divide sequential synthesis methods into the following four categories: (1) traditional Langmuir-Blodgett methods, (2) techniques... Films from small amphiphilic molecules, fabricated by Langmuir-Blodgett methods, have been extensively investigated... fabrication of Langmuir-Blodgett films. We shall see in the next section that photochemical reactions can be used to define buried channel waveguides
Lin, Lu; Wang, Yi-Ning; Kong, Ling-Yan; Jin, Zheng-Yu; Lu, Guang-Ming; Zhang, Zhao-Qi; Cao, Jian; Li, Shuo; Song, Lan; Wang, Zhi-Wei; Zhou, Kang; Wang, Ming
2013-01-01
Objective: To evaluate the image quality (IQ) and radiation dose of 128-slice dual-source computed tomography (DSCT) coronary angiography using a prospectively electrocardiogram (ECG)-triggered sequential scan mode compared with an ECG-gated spiral scan mode in a population with atrial fibrillation. Methods: Thirty-two patients with suspected coronary artery disease and permanent atrial fibrillation referred for second-generation 128-slice DSCT coronary angiography were included in the prospective study. Of them, 17 patients (sequential group) were randomly selected to use a prospectively ECG-triggered sequential scan, while the other 15 patients (spiral group) used a retrospectively ECG-gated spiral scan. The IQ was assessed by two readers independently, using a four-point grading scale from excellent (grade 1) to non-assessable (grade 4), based on the American Heart Association 15-segment model. IQ of each segment and effective dose of each patient were compared between the two groups. Results: The mean heart rate (HR) of the sequential group was 96±27 beats per minute (bpm) with a variation range of 73±25 bpm, while the mean HR of the spiral group was 86±22 bpm with a variation range of 65±24 bpm. Neither the mean HR (t=1.91, P=0.243) nor the HR variation range (t=0.950, P=0.350) differed significantly between the two groups. In per-segment analysis, IQ of the sequential group vs. spiral group was rated as excellent (grade 1) in 190/244 (78%) vs. 177/217 (82%) by reader 1 and 197/245 (80%) vs. 174/214 (81%) by reader 2, and as non-assessable (grade 4) in 4/244 (2%) vs. 2/217 (1%) by reader 1 and 6/245 (2%) vs. 4/214 (2%) by reader 2. Overall averaged IQ per patient in the sequential and spiral groups was equally good (1.27±0.19 vs. 1.25±0.22, Z=-0.834, P=0.404). The effective radiation dose of the sequential group was significantly reduced compared with the spiral group (4.88±1.77 mSv vs. 10.20±3.64 mSv; t=-5.372, P=0.000). Conclusion: Compared with a retrospectively ECG-gated spiral scan, prospectively ECG-triggered sequential DSCT coronary angiography provides similarly diagnostically valuable images in patients with atrial fibrillation and significantly reduces radiation dose.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besemer, A; Marsh, I; Bednarz, B
Purpose: The calculation of 3D internal dose distributions in targeted radionuclide therapy requires the acquisition and temporal coregistration of serial PET/CT or SPECT/CT images. This work investigates the dosimetric impact of different temporal coregistration methods commonly used for 3D internal dosimetry. Methods: PET/CT images of four mice were acquired at 1, 24, 48, 72, 96 and 144 hrs post-injection of 124I-CLR1404. The therapeutic 131I-CLR1404 absorbed dose rate (ADR) was calculated at each time point using a Geant4-based MC dosimetry platform and three temporal image coregistration methods: (1) no coregistration (NC), (2) whole-body sequential CT-CT affine coregistration (WBAC), and (3) individual sequential ROI-ROI affine coregistration (IRAC). For NC, only the ROI mean ADR was integrated to obtain ROI mean doses. For WBAC, the CT at each time point was coregistered to a single reference CT. The CT transformations were applied to the corresponding ADR images and the dose was calculated on a voxel basis within the whole CT volume. For IRAC, each individual ROI was isolated and sequentially coregistered to a single reference ROI. The ROI transformations were applied to the corresponding ADR images and the dose was calculated on a voxel basis within the ROI volumes. Results: The percent differences in the ROI mean doses were as large as 109%, 88%, and 32% when comparing the WBAC vs. IRAC, NC vs. IRAC, and NC vs. WBAC methods, respectively. The CoV in the mean dose between all three methods ranged from 2% to 36%. The pronounced curvature of the spinal cord was not adequately coregistered using WBAC, which resulted in large differences between the WBAC and IRAC methods. Conclusion: The method used for temporal image coregistration can result in large differences in 3D internal dosimetry calculations. Care must be taken to choose the most appropriate method depending on the imaging conditions, clinical site, and specific application. This work is partially funded by NIH Grant R21 CA198392-01.
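To illustrate the dose-rate integration step mentioned for the NC method (integrating a ROI mean absorbed dose rate sampled at the listed imaging time points), here is a minimal sketch using trapezoidal integration, with an assumed mono-exponential tail beyond the last time point; the dose-rate values are invented, not the study's data.

    import numpy as np

    # Imaging time points (hours post-injection) and a hypothetical ROI mean
    # absorbed dose rate (mGy/h) at each point.
    t_h = np.array([1.0, 24.0, 48.0, 72.0, 96.0, 144.0])
    adr = np.array([4.0, 2.6, 1.7, 1.1, 0.7, 0.3])

    # Trapezoidal integration over the imaging window, plus an exponential tail
    # beyond the last time point assuming continued mono-exponential clearance.
    dose_imaged = np.sum(0.5 * (adr[1:] + adr[:-1]) * np.diff(t_h))
    decay_rate = np.log(adr[-2] / adr[-1]) / (t_h[-1] - t_h[-2])  # per hour
    dose_tail = adr[-1] / decay_rate
    print(dose_imaged + dose_tail, "mGy (illustrative ROI mean dose)")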
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM 2.5 and PM −2.5.
Code of Federal Regulations, 2014 CFR
2014-07-01
... section. All reference method samplers shall be of single-filter design (not multi-filter, sequential sample design). Each candidate method shall be setup and operated in accordance with its associated... precision specified in table C-4 of this subpart. (g) Test for additive and multiplicative bias (comparative...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM 2.5 and PM −2.5.
Code of Federal Regulations, 2013 CFR
2013-07-01
... section. All reference method samplers shall be of single-filter design (not multi-filter, sequential sample design). Each candidate method shall be setup and operated in accordance with its associated... precision specified in table C-4 of this subpart. (g) Test for additive and multiplicative bias (comparative...
Estimating fish populations by removal methods with minnow traps in southeast Alaska streams.
M.D. Bryant
2002-01-01
Passive capture methods, such as minnow traps, are commonly used to capture fish for mark-recapture population estimates; however, they have not been used for removal methods. Minnow traps set for 90-min periods during three or four sequential capture occasions during the summer of 1996 were used to capture coho salmon Oncorhynchus kisutch fry and...
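For context, a minimal sketch of the kind of removal estimator such capture data feed into (a constant-capture-probability, Zippin-type model): the conditional multinomial likelihood of the catches over sequential occasions is maximized over the capture probability by a simple grid search, and abundance follows from the total catch. The catch numbers below are invented.

    import numpy as np

    def removal_estimate(catches):
        """Constant-p removal estimator for sequential capture occasions."""
        catches = np.asarray(catches, dtype=float)
        k, total = len(catches), catches.sum()
        p_grid = np.linspace(0.01, 0.99, 981)
        best_p, best_ll = None, -np.inf
        for p in p_grid:
            q = 1.0 - p
            # Conditional multinomial cell probabilities for each occasion.
            cell = p * q ** np.arange(k) / (1.0 - q ** k)
            ll = np.sum(catches * np.log(cell))
            if ll > best_ll:
                best_p, best_ll = p, ll
        n_hat = total / (1.0 - (1.0 - best_p) ** k)
        return n_hat, best_p

    # Illustrative coho fry catches from three sequential 90-min trap sets.
    print(removal_estimate([34, 19, 9]))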
ERIC Educational Resources Information Center
Deignan, Tim; Brown, Sally
2016-01-01
This article reports on an exploratory two-stage sequential mixed methods research study that investigated the views of university educators on the introduction of assessment methods other than essays, exams and dissertations within taught Masters programmes. In the first stage, interviews were conducted internationally with 45 participants and…
ERIC Educational Resources Information Center
Wauters, E.; Mathijs, E.
2013-01-01
Purpose: The aim of this article is to present and apply a method to investigate farmers' socio-psychological determinants of conservation practice adoption, as an aid in extension, policy and conservation practice design. Design/methodology/approach: We use a sequential mixed method, starting with qualitative semi-structured interviews (n = 24),…
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the improved VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of a high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the already acquired information on the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
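A generic sketch of sequential, uncertainty-driven sampling with a Gaussian-process surrogate, in the spirit of the active learning strategy described (this is not the ASM-IHK algorithm itself; the toy high-fidelity function, kernel, candidate grid and budget are all assumptions):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_hf_model(x):                 # stand-in for the high-fidelity code
        return np.sin(3 * x) + 0.4 * x

    X = np.array([[0.0], [0.5], [1.0]])        # small initial design
    y = expensive_hf_model(X).ravel()
    candidates = np.linspace(0, 1, 201).reshape(-1, 1)

    for _ in range(10):                        # sequential sampling loop
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)
        mean, std = gp.predict(candidates, return_std=True)
        x_new = candidates[np.argmax(std)]     # refine where the surrogate is least certain
        X = np.vstack([X, x_new])
        y = np.append(y, expensive_hf_model(x_new)[0])

    print(len(X), "high-fidelity samples used")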
Patterns and Sequences: Interactive Exploration of Clickstreams to Understand Common Visitor Paths.
Liu, Zhicheng; Wang, Yang; Dontcheva, Mira; Hoffman, Matthew; Walker, Seth; Wilson, Alan
2017-01-01
Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback.
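As a minimal illustration of the pattern level of this pipeline, the sketch below counts the support of candidate sequential patterns (ordered, not necessarily contiguous, event subsequences) across clickstream sessions; full maximal-pattern mining (e.g., PrefixSpan-style pattern growth with pruning) builds on this primitive. The session data and event names are invented.

    def is_subsequence(pattern, session):
        """True if the pattern events occur in the session in order (gaps allowed)."""
        it = iter(session)
        return all(event in it for event in pattern)

    def support(pattern, sessions):
        return sum(is_subsequence(pattern, s) for s in sessions) / len(sessions)

    # Hypothetical clickstream sessions (event names are invented).
    sessions = [
        ["home", "search", "product", "cart", "checkout"],
        ["home", "product", "search", "product", "cart"],
        ["search", "product", "home"],
    ]
    for pattern in [("search", "product"), ("product", "cart"), ("cart", "checkout")]:
        print(pattern, support(pattern, sessions))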
NASA Astrophysics Data System (ADS)
Granade, Christopher; Wiebe, Nathan
2017-08-01
A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from the inability of existing approaches to robustly deal with experiments that have different mechanisms that yield the results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles that comprise the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and a form of low circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or even fail completely.
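A schematic of the clustered-resampling idea on a one-dimensional toy posterior with two mechanisms that yield the datum with equal probability: particle weights are updated by the likelihood, the particles are clustered, and resampling is carried out within each cluster so that well-separated modes are not collapsed. This is not the authors' algorithm (which learns the shape and number of clusters automatically); the model, cluster count and sample sizes are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    particles = rng.uniform(-1.0, 1.0, size=1000)       # prior samples of a parameter
    weights = np.ones_like(particles) / particles.size

    def likelihood(theta, datum, sigma=0.1):
        # Toy model: two mechanisms yield the datum with equal probability.
        return 0.5 * np.exp(-0.5 * ((datum - theta) / sigma) ** 2) + \
               0.5 * np.exp(-0.5 * ((datum + theta) / sigma) ** 2)

    # Bayes update of the particle weights for one measurement.
    weights *= likelihood(particles, datum=0.6)
    weights /= weights.sum()

    # Cluster, then resample within each cluster (preserving cluster mass).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        particles.reshape(-1, 1), sample_weight=weights)
    new_particles = []
    for c in (0, 1):
        idx = np.where(labels == c)[0]
        mass = weights[idx].sum()
        draws = rng.choice(particles[idx], size=int(round(mass * particles.size)),
                           p=weights[idx] / mass)
        new_particles.append(draws)
    particles = np.concatenate(new_particles)
    weights = np.ones_like(particles) / particles.size
    print(particles.size, particles.mean())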
Sun, Zeyu; Hamilton, Karyn L.; Reardon, Kenneth F.
2014-01-01
We evaluated a sequential elution protocol from immobilized metal affinity chromatography (SIMAC) employing gallium-based immobilized metal affinity chromatography (IMAC) in conjunction with titanium-dioxide-based metal oxide affinity chromatography (MOAC). The quantitative performance of this SIMAC enrichment approach, assessed in terms of repeatability, dynamic range, and linearity, was evaluated using a mixture composed of tryptic peptides from caseins, bovine serum albumin, and phosphopeptide standards. While our data demonstrate the overall consistent performance of the SIMAC approach under various loading conditions, the results also revealed that the method had limited repeatability and linearity for most phosphopeptides tested, and different phosphopeptides were found to have different linear ranges. These data suggest that, unless additional strategies are used, SIMAC should be regarded as a semi-quantitative method when used in large-scale phosphoproteomics studies in complex backgrounds. PMID:24096195
Regeneration of strong-base anion-exchange resins by sequential chemical displacement
Brown, Gilbert M.; Gu, Baohua; Moyer, Bruce A.; Bonnesen, Peter V.
2002-01-01
A method for regenerating strong-base anion exchange resins utilizing a sequential chemical displacement technique with new regenerant formulation. The new first regenerant solution is composed of a mixture of ferric chloride, a water-miscible organic solvent, hydrochloric acid, and water in which tetrachloroferrate anion is formed and used to displace the target anions on the resin. The second regenerant is composed of a dilute hydrochloric acid and is used to decompose tetrachloroferrate and elute ferric ions, thereby regenerating the resin. Alternative chemical displacement methods include: (1) displacement of target anions with fluoroborate followed by nitrate or salicylate and (2) displacement of target anions with salicylate followed by dilute hydrochloric acid. The methodology offers an improved regeneration efficiency, recovery, and waste minimization over the conventional displacement technique using sodium chloride (or a brine) or alkali metal hydroxide.
Mining local climate data to assess spatiotemporal dengue fever epidemic patterns in French Guiana.
Flamand, Claude; Fabregue, Mickael; Bringay, Sandra; Ardillon, Vanessa; Quénel, Philippe; Desenclos, Jean-Claude; Teisseire, Maguelonne
2014-10-01
To identify local meteorological drivers of dengue fever in French Guiana, we applied an original data mining method to the available epidemiological and climatic data. Through this work, we also assessed the contribution of the data mining method to the understanding of factors associated with the dissemination of infectious diseases and their spatiotemporal spread. We applied contextual sequential pattern extraction techniques to epidemiological and meteorological data to identify the most significant climatic factors for dengue fever, and we investigated the relevance of the extracted patterns for the early warning of dengue outbreaks in French Guiana. The maximum temperature, minimum relative humidity, global brilliance, and cumulative rainfall were identified as determinants of dengue outbreaks, and the precise intervals of their values and variations were quantified according to the epidemiologic context. The strongest significant correlations were observed between dengue incidence and meteorological drivers after a 4-6-week lag. We demonstrated the use of contextual sequential patterns to better understand the determinants of the spatiotemporal spread of dengue fever in French Guiana. Future work should integrate additional variables and explore the notion of neighborhood for extracting sequential patterns. Dengue fever remains a major public health issue in French Guiana. The development of new methods to identify such specific characteristics becomes crucial in order to better understand and control spatiotemporal transmission. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Minimising back reflections from the common path objective in a fundus camera
NASA Astrophysics Data System (ADS)
Swat, A.
2016-11-01
Eliminating back reflections is critical in the design of a fundus camera with an internal illuminating system. As there is very little light reflected from the retina, even excellent antireflective coatings do not provide sufficient suppression of ghost reflections; therefore, the number of surfaces in the optics common to the illuminating and imaging paths shall be minimised. Typically a single aspheric objective is used. In the paper an alternative approach, an objective with all spherical surfaces, is presented. As more surfaces are required, a more sophisticated method is needed to get rid of back reflections. Typically, back-reflection analysis comprises treating successive objective surfaces as mirrors and tracing the reflections from the objective surfaces back through the imaging path. This approach can be applied in both sequential and non-sequential ray tracing. It is good enough for a system check but not very suitable for the early optimisation process in the optical system design phase. Standard ghost-control merit function operands are also available in sequential ray tracing, for example in the Zemax system, but these do not allow a back ray-trace along an alternative optical path, illumination vs. imaging. What is proposed in the paper is a complete method to incorporate ghost-reflected energy into the ray-tracing system merit function for sequential mode, which is more efficient in the optimisation process. Although developed for the specific case of a fundus camera, the method might be utilised in a wider range of applications where ghost control is critical.
NASA Astrophysics Data System (ADS)
Domec, Brennan S.
In today's industry, engineering materials are continuously pushed to the limits. Often, the application only demands high-specification properties in a narrowly-defined region of the material, such as the outermost surface. This, in combination with the economic benefits, makes case hardening an attractive solution to meet industry demands. While case hardening has been in use for decades, applications demanding high hardness, deep case depth, and high corrosion resistance are often under-served by this process. Instead, new solutions are required. The goal of this study is to develop and characterize a new borochromizing process applied to a pre-carburized AISI 8620 alloy steel. The process was successfully developed using a combination of computational simulations, calculations, and experimental testing. Process kinetics were studied by fitting case depth measurement data to Fick's Second Law of Diffusion and an Arrhenius equation. Results indicate that the kinetics of the co-diffusion method are unaffected by the addition of chromium to the powder pack. The results also show that significant structural degradation of the case occurs when chromizing is applied sequentially to an existing boronized case. The amount of degradation is proportional to the chromizing parameters. Microstructural evolution was studied using metallographic methods, simulation and computational calculations, and analytical techniques. While the co-diffusion process failed to enrich the substrate with chromium, significant enrichment is obtained with the sequential diffusion process. The amount of enrichment is directly proportional to the chromizing parameters with higher parameters resulting in more enrichment. The case consists of M7C3 and M23C6 carbides nearest the surface, minor amounts of CrB, and a balance of M2B. Corrosion resistance was measured with salt spray and electrochemical methods. These methods confirm the benefit of surface enrichment by chromium in the sequential diffusion method with corrosion resistance increasing directly with chromium concentration. The results also confirm the deleterious effect of surface-breaking case defects and the need to reduce or eliminate them. The best combination of microstructural integrity, mean surface hardness, effective case depth, and corrosion resistance is obtained in samples sequentially boronized and chromized at 870°C for 6hrs. Additional work is required to further optimize process parameters and case properties.
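To make the kinetics treatment concrete, here is a small sketch of the fitting approach described (parabolic case growth d = K*sqrt(t) from Fick's second law, with the growth constant K following an Arrhenius temperature dependence estimated from a linear fit of ln K against 1/T). The temperatures, times and case depths below are placeholders, not the measured data.

    import numpy as np

    R = 8.314  # J/(mol K)

    # Hypothetical boronized case depths (micrometres) at several temperatures
    # (kelvin) and times (hours).
    T_K   = np.array([1073.0, 1073.0, 1143.0, 1143.0, 1213.0, 1213.0])
    t_h   = np.array([2.0,    6.0,    2.0,    6.0,    2.0,    6.0])
    depth = np.array([38.0,   66.0,   70.0,   121.0,  118.0,  205.0])

    # Parabolic growth constant K = d / sqrt(t) at each condition, averaged over
    # times at the same temperature.
    K = depth / np.sqrt(t_h)
    K_by_T = {T: K[T_K == T].mean() for T in np.unique(T_K)}

    # Arrhenius fit: ln K = ln K0 - Q / (R T).
    invT = np.array([1.0 / T for T in K_by_T])
    lnK = np.log(np.array(list(K_by_T.values())))
    slope, intercept = np.polyfit(invT, lnK, 1)
    print("activation energy Q =", -slope * R / 1000.0, "kJ/mol")
    print("pre-exponential K0 =", np.exp(intercept), "um/sqrt(h)")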
Liu, Yang; Luo, Zhi-Qiang; Lv, Bei-Ran; Zhao, Hai-Yu; Dong, Ling
2016-04-01
The multiple components in Chinese herbal medicines (CHMS) will experience complex absorption and metabolism before entering the blood system. Previous studies often lay emphasis on the components in blood. However, the dynamic and sequential absorption and metabolism process following multi-component oral administration has not been studied. In this study, the in situ closed-loop method combined with LC-MS techniques were employed to study the sequential process of Chuanxiong Rhizoma decoction (RCD). A total of 14 major components were identified in RCD. Among them, ferulic acid, senkyunolide J, senkyunolide I, senkyunolide F, senkyunolide G, and butylidenephthalide were detected in all of the samples, indicating that the six components could be absorbed into blood in prototype. Butylphthalide, E-ligustilide, Z-ligustilide, cnidilide, senkyunolide A and senkyunolide Q were not detected in all the samples, suggesting that the six components may not be absorbed or metabolized before entering the hepatic portal vein. Senkyunolide H could be metabolized by the liver, while senkyunolide M could be metabolized by both liver and intestinal flora. This study clearly demonstrated the changes in the absorption and metabolism process following multi-component oral administration of RCD, so as to convert the static multi-component absorption process into a comprehensive dynamic and continuous absorption and metabolism process. Copyright© by the Chinese Pharmaceutical Association.
A proposed method to detect kinematic differences between and within individuals.
Frost, David M; Beach, Tyson A C; McGill, Stuart M; Callaghan, Jack P
2015-06-01
The primary objective was to examine the utility of a novel method of detecting "actual" kinematic changes using the within-subject variation. Twenty firefighters were assigned to one of two groups (lifting or firefighting). Participants performed 25 repetitions of two lifting or firefighting tasks in three sessions. The magnitude and within-subject variation of several discrete kinematic measures were computed. Sequential averages of each variable were used to derive cubic, quadratic and linear regression equations. The efficacy of each equation was examined by contrasting participants' sequential means to their 25-trial mean ± 1 SD and ± 2 SD. The magnitude and within-subject variation of each dependent measure were repeatable for all tasks; however, not every participant exhibited the same movement patterns as the group. The number of instances across all variables, tasks and testing sessions whereby the 25-trial mean ± 1 SD was contained within the boundaries established by the regression equations increased as the aggregate scores included more trials. Each equation achieved success in at least 88% of all instances when three trials were included in the sequential mean (95% with five trials). The within-subject variation may offer a means to examine participant-specific changes without having to collect a large number of trials. Copyright © 2015 Elsevier Ltd. All rights reserved.
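A compact numerical sketch of the comparison described above: cumulative (sequential) means over 25 trials, linear/quadratic/cubic fits through them, and a check of whether the fitted value lies within the 25-trial mean ± 1 SD. The trial data are simulated placeholders, not the firefighters' measurements.

    import numpy as np

    rng = np.random.default_rng(1)
    trials = rng.normal(loc=52.0, scale=4.0, size=25)   # simulated placeholder values (e.g., peak flexion, deg)

    full_mean, full_sd = trials.mean(), trials.std(ddof=1)
    seq_mean = np.cumsum(trials) / np.arange(1, 26)      # sequential averages

    n = np.arange(1, 26)
    for degree, name in [(1, "linear"), (2, "quadratic"), (3, "cubic")]:
        coef = np.polyfit(n, seq_mean, degree)
        predicted = np.polyval(coef, 25)                 # projected 25-trial value
        within = abs(predicted - full_mean) <= full_sd
        print(f"{name}: fitted value = {predicted:.2f}, within mean ± 1 SD: {within}")

    # How many trials until the sequential mean itself stays within ± 1 SD:
    stable_from = np.argmax(np.abs(seq_mean - full_mean) <= full_sd) + 1
    print("sequential mean enters the ± 1 SD band at trial", stable_from)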
Takita, Eiji; Kohda, Katsunori; Tomatsu, Hajime; Hanano, Shigeru; Moriya, Kanami; Hosouchi, Tsutomu; Sakurai, Nozomu; Suzuki, Hideyuki; Shinmyo, Atsuhiko; Shibata, Daisuke
2013-01-01
Ligation, the joining of DNA fragments, is a fundamental procedure in molecular cloning and is indispensable to the production of genetically modified organisms that can be used for basic research, the applied biosciences, or both. Given that many genes cooperate in various pathways, incorporating multiple gene cassettes in tandem in a transgenic DNA construct for the purpose of genetic modification is often necessary when generating organisms that produce multiple foreign gene products. Here, we describe a novel method, designated PRESSO (precise sequential DNA ligation on a solid substrate), for the tandem ligation of multiple DNA fragments. We amplified donor DNA fragments with non-palindromic ends, and ligated the fragment to acceptor DNA fragments on solid beads. After the final donor DNA fragments, which included vector sequences, were joined to the construct that contained the array of fragments, the ligation product (the construct) was thereby released from the beads via digestion with a rare-cut meganuclease; the freed linear construct was circularized via an intra-molecular ligation. PRESSO allowed us to rapidly and efficiently join multiple genes in an optimized order and orientation. This method can overcome many technical challenges in functional genomics during the post-sequencing generation. PMID:23897972
Multigrid methods in structural mechanics
NASA Technical Reports Server (NTRS)
Raju, I. S.; Bigelow, C. A.; Taasan, S.; Hussaini, M. Y.
1986-01-01
Although the application of multigrid methods to the equations of elasticity has been suggested, few such applications have been reported in the literature. In the present work, multigrid techniques are applied to the finite element analysis of a simply supported Bernoulli-Euler beam, and various aspects of the multigrid algorithm are studied and explained in detail. In this study, six grid levels were used to model half the beam. With linear prolongation and sequential ordering, the multigrid algorithm yielded results which were of machine accuracy with work equivalent to 200 standard Gauss-Seidel iterations on the fine grid. Also with linear prolongation and sequential ordering, the V(1,n) cycle with n greater than 2 yielded better convergence rates than the V(n,1) cycle. The restriction and prolongation operators were derived based on energy principles. Conserving energy during the inter-grid transfers required that the prolongation operator be the transpose of the restriction operator, and led to improved convergence rates. With energy-conserving prolongation and sequential ordering, the multigrid algorithm yielded results of machine accuracy with a work equivalent to 45 Gauss-Seidel iterations on the fine grid. The red-black ordering of relaxations yielded solutions of machine accuracy in a single V(1,1) cycle, which required work equivalent to about 4 iterations on the finest grid level.
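The cycle structure described above can be illustrated with a short sketch. The following Python code implements a textbook V(1,2) cycle for a 1D Poisson model problem rather than the Bernoulli-Euler beam equation; the grid size, smoothing counts and transfer operators (full weighting as the scaled transpose of linear interpolation) are illustrative assumptions, not the authors' solver:

```python
import numpy as np

def smooth(u, f, h, sweeps):
    """Lexicographic Gauss-Seidel sweeps for -u'' = f with zero Dirichlet BCs."""
    for _ in range(sweeps):
        for i in range(1, u.size - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2.0 * u[1:-1] - u[2:]) / h**2
    return r

def restrict(r):
    """Full weighting: the (scaled) transpose of linear interpolation."""
    rc = np.zeros(r.size // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec, n_fine):
    """Linear interpolation from the coarse grid to the fine grid."""
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h, nu1=1, nu2=2):
    if u.size <= 3:                       # coarsest grid: single unknown, solve directly
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, f, h, nu1)              # pre-smoothing (V(1,2) as in the text)
    rc = restrict(residual(u, f, h))      # restrict the residual
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    u += prolong(ec, u.size)              # coarse-grid correction
    return smooth(u, f, h, nu2)           # post-smoothing

# Example: -u'' = pi^2 sin(pi x) on [0, 1], exact solution sin(pi x)
n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for cycle in range(10):
    u = v_cycle(u, f, 1.0 / n)
    print(cycle, np.max(np.abs(u - np.sin(np.pi * x))))
```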
Phosphorus Concentrations in Sequentially Fractionated Soil Samples as Affected by Digestion Methods
do Nascimento, Carlos A. C.; Pagliari, Paulo H.; Schmitt, Djalma; He, Zhongqi; Waldrip, Heidi
2015-01-01
Sequential fractionation has helped improve our understanding of the lability and bioavailability of P in soil. Nevertheless, there have been no reports on how manipulation of the different fractions prior to analysis affects the total P (TP) concentrations measured. This study investigated the effects of sample digestion, filtration, and acidification on the TP concentrations determined by ICP-OES in 20 soil samples. TP in the extracts was determined by ICP-OES either without digestion, following block digestion, or following autoclave digestion. The effects of sample filtration and acidification of undigested alkaline extracts prior to ICP-OES were also evaluated. Results showed that TP concentrations were greatest in the block-digested extracts, although the variability introduced by block digestion was also the highest. Acidification of NaHCO3 extracts resulted in lower TP concentrations, while acidification of NaOH extracts randomly increased or decreased TP concentrations. The precision observed with ICP-OES of undigested extracts suggests this should be the preferred method for TP determination in sequentially extracted samples. The observations reported in this work should therefore aid appropriate sample handling for P determination, thereby improving the precision of P determination. The results are also useful for comparing and discussing literature data when sample treatments differ. PMID:26647644
Shen, J Q; Ji, Q; Ding, W J; Xia, L M; Wei, L; Wang, C S
2018-03-13
Objective: To evaluate in-hospital and mid-term outcomes of sequential versus separate grafting of in situ skeletonized left internal mammary artery (LIMA) to the left coronary system in a single-center, propensity-matched study. Methods: After propensity score matching, 120 pairs of patients undergoing first, scheduled, isolated coronary artery bypass grafting (CABG) with in situ skeletonized LIMA grafting to the left anterior descending artery (LAD) territory were entered into a sequential group (sequential grafting of LIMA to the diagonal artery and then to the LAD) or a control group (separate grafting of LIMA to the LAD). The in-hospital and follow-up clinical outcomes and follow-up LIMA graft patency were compared. Results: The two propensity score-matched groups had similar in-hospital and follow-up clinical outcomes. The number of bypass conduits ranged from 3 to 6 (mean 3.5), and 91.3% (219/240) of the included patients received off-pump CABG surgery. No significant differences were found between the two propensity score-matched groups in the in-hospital outcomes, including in-hospital death and the incidence of complications associated with CABG (prolonged ventilation, perioperative stroke, re-operation before discharge, and deep sternal wound infection). During follow-up, 9 patients (4 from the sequential group and 5 from the control group) died; the all-cause mortality rate was 3.9%. No significant difference was found in the all-cause mortality rate between the two groups [3.4% (4/116) vs 4.3% (5/115), P = 0.748]. During the follow-up period, coronary computed tomographic angiography showed 99.1% (115/116) patency at the diagonal site and 98.3% (114/116) at the LAD site after sequential LIMA grafting, both of which were similar to the graft patency of separate grafting of in situ skeletonized LIMA to the LAD. Conclusions: Revascularization of the left coronary system using sequential grafting of a skeletonized LIMA resulted in excellent in-hospital and mid-term clinical outcomes and graft patency.
Colligan, Lacey; Anderson, Janet E; Potts, Henry W W; Berman, Jonathan
2010-01-07
Many quality and safety improvement methods in healthcare rely on a complete and accurate map of the process. Process mapping in healthcare is often achieved using a sequential flow diagram, but there is little guidance available in the literature about the most effective type of process map to use. Moreover, there is evidence that the organisation of information in an external representation affects reasoning and decision making. This exploratory study examined whether the type of process map, sequential or hierarchical, affects healthcare practitioners' judgments. A sequential and a hierarchical process map of a community-based anticoagulation clinic were produced based on data obtained from interviews, talk-throughs, attendance at a training session and examination of protocols and policies. Clinic practitioners were asked to specify the parts of the process that they judged to contain quality and safety concerns. The process maps were then shown to them in counter-balanced order and they were asked to circle on the diagrams the parts of the process where they had the greatest quality and safety concerns. A structured interview was then conducted, in which they were asked about various aspects of the diagrams. Quality and safety concerns cited by practitioners differed depending on whether they were or were not looking at a process map, and whether they were looking at a sequential diagram or a hierarchical diagram. More concerns were identified using the hierarchical diagram compared with the sequential diagram, and more concerns were identified in relation to clinical work than administrative work. Participants' preference for the sequential or hierarchical diagram depended on the context in which they would be using it. The difficulties of determining the boundaries for the analysis and the granularity required were highlighted. The results indicated that the layout of a process map does influence perceptions of quality and safety problems in a process. In quality improvement work it is important to carefully consider the type of process map to be used and to consider using more than one map to ensure that different aspects of the process are captured.
Garey, Lorra; Cheema, Mina K; Otal, Tanveer K; Schmidt, Norman B; Neighbors, Clayton; Zvolensky, Michael J
2016-10-01
Smoking rates are markedly higher among trauma-exposed individuals relative to non-trauma-exposed individuals. Extant work suggests that both perceived stress and negative affect reduction smoking expectancies are independent mechanisms that link trauma-related symptoms and smoking. Yet, no work has examined perceived stress and negative affect reduction smoking expectancies as potential explanatory variables for the relation between trauma-related symptom severity and smoking in a sequential pathway model. Methods: The present study utilized a sample of treatment-seeking, trauma-exposed smokers (n = 363; 49.0% female) to examine perceived stress and negative affect reduction expectancies for smoking as potential sequential explanatory variables linking trauma-related symptom severity and nicotine dependence, perceived barriers to smoking cessation, and severity of withdrawal-related problems and symptoms during past quit attempts. As hypothesized, perceived stress and negative affect reduction expectancies had a significant sequential indirect effect on trauma-related symptom severity and criterion variables. Findings further elucidate the complex pathways through which trauma-related symptoms contribute to smoking behavior and cognitions, and highlight the importance of addressing perceived stress and negative affect reduction expectancies in smoking cessation programs among trauma-exposed individuals. (Am J Addict 2016;25:565-572). © 2016 American Academy of Addiction Psychiatry.
Galletly, Cherrie A; Carnell, Benjamin L; Clarke, Patrick; Gill, Shane
2017-03-01
A great deal of research has established the efficacy of repetitive transcranial magnetic stimulation (rTMS) in the treatment of depression. However, questions remain about the optimal method to deliver treatment. One area requiring consideration is the difference in efficacy between bilateral and unilateral treatment protocols. This study aimed to compare the effectiveness of sequential bilateral rTMS and right unilateral rTMS. A total of 135 patients participated in the study, receiving either bilateral rTMS (N = 57) or right unilateral rTMS (N = 78). Treatment response was assessed using the Hamilton depression rating scale. Sequential bilateral rTMS had a higher response rate than right unilateral (43.9% vs 30.8%), but this difference was not statistically significant. This was also the case for remission rates (33.3% vs 21.8%, respectively). Controlling for pretreatment severity of depression, the results did not indicate a significant difference between the protocols with regard to posttreatment Hamilton depression rating scale scores. The current study found no statistically significant differences in response and remission rates between sequential bilateral rTMS and right unilateral rTMS. Given the shorter treatment time and the greater safety and tolerability of right unilateral rTMS, this may be a better choice than bilateral treatment in clinical settings.
Multigrid methods with space–time concurrency
Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...
2017-10-06
Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.
NASA Astrophysics Data System (ADS)
Vimmrová, Alena; Kočí, Václav; Krejsová, Jitka; Černý, Robert
2016-06-01
A method for lightweight-gypsum material design using waste stone dust as the foaming agent is described. The main objective is to reach target values of several physical properties that are inversely related to one another. Therefore, a linear optimization method is applied to handle this task systematically. The optimization process is based on sequential measurement of physical properties. The results are subsequently point-awarded according to a complex point criterion, and a new composition is proposed. After 17 trials the final mixture is obtained, having a bulk density of (586 ± 19) kg/m3 and a compressive strength of (1.10 ± 0.07) MPa. According to a detailed comparative analysis with reference gypsum, the newly developed material can be used as an excellent thermally insulating interior plaster with a thermal conductivity of (0.082 ± 0.005) W/(m·K). In addition, its practical application can bring substantial economic and environmental benefits as the material contains 25 % of waste stone dust.
Inferring Interaction Force from Visual Information without Using Physical Force Sensors.
Hwang, Wonjun; Lim, Soo-Chul
2017-10-26
In this paper, we present an interaction force estimation method that uses visual information rather than a force sensor. Specifically, we propose a novel deep learning-based method utilizing only sequential images for estimating the interaction force against a target object whose shape is changed by the external force. The force applied to the target can be estimated from the visual shape changes. However, the shape differences between images are not very distinct. To address this problem, we formulate a recurrent neural network-based deep model with fully-connected layers, which models complex temporal dynamics from the visual representations. Extensive evaluations show that the proposed learning models successfully estimate the interaction forces using only the corresponding sequential images, in particular for objects made of different materials: a sponge, a PET bottle, a human arm, and a tube. The forces predicted by the proposed method are very similar to those measured by force sensors.
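A minimal sketch of such a recurrent regressor is given below, assuming PyTorch, precomputed per-frame visual feature vectors and placeholder dimensions; it is not the authors' architecture or training pipeline:

```python
import torch
import torch.nn as nn

class ForceFromFrames(nn.Module):
    """Toy recurrent regressor: per-frame visual features -> interaction force."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=feat_dim, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                     # x: (batch, time, feat_dim)
        h, _ = self.rnn(x)                    # h: (batch, time, hidden)
        return self.head(h).squeeze(-1)       # predicted force per frame

# One dummy training step with synthetic data (stand-in for image features and force-sensor labels)
model = ForceFromFrames()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
feats = torch.randn(8, 50, 128)               # 8 sequences, 50 frames each
force = torch.randn(8, 50)                    # ground-truth force used only for training
loss = nn.functional.mse_loss(model(feats), force)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```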
NASA Technical Reports Server (NTRS)
Cohn, S. E.
1982-01-01
Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; for such models, the optimal combined data assimilation-initialization method is a modified version of the KB filter.
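For concreteness, a minimal sketch of one forecast/analysis cycle of a discrete Kalman filter for a linear stochastic model is shown below; the matrices and noise levels are illustrative, not a meteorological model:

```python
import numpy as np

def kalman_step(x, P, z, M, H, Q, R):
    """One forecast/analysis cycle of the discrete Kalman filter.

    x, P : prior state estimate and its error covariance
    z    : observation vector
    M    : linear model (state transition) matrix
    H    : observation operator
    Q, R : model-error and observation-error covariances
    """
    # Forecast step
    x_f = M @ x
    P_f = M @ P @ M.T + Q
    # Analysis (update) step
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_a = x_f + K @ (z - H @ x_f)
    P_a = (np.eye(P.shape[0]) - K @ H) @ P_f
    return x_a, P_a

# Tiny example: a 2-variable linear model observed only in its first component
M = np.array([[0.98, 0.10], [0.0, 0.95]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.05]])
x, P = np.zeros(2), np.eye(2)
for z in [np.array([1.0]), np.array([1.1]), np.array([0.9])]:
    x, P = kalman_step(x, P, z, M, H, Q, R)
print(x)
```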
NASA Astrophysics Data System (ADS)
Peña-Vázquez, E.; Barciela-Alonso, M. C.; Pita-Calvo, C.; Domínguez-González, R.; Bermejo-Barrera, P.
2015-09-01
The objective of this work is to develop a method for the determination of metals in saline matrices using high-resolution continuum source flame atomic absorption spectrometry (HR-CS FAAS). The SFS 6 module for sample injection was used in manual mode, and flame operating conditions were selected. The main absorption lines were used for all the elements, and the number of selected analytical pixels was 5 (CP±2) for Cd, Cu, Fe, Ni, Pb and Zn, and 3 (CP±1) for Mn. Samples were acidified (0.5% (v/v) nitric acid), and the standard addition method was used for the sequential determination of the analytes in diluted samples (1:2). The method showed good precision (RSD < 4%, except for Pb (6.5%)) and good recoveries. Accuracy was checked by analysis of an SPS-WW2 wastewater reference material diluted with synthetic seawater (1:2 dilution), showing good agreement between certified and experimental results.
Binary tree eigen solver in finite element analysis
NASA Technical Reports Server (NTRS)
Akl, F. A.; Janetzke, D. C.; Kiraly, L. J.
1993-01-01
This paper presents a transputer-based binary tree eigensolver for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on the method of recursive doubling, in which the parallel implementation of an associative operation over an arbitrary set of N elements requires on the order of O(log2 N) steps, compared with (N-1) steps when implemented sequentially. The hardware used in the implementation of the binary tree consists of 32 transputers. The algorithm is written in OCCAM, a high-level language developed with the transputer to support parallel programming constructs and provide communication between processors. The algorithm can be replicated to match the size of the binary tree transputer network. Parallel and sequential finite element analysis programs have been developed to solve for the set of lowest-order eigenpairs using the modified subspace method. The speed-up obtained for a typical analysis problem indicates close agreement with the theoretical prediction given by the method of recursive doubling.
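The recursive-doubling idea can be sketched in a few lines. The following Python example (a serial simulation of the parallel scan, not the OCCAM transputer code) computes prefix sums of an associative operation in ceil(log2 N) combining rounds instead of N-1 sequential steps:

```python
import numpy as np

def recursive_doubling_scan(values):
    """Inclusive prefix sum by recursive doubling (Hillis-Steele scan).

    Each of the log2(N) rounds combines elements a distance 2^k apart; on a
    binary tree of processors every round is a single parallel step, versus
    N-1 steps for a purely sequential scan.
    """
    x = np.array(values, dtype=float)
    shift, rounds = 1, 0
    while shift < x.size:
        x[shift:] = x[shift:] + x[:-shift]   # all positions updated "in parallel"
        shift *= 2
        rounds += 1
    return x, rounds

prefix, rounds = recursive_doubling_scan(range(1, 33))   # N = 32 elements
print(prefix[-1], rounds)   # total 528 after ceil(log2(32)) = 5 rounds
```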
An exploratory sequential design to validate measures of moral emotions.
Márquez, Margarita G; Delgado, Ana R
2017-05-01
This paper presents an exploratory and sequential mixed methods approach to validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase, when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
Objective: This article presents a new computerized scheme that aims to accurately and robustly separate the left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeting especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming and avoided permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between the left and right lungs, and the guided dynamic programming algorithm removes redundant processing. PMID:21412104
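A toy sketch of the dynamic programming step is shown below: a minimal-cost top-to-bottom path through a 2D cost image, constrained by given start and end columns, stands in for the guided search for the separation boundary. The cost image, step constraint and column choices are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def min_cost_vertical_path(cost, start_col, end_col, max_shift=1):
    """Minimal-cost top-to-bottom path through a 2D cost image.

    The path is forced to start and end at given columns (the "guidance")
    and to move at most `max_shift` columns between consecutive rows.
    """
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    acc[0, start_col] = cost[0, start_col]
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - max_shift), min(cols, c + max_shift + 1)
            prev = acc[r - 1, lo:hi]
            j = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[j]
            back[r, c] = lo + j
    # Trace the path back from the required end column
    path = [end_col]
    for r in range(rows - 1, 0, -1):
        path.append(back[r, path[-1]])
    return list(reversed(path)), acc[rows - 1, end_col]

# Toy example: bright (high-cost) lung fields with a darker junction column
img = np.ones((6, 7))
img[:, 3] = 0.1                      # cheap column where the two lungs touch
path, total = min_cost_vertical_path(img, start_col=3, end_col=3)
print(path, total)
```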
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, Matthew J.; Carter, Jennifer C.; Maclellan, Jay A.
2011-08-01
In the event of an accidental or intentional release of radionuclides into a populated area, three things must occur in a timely manner: food and drinking water supplies must be determined to be safe to eat/drink, civilians and/or military personnel must be surveyed to ensure that they do not have external contamination, and they must be screened to ensure that significant ingestion or inhalation of radionuclides has not occurred (this paper is concerned with the latter). In the event of such a disaster, the volume of radiobioassays to be performed would be tremendous. If the event released significant levels of β- or α-emitting radionuclides, in vivo assays would be ineffective. Therefore, highly efficient and rapid analytical methods for radionuclide detection from submitted spot urine samples (≤ 50 mL) would be required. At present, the quantitative determination of α-emitting radionuclides from urine samples is highly labor intensive, and requires significant sample preparation and analysis time. Sorbent materials that provide effective collection and enable rapid assay could significantly streamline the radioanalytical process. We have demonstrated the use of paramagnetic nanoparticles as a novel class of extracting media for four α-emitting radionuclides of concern (Po, Ra, Am, and U) from chemically unmodified and pH 2 human urine. Herein the initial experimental sorption results are presented along with a novel method that utilizes paramagnetic nanoparticles for the extraction of radionuclides from unmodified human urine followed by the magnetic field-induced collection of the particles for subsequent α-counting-source preparation. Additionally, we construct a versatile human dose model that determines the detector count times required to estimate internal human dose at specific protective action thresholds. The model provides a means to assess a method's detection capabilities and uses fundamental health physics parameters and actual experimental data as core variables. The modeling shows that with effective sorbent materials, rapid screening for internalized α-emitters is possible from a 50 mL spot urine sample volume collected within one week of exposure/intake.
Quantitation of lead-210 (210Pb) using lead-203 (203Pb) as a "Massless" yield tracer.
May, D; Nelson, A N; Schultz, M K
2017-05-01
Determination of Pb-210 (210Pb) in aqueous solution is a common radioanalytical challenge in environmental science. Widely used methods for these analyses (e.g., ASTM D7535) rely on the use of stable lead (Pb) as a yield tracer to account for losses of 210Pb that inevitably occur during the elemental/radiochemical separations of the procedures. Although effective, these methods introduce technical challenges that can be difficult to track and potentially introduce uncertainty that can be difficult to quantify. Examples of these challenges include interference from endogenous stable Pb in complex sample matrices, contamination of the stable Pb carrier with 210Pb, and high detection limits due to counting efficiency limitations. We hypothesized that many of these challenges could be avoided by using the electron-capture, gamma-emitting isotope 203Pb as a chemical yield tracer in the analysis of 210Pb. A series of experiments was performed to evaluate the efficacy of 203Pb as a tracer. Four different matrices were analyzed, including a complex matrix (hydraulic-fracturing produced fluids) and samples comprising less complicated matrices (river water, deionized water, and tap water). Separation techniques and counting methodologies were also compared and optimized. Due to its relatively short half-life (52 h), the 203Pb tracer is effectively massless for the purposes of chemical separations, allowing for reduced chromatography column resin bed volumes. Because 203Pb is a gamma emitter (279 keV; 81% intensity), recovery can be determined non-destructively in a variety of matrices, including liquid scintillation cocktail. The use of liquid scintillation as a counting methodology allowed for determination of 210Pb activities via 210Pb or 210Po, and recoveries of greater than 90% are routinely achievable with this approach. The improved method for the analysis of 210Pb in aqueous matrices allows for the analysis of complex matrices at reduced cost, while providing greater counting flexibility in achieving acceptable detection limits. Copyright © 2017 Elsevier Ltd. All rights reserved.
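A short back-of-the-envelope calculation illustrates why a 203Pb spike is effectively massless and how a recovery measurement would be decay-corrected; the spike activity, counting delay and the rounded 52 h half-life are assumed values for illustration only:

```python
import numpy as np

T_HALF_203PB_H = 52.0                              # approximate half-life of Pb-203, hours
LAMBDA = np.log(2) / (T_HALF_203PB_H * 3600.0)     # decay constant, s^-1

activity_bq = 1000.0                               # assumed tracer spike added to a sample
atoms = activity_bq / LAMBDA                       # N = A / lambda
mass_g = atoms * 203.0 / 6.022e23                  # grams of Pb-203 in the spike

# Decay-correct a recovery measurement counted 24 h after the spike
elapsed_h = 24.0
fraction_remaining = np.exp(-LAMBDA * elapsed_h * 3600.0)

print(f"atoms in spike: {atoms:.2e}")
print(f"mass of Pb-203: {mass_g:.2e} g")           # on the order of 1e-13 g, i.e. 'massless'
print(f"fraction remaining after {elapsed_h} h: {fraction_remaining:.3f}")
```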
ERIC Educational Resources Information Center
Crede, Erin; Borrego, Maura
2013-01-01
As part of a sequential exploratory mixed methods study, 9 months of ethnographically guided observations and interviews were used to develop a survey examining graduate engineering student retention. Findings from the ethnographic fieldwork yielded several themes, including international diversity, research group organization and climate,…
A Study on the Spatial Abilities of Prospective Social Studies Teachers: A Mixed Method Research
ERIC Educational Resources Information Center
Yurt, Eyüp; Tünkler, Vural
2016-01-01
This study investigated prospective social studies teachers' spatial abilities. It was conducted with 234 prospective teachers attending Social Studies Teaching departments at Education Faculties of two universities in Central and Southern Anatolia. This study, designed according to the explanatory-sequential design, is a mixed research method,…
Toward a Better Understanding of Student Perceptions of Writing Feedback: A Mixed Methods Study
ERIC Educational Resources Information Center
Zumbrunn, Sharon; Marrs, Sarah; Mewborn, Caitlin
2016-01-01
This explanatory sequential mixed methods study investigated the writing feedback perceptions of middle and high school students (N = 598). The predictive and mediational roles of writing self-efficacy and perceptions of writing feedback on student writing self-regulation aptitude were examined using mediation regression analysis. To augment the…
ERIC Educational Resources Information Center
LaSota, Robin Rae
2013-01-01
My dissertation utilizes an explanatory, sequential mixed-methods research design to assess factors influencing community college students' transfer probability to baccalaureate-granting institutions and to present promising practices in colleges and states directed at improving upward transfer, particularly for low-income and first-generation…
Implementing a Flipped Classroom Approach in a University Numerical Methods Mathematics Course
ERIC Educational Resources Information Center
Johnston, Barbara M.
2017-01-01
This paper describes and analyses the implementation of a "flipped classroom" approach, in an undergraduate mathematics course on numerical methods. The approach replaced all the lecture contents by instructor-made videos and was implemented in the consecutive years 2014 and 2015. The sequential case study presented here begins with an…
Behavioral Talk-Write as a Method for Teaching Technical Editing.
ERIC Educational Resources Information Center
Gilbertsen, Michael; Killingsworth, M. Jimmie
1987-01-01
Presents a process-oriented method for teachers of stylistic editing workshops that allows them to (1) focus on individual students, (2) start with students' basic repertory of responses and build from there, (3) work with freely emitted behavior, (4) ensure frequent and brief responses, and (5) achieve desired behavior through sequential steps.…
16 CFR 1500.41 - Method of testing primary irritant substances.
Code of Federal Regulations, 2014 CFR
2014-01-01
... corrosivity properties of substances, including testing that does not require animals, are presented in the CPSC's animal testing policy set forth in 16 CFR 1500.232. A weight-of-evidence analysis or a validated... conducted, a sequential testing strategy is recommended to reduce the number of test animals. The method of...
Technology Adoption in Secondary Mathematics Teaching in Kenya: An Explanatory Mixed Methods Study
ERIC Educational Resources Information Center
Kamau, Leonard Mwathi
2014-01-01
This study examined the factors related to technology adoption by secondary mathematics teachers in Nyandarua and Nairobi counties in the Republic of Kenya. Using a sequential explanatory mixed methods approach, I collected qualitative data from interviews and classroom observations of six teachers to better understand statistical results from the…
ERIC Educational Resources Information Center
McWayne, Christine M.; Mattis, Jacqueline S.; Green Wright, Linnie E.; Limlingan, Maria Cristina; Harris, Elise
2017-01-01
Research Findings: This within-group exploratory sequential mixed-methods investigation sought to identify how ethnically diverse, urban-residing, low-income Black families conceptualize positive parenting. During the item development phase 119 primary caregivers from Head Start programs participated in focus groups and interviews. These…
Ultra-low level plutonium isotopes in the NIST SRM 4355A (Peruvian Soil-1).
Inn, Kenneth G W; LaRosa, Jerome; Nour, Svetlana; Brooks, George; LaMont, Steve; Steiner, Rob; Williams, Ross; Patton, Brad; Bostick, Debbie; Eiden, Gregory; Petersen, Steve; Douglas, Matthew; Beals, Donna; Cadieux, James; Hall, Greg; Goldberg, Steve; Vogt, Stephan
2009-05-01
For more than 20 years, countries and their agencies that monitor radionuclide discharge sites and storage facilities have relied on the National Institute of Standards and Technology (NIST) Standard Reference Material (SRM) 4355 Peruvian Soil. Its low fallout contamination makes it an ideal soil blank for measurements associated with terrestrial-pathway-to-man studies. SRM 4355 is presently out of stock, and a new batch of the Peruvian soil is under development as the future NIST SRM 4355A. Both the environmental radioanalytical and mass spectrometry communities will benefit from the use of this SRM. The former must assess their laboratory procedural contamination and measurement detection limits by measuring blank sample material, and the Peruvian Soil is so low in anthropogenic radionuclide content that it is a suitable virtual blank. Mass spectrometric laboratories, on the other hand, have high-sensitivity instruments capable of quantitative isotopic measurements at the low plutonium levels in SRM 4355 (the first Peruvian Soil SRM), which provided the mass spectrometric community with the calibration, quality control, and testing material needed for methods development and legal defensibility. The quantification of the ultra-low plutonium content in SRM 4355A was a considerable challenge for the mass spectrometric laboratories. Careful blank control and correction, control of isobaric interferences, instrument stability, peak assessment, and detection assessment were necessary. Furthermore, a systematic statistical evaluation of the measurement results and considerable discussions with the mass spectrometry metrologists were needed to derive the certified values and uncertainties. The one-sided upper limit of the 95% tolerance interval, at 95% confidence, for the massic 239Pu content in SRM 4355A is estimated to be 54,000 atoms/g.
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. When using MUSIC, computation of the eigenvectors of the correlation matrix is required for the estimation, which often entails a high computational cost. In the case of a moving source this becomes a crucial drawback, because the estimation must be carried out at every observation time. Moreover, since the characteristics of the correlation matrix vary due to spatio-temporal non-stationarity, the matrix has to be estimated using only a few observed samples, which degrades the estimation accuracy. In this paper, the PAST (Projection Approximation Subspace Tracking) algorithm is applied to sequentially estimate the eigenvectors spanning the subspace. In the PAST, eigen-decomposition is not required, and therefore the computational cost can be reduced. Several experimental results in actual room environments are shown to demonstrate the superior performance of the proposed method.
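A minimal sketch of a PAST-style recursion is shown below; the array size, forgetting factor and synthetic data are illustrative and do not reproduce the authors' implementation:

```python
import numpy as np

def past_update(W, P, x, beta=0.97):
    """One step of a PAST (Yang, 1995)-style subspace tracking recursion.

    W : (m, r) current estimate of a basis of the signal subspace
    P : (r, r) inverse correlation matrix of the projected data
    x : (m,) new array snapshot
    """
    y = W.conj().T @ x
    h = P @ y
    g = h / (beta + np.vdot(y, h).real)
    P = (P - np.outer(g, h.conj())) / beta
    e = x - W @ y
    W = W + np.outer(e, g.conj())
    return W, P

# Toy demo: track the dominant 2-D subspace of noisy rank-2 data
rng = np.random.default_rng(1)
m, r = 8, 2
A = rng.standard_normal((m, r))                      # true subspace basis
W = np.linalg.qr(rng.standard_normal((m, r)))[0]     # initial guess
P = np.eye(r)
for _ in range(500):
    x = A @ rng.standard_normal(r) + 0.05 * rng.standard_normal(m)
    W, P = past_update(W, P, x)

# Principal angles between tracked and true subspaces should be small
Q_true, Q_est = np.linalg.qr(A)[0], np.linalg.qr(W)[0]
print(np.linalg.svd(Q_true.T @ Q_est, compute_uv=False))   # singular values close to 1
```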
Efficient sequential and parallel algorithms for record linkage
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Background and objective: Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods: Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results: Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions: We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
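The overall flow (grouping exact duplicates, linking similar records into a graph, and reading off connected components) can be sketched as follows; the toy same-surname-and-birthdate similarity rule stands in for the edit-distance comparisons used in the paper:

```python
from collections import defaultdict

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]   # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

records = [
    ("john", "smith", "1970-01-02"),
    ("jon",  "smith", "1970-01-02"),
    ("john", "smith", "1970-01-02"),   # exact duplicate of record 0
    ("mary", "jones", "1980-05-06"),
]

# Step 1: group exact duplicates by keying on all attributes (stand-in for radix sorting)
exact = defaultdict(list)
for i, rec in enumerate(records):
    exact[rec].append(i)

# Step 2: link "similar" unique records (here: same surname and birth date)
uniques = [ids[0] for ids in exact.values()]
uf = UnionFind(len(records))
for ids in exact.values():                     # exact duplicates are linked trivially
    for j in ids[1:]:
        uf.union(ids[0], j)
for a in uniques:
    for b in uniques:
        if a < b and records[a][1:] == records[b][1:]:
            uf.union(a, b)

# Step 3: connected components = linked clusters
clusters = defaultdict(list)
for i in range(len(records)):
    clusters[uf.find(i)].append(i)
print(list(clusters.values()))   # e.g., [[0, 1, 2], [3]]
```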
Pamnani, Shitaldas J.; Nyitray, Alan G.; Abrahamsen, Martha; Rollison, Dana E.; Villa, Luisa L.; Lazcano-Ponce, Eduardo; Huang, Yangxin; Borenstein, Amy; Giuliano, Anna R.
2016-01-01
Background. The purpose of this study was to assess the risk of sequential acquisition of anal human papillomavirus (HPV) infection following a type-specific genital HPV infection for the 9-valent vaccine HPV types and investigate factors associated with sequential infection among men who have sex with women (MSW). Methods. Genital and anal specimens were available for 1348 MSW participants, and HPV genotypes were detected using the Roche Linear Array assay. Sequential risk of anal HPV infection was assessed using hazard ratios (HRs) among men with prior genital infection, compared with men with no prior genital infection, in individual HPV type and grouped HPV analyses. Results. In individual analyses, men with prior HPV 16 genital infections had a significantly higher risk of subsequent anal HPV 16 infections (HR, 4.63; 95% confidence interval [CI], 1.41–15.23). In grouped analyses, a significantly higher risk of sequential type-specific anal HPV infections was observed for any of the 9 types (adjusted HR, 2.80; 95% CI, 1.32–5.99), high-risk types (adjusted HR, 2.65; 95% CI, 1.26, 5.55), and low-risk types (adjusted HR, 5.89; 95% CI, 1.29, 27.01). Conclusions. MSW with prior genital HPV infections had a higher risk of a subsequent type-specific anal infection. The higher risk was not explained by sexual intercourse with female partners. Autoinoculation is a possible mechanism for the observed association. PMID:27489298
Lee, Tae Hoon; Hwang, Soon Oh; Choi, Hyun Jong; Jung, Yunho; Cha, Sang Woo; Chung, Il-Kwun; Moon, Jong Ho; Cho, Young Deok; Park, Sang-Heum; Kim, Sun-Joo
2014-02-17
Numerous clinical trials aiming to improve the success rate of biliary access in difficult biliary cannulation (DBC) during ERCP have been reported. However, standard guidelines and sequential protocol analyses of the different methods remain limited. We therefore investigated a sequential protocol to facilitate selective biliary access for DBC during ERCP. This prospective clinical study enrolled 711 patients with naïve papillae at a tertiary referral center. If wire-guided cannulation was deemed to have failed according to the DBC criteria, then, following the cannulation algorithm, early precut fistulotomy (EPF; cannulation time > 5 min, papillary contacts > 5 times, or hook-nose-shaped papilla), double-guidewire cannulation (DGC; unintentional pancreatic duct cannulation ≥ 3 times), and precut after placement of a pancreatic stent (PPS; if DGC was difficult or failed) were performed sequentially. The main outcome measurements were technical success, procedure outcomes, and complications. Initially, a total of 140 (19.7%) patients with DBC underwent EPF (n = 71) or DGC (n = 69). In the DGC group, 36 patients then switched to PPS because they met the difficulty criteria. The successful biliary cannulation rate was 97.1% (136/140; 94.4% [67/71] with EPF, 47.8% [33/69] with DGC, and 100% [36/36] with PPS; P < 0.001). The mean (standard deviation) successful cannulation time was 559.4 (412.8) seconds with EPF, 314.8 (65.2) seconds with DGC, and 706.0 (469.4) seconds with PPS (P < 0.05). The DGC group had a relatively low successful cannulation rate (47.8%) but a shorter cannulation time than the other groups, owing to early switching to the PPS method when DGC was difficult or failed. Post-ERCP pancreatitis developed in 14 (10%) patients (9 mild, 1 moderate), which did not differ significantly among the groups (P = 0.870) or compared with the conventional group (P = 0.125). Based on this sequential protocol analysis, EPF, DGC, and PPS appear safe and feasible for DBC. The use of EPF under the selected DBC criteria, DGC for unintentional pancreatic duct cannulations, and PPS for failed or difficult DGC may facilitate successful biliary cannulation.
64 slice MDCT generally underestimates coronary calcium scores as compared to EBT: A phantom study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greuter, M. J. W.; Dijkstra, H.; Groen, J. M.
The objective of our study was to determine the influence of the sequential and spiral acquisition modes on the concordance and deviation of the calcium score on 64-slice multi-detector computed tomography (MDCT) scanners in comparison with electron beam tomography (EBT) as the gold standard. Our materials and methods comprised an anthropomorphic cardio CT phantom with different calcium inserts, scanned in sequential and spiral acquisition modes on three identical 64-slice MDCT scanners of manufacturer A, on three identical 64-slice MDCT scanners of manufacturer B, and on an EBT system. Every scan was repeated 30 times with, and 15 times without, a small random variation in the phantom position for both sequential and spiral modes. Significant differences were observed between the EBT and 64-slice MDCT data for all inserts, both acquisition modes, and both manufacturers of MDCT systems. High regression coefficients (0.90-0.98) were found between the EBT and 64-slice MDCT data for both scoring methods and both systems, with high correlation coefficients (R² > 0.94). System A showed more significant differences between spiral and sequential modes than system B. Almost no differences were observed between scanners of the same manufacturer for the Agatston score, and none for the Volume score. The deviations of the Agatston and Volume scores showed regression dependencies approximately equal to the square root of the absolute score. The Agatston and Volume scores obtained with 64-slice MDCT imaging are highly correlated with EBT-obtained scores but are significantly underestimated (-10% to -2%) for both sequential and spiral acquisition modes. The calcium scores of system B are less dependent on acquisition mode than those of system A. The Volume score shows no intramanufacturer dependency, and its use is advocated over the Agatston score. Using the same cut points for MDCT-based calcium scores as for EBT-based calcium scores can result in classifying individuals into too low a risk category. System information and the scan protocol are therefore needed for every calcium scoring procedure to ensure a correct clinical interpretation of the obtained calcium score results.
Somnam, Sarawut; Jakmunee, Jaroon; Grudpan, Kate; Lenghor, Narong; Motomizu, Shoji
2008-12-01
An automated hydrodynamic sequential injection (HSI) system with spectrophotometric detection was developed. Thanks to the hydrodynamic injection principle, simple devices can be used to introduce reproducible microliter volumes of both sample and reagent into the flow channel, forming stacked zones in a similar fashion to those in a sequential injection system. The zones were then pushed to the detector and a peak profile was recorded. The determination of nitrite and nitrate in water samples employing the Griess reaction was chosen as a model. Calibration graphs with linearity in the range of 0.7-40 μM were obtained for both nitrite and nitrate. Detection limits were found to be 0.3 μM NO(2)(-) and 0.4 μM NO(3)(-), respectively, with a sample throughput of 20 h(-1) for consecutive determination of both species. The developed system was successfully applied to the analysis of water samples, employing simple and cost-effective instrumentation and offering a high degree of automation and low chemical consumption.
Time scale of random sequential adsorption.
Erban, Radek; Chapman, S Jonathan
2007-04-01
A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
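A minimal sketch of the RSA rule, one deposition attempt per time step with overlap rejection, is shown below for the one-dimensional case (Rényi's car-parking problem); segment length, line length and attempt count are illustrative choices, not the parameters of the paper's model:

```python
import numpy as np

def rsa_parking_1d(length=200.0, attempts=50000, seed=0):
    """Random sequential adsorption of unit segments on a line.

    One adsorption attempt is made per RSA time step; an attempt succeeds
    only if the new segment does not overlap any previously adsorbed one.
    """
    rng = np.random.default_rng(seed)
    placed = []                                  # left ends of adsorbed unit segments
    for _ in range(attempts):
        x = rng.uniform(0.0, length - 1.0)
        if all(abs(x - p) >= 1.0 for p in placed):
            placed.append(x)
    return len(placed) / length                  # fraction of the line covered

print(rsa_parking_1d())   # slowly approaches the jamming coverage of about 0.7476
```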
Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing
2017-11-10
The speckle pattern (line-by-line) sequential extraction (SPSE) metric is proposed based on one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and the effect of the observation system's aperture size on the metric's performance. The results of the analyses are verified by experiment. The method is applied to the detection of relatively static targets (speckle jitter frequency lower than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance and, moreover, can estimate the spot size under some conditions. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications to help the system optimize its focusing performance.
Fan, X Q
2017-08-11
Retinoblastoma (RB) is the most common intraocular malignancy in childhood. It may seriously affect vision and even threaten life. The early diagnosis rate of RB in China remains low, and the majority of patients present at a late phase with high rates of enucleation and mortality. The International Intraocular Retinoblastoma Classification and the TNM staging system provide guidance for therapeutic choices and a basis for prognosis evaluation. Within the sequential multi-method treatment modality, chemotherapy combined with local therapy is the mainstream approach to RB, which maximizes the chances of eye salvage and even vision retention. New therapeutic techniques, including supra-selective ophthalmic artery interventional chemotherapy and intravitreal chemotherapy, can further improve the efficacy of treatment, especially the eye salvage rate. The overall level of RB treatment should be improved by promoting the international staging, the new therapeutic techniques, and the sequential multiple modality treatment. (Chin J Ophthalmol, 2017, 53: 561-565).
Hsieh, Tsung-Yu; Huang, Chi-Kai; Su, Tzu-Sen; Hong, Cheng-You; Wei, Tzu-Chien
2017-03-15
Crystal morphology and structure are important for improving organic-inorganic lead halide perovskite semiconductor properties in optoelectronic, electronic, and photovoltaic devices. In particular, crystal growth and dissolution are the two major phenomena determining the morphology of methylammonium lead iodide perovskite in the sequential deposition method for fabricating a perovskite solar cell. In this report, the effect of immersion time in the second step, i.e., methylammonium iodide immersion, on the morphological, structural, optical, and photovoltaic evolution is extensively investigated. Supported by experimental evidence, a five-stage, time-dependent evolution of the morphology of methylammonium lead iodide perovskite crystals is established and is well connected to the photovoltaic performance. This result is beneficial for engineering the optimal methylammonium iodide immersion time and converging the solar cell performance in the sequential deposition route. Meanwhile, our results suggest that large, well-faceted methylammonium lead iodide perovskite single crystals may be incubated by a solution process, offering a low-cost route for synthesizing perovskite single crystals.
Sequential decision tree using the analytic hierarchy process for decision support in rectal cancer.
Suner, Aslı; Çelikoğlu, Can Cengiz; Dicle, Oğuz; Sökmen, Selman
2012-09-01
The aim of the study is to determine the most appropriate method for construction of a sequential decision tree in the management of rectal cancer, using various patient-specific criteria and treatments such as surgery, chemotherapy, and radiotherapy. An analytic hierarchy process (AHP) was used to determine the priorities of variables. Relevant criteria used in two decision steps and their relative priorities were established by a panel of five general surgeons. Data were collected via a web-based application and analyzed using the "Expert Choice" software specifically developed for the AHP. Consistency ratios in the AHP method were calculated for each set of judgments, and the priorities of sub-criteria were determined. A sequential decision tree was constructed for the best treatment decision process, using priorities determined by the AHP method. Consistency ratios in the AHP method were calculated for each decision step, and the judgments were considered consistent. The tumor-related criterion "presence of perforation" (0.331) and the patient-surgeon-related criterion "surgeon's experience" (0.630) had the highest priority in the first decision step. In the second decision step, the tumor-related criterion "the stage of the disease" (0.230) and the patient-surgeon-related criterion "surgeon's experience" (0.281) were the paramount criteria. The results showed some variation in the ranking of criteria between the decision steps; in the second decision step, for instance, the tumor-related criterion "presence of perforation" ranked only fifth. The consistency of decision support systems largely depends on the quality of the underlying decision tree. When several choices and variables have to be considered in a decision, it is very important to determine priorities. The AHP method seems to be effective for this purpose. The decision algorithm developed by this method is more realistic and will improve the quality of the decision tree. Copyright © 2012 Elsevier B.V. All rights reserved.
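For illustration, the following sketch computes AHP priority weights and a consistency ratio from a pairwise-comparison matrix via the principal eigenvector; the criteria and matrix values are hypothetical and do not represent the surgeons' actual judgments:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights and consistency ratio from an AHP pairwise-comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                  # normalized priority vector
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                     # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]              # Saaty's random index
    return w, ci / ri                                # priorities, consistency ratio

# Hypothetical 3-criterion comparison (e.g., perforation vs. disease stage vs. surgeon's experience)
A = [[1,    3,   1/2],
     [1/3,  1,   1/4],
     [2,    4,   1  ]]
w, cr = ahp_priorities(A)
print(np.round(w, 3), round(cr, 3))   # judgments are usually deemed consistent if CR < 0.1
```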
Photonic polymer-blend structures and method for making
Barnes, Michael D.
2004-06-29
The present invention comprises the formation of photonic polymer-blend structures having tunable optical and mechanical properties. The photonic polymer-blend structures comprise monomer units of spherical microparticles of a polymer-blend material, wherein the spherical microparticles have surfaces partially merged with one another in a robust inter-particle bond with a tunable inter-particle separation or bond length, and are sequentially attached in a desired and programmable architecture. The photonic polymer-blend structures of the present invention can comprise several hundred individual particles sequentially linked to form complex three-dimensional structures or highly ordered two-dimensional arrays of 3D columns with 2D spacing.
Sequential infiltration synthesis for advanced lithography
Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih; Peng, Qing
2015-03-17
A plasma etch resist material modified by an inorganic protective component via sequential infiltration synthesis (SIS) and methods of preparing the modified resist material. The modified resist material is characterized by an improved resistance to a plasma etching or related process relative to the unmodified resist material, thereby allowing formation of patterned features into a substrate material, which may be high-aspect ratio features. The SIS process forms the protective component within the bulk resist material through a plurality of alternating exposures to gas phase precursors which infiltrate the resist material. The plasma etch resist material may be initially patterned using photolithography, electron-beam lithography or a block copolymer self-assembly process.
NASA Astrophysics Data System (ADS)
Zhaunerchyk, V.; Kamińska, M.; Mucke, M.; Squibb, R. J.; Eland, J. H. D.; Piancastelli, M. N.; Frasinski, L. J.; Grilj, J.; Koch, M.; McFarland, B. K.; Sistrunk, E.; Gühr, M.; Coffee, R. N.; Bostedt, C.; Bozek, J. D.; Salén, P.; Meulen, P. v. d.; Linusson, P.; Thomas, R. D.; Larsson, M.; Foucar, L.; Ullrich, J.; Motomura, K.; Mondal, S.; Ueda, K.; Richter, R.; Prince, K. C.; Takahashi, O.; Osipov, T.; Fang, L.; Murphy, B. F.; Berrah, N.; Feifel, R.
2015-12-01
Competing multi-photon ionization processes, some leading to the formation of double core hole states, have been examined in 4-aminophenol. The experiments used the linac coherent light source (LCLS) x-ray free electron laser, in combination with a time-of-flight magnetic bottle electron spectrometer and the correlation analysis method of covariance mapping. The results imply that 4-aminophenol molecules exposed to the focused x-ray pulses of the LCLS sequentially absorb more than two x-ray photons, resulting in the formation of multiple core holes as well as in the sequential removal of photoelectrons and Auger electrons (so-called PAPA sequences).
Davletbaeva, Polina; Chocholouš, Petr; Bulatov, Andrey; Šatínský, Dalibor; Solich, Petr
2017-09-05
Sequential Injection Chromatography (SIC) evolved from fast, automated, non-separation Sequential Injection Analysis (SIA) into a chromatographic separation method for multi-analyte determination. However, the chromatographic step significantly reduces the speed of the measurement (sample throughput). In this paper, a sub-1 min separation on a medium-polarity cyano monolithic column (5 mm × 4.6 mm) provided a fast and green separation with a sample throughput comparable to non-separation flow methods. The separation of three synthetic water-soluble dyes (sunset yellow FCF, carmoisine and green S) was performed in gradient elution mode (0.02% ammonium acetate, pH 6.7 - water) at a flow rate of 3.0 mL min-1, corresponding to a sample throughput of 30 h-1. Spectrophotometric detection wavelengths were set to 480, 516 and 630 nm with a 10 Hz data collection rate. The performance of the separation was described and discussed (peak capacities 3.48-7.67, peak symmetries 1.72-1.84 and resolutions 1.42-1.88). The method was characterized by the following validation parameters: LODs of 0.15-0.35 mg L-1, LOQs of 0.50-1.25 mg L-1, calibration ranges of 0.50-150.00 mg L-1 (r > 0.998) and repeatability at 10.0 mg L-1 of RSD ≤ 0.98% (n = 6). The method was used for determination of the dyes in a "forest berries"-colored pharmaceutical cough-cold formulation. The sample matrix - pharmaceuticals and excipients - did not interfere with the visible-range determination because of its lack of retention on the separation column and its colorless nature. The results proved the concept of a fast and green chromatography approach using a very short medium-polarity monolithic column in SIC. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce the parameter uncertainty by considering soil moisture observations. However, these observations, and also the model forcings, were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single-time-step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, with single parameter sets valid for the whole period, and sequential Monte Carlo techniques, with parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
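A minimal sketch of the underlying SIR particle-filter cycle (propagate, weight, resample) is given below with a toy scalar state; the forward model, observation error and state values are illustrative assumptions, and the smoother variant described above would accumulate the likelihood weights over a window of time steps before resampling states and parameters:

```python
import numpy as np

def sir_step(particles, weights, obs, forward_model, obs_std, rng):
    """One bootstrap (SIR) particle-filter cycle: propagate, weight, resample."""
    # Propagate each particle through the (stochastic) forward model
    particles = forward_model(particles, rng)
    # Update weights with a Gaussian observation likelihood
    weights = weights * np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Systematic resampling
    n = particles.size
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)
    return particles[idx], np.full(n, 1.0 / n)

# Toy example: scalar "soil moisture" state with a linear drying model
rng = np.random.default_rng(0)
model = lambda x, rng: 0.95 * x + 0.01 * rng.standard_normal(x.size)
particles = rng.uniform(0.2, 0.4, size=500)
weights = np.full(500, 1.0 / 500)
for obs in [0.33, 0.31, 0.30, 0.28]:           # synthetic observations
    particles, weights = sir_step(particles, weights, obs, model, 0.02, rng)
print(particles.mean())
```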
Mercedes Berterretche; Andrew T. Hudak; Warren B. Cohen; Thomas K. Maiersperger; Stith T. Gower; Jennifer Dungan
2005-01-01
This study compared aspatial and spatial methods of using remote sensing and field data to predict maximum growing season leaf area index (LAI) maps in a boreal forest in Manitoba, Canada. The methods tested were orthogonal regression analysis (reduced major axis, RMA) and two geostatistical techniques: kriging with an external drift (KED) and sequential Gaussian...
NASA Astrophysics Data System (ADS)
Skarnemark, Gunnar; Allard, Stefan; Ekberg, Christian; Nordlund, Anders
2009-08-01
The need for engineers and scientists who can ensure the safe and secure use of nuclear energy is large in Sweden and internationally. Chalmers University of Technology is therefore launching a new 2-year master's program in Nuclear Engineering, starting in the autumn of 2009. The program is open to Swedish and foreign students. The program starts with compulsory courses dealing with the basics of nuclear chemistry and physics, radiation protection, nuclear power and reactors, nuclear fuel supply, nuclear waste management, and nuclear safety and security. There are also compulsory courses in nuclear industry applications and sustainable energy futures. The subsequent elective courses can be chosen freely, but there is also a possibility to follow informal tracks that concentrate on nuclear chemistry or on reactor technology and physics. The nuclear chemistry track comprises courses in, for example, the chemistry of lanthanides, actinides and transactinides, solvent extraction, radioecology, radioanalytical chemistry and radiopharmaceuticals. The program finishes with a one-semester thesis project. The program is probably unique in its combination of in-depth courses in both nuclear technology and nuclear chemistry.
Living systematic reviews: 3. Statistical methods for updating meta-analyses.
Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian
2017-11-01
A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods, the law of the iterated logarithm and the Shuster method, control primarily for inflation of type I error; the two other methods, trial sequential analysis and sequential meta-analysis, control for type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
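To make the repeated-testing issue concrete, the sketch below runs a plain cumulative fixed-effect (inverse-variance) meta-analysis and checks the pooled z statistic against the naive 1.96 threshold at every update; the adjusted boundaries of the four methods discussed above would replace that fixed threshold. The study effects and standard errors are invented.

```python
# Minimal sketch of a cumulative (updated) fixed-effect meta-analysis using
# inverse-variance pooling. Testing the pooled effect against a fixed 1.96
# threshold at every update inflates the type I error, which is the problem
# the LIL, Shuster, TSA and sequential meta-analysis methods address.
import numpy as np

effects = np.array([0.30, 0.10, 0.25, 0.05, 0.18])   # study effect estimates (made up)
ses     = np.array([0.20, 0.15, 0.18, 0.12, 0.10])   # their standard errors (made up)

weights = 1.0 / ses ** 2
for k in range(1, len(effects) + 1):
    w, y = weights[:k], effects[:k]
    pooled = np.sum(w * y) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    z = pooled / pooled_se
    # Naive criterion; a living review would substitute an adjusted boundary here.
    print(f"update {k}: pooled={pooled:.3f}, z={z:.2f}, naive |z|>1.96: {abs(z) > 1.96}")
```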
The analysis of verbal interaction sequences in dyadic clinical communication: a review of methods.
Connor, Martin; Fletcher, Ian; Salmon, Peter
2009-05-01
To identify methods available for sequential analysis of dyadic verbal clinical communication and to review their methodological and conceptual differences. Critical review, based on literature describing sequential analyses of clinical and other relevant social interaction. Dominant approaches are based on analysis of communication according to its precise position in the series of utterances that constitute event-coded dialogue. For practical reasons, methods focus on very short-term processes, typically the influence of one party's speech on what the other says next. Studies of longer-term influences are rare. Some analyses have statistical limitations, particularly in disregarding heterogeneity between consultations, patients or practitioners. Additional techniques, including ones that can use information about the timing and duration of speech from interval coding, are becoming available. There is a danger that the constraints of commonly used methods shape research questions and divert researchers from potentially important communication processes, including ones that operate over a longer term than one or two speech turns. Given that no one method can model the complexity of clinical communication, multiple methods, both quantitative and qualitative, are necessary. Broadening the range of methods will allow the current emphasis on exploratory studies to be balanced by tests of hypotheses about clinically important communication processes.
Leontaridou, Maria; Gabbert, Silke; Van Ierland, Ekko C; Worth, Andrew P; Landsiedel, Robert
2016-07-01
This paper offers a Bayesian Value-of-Information (VOI) analysis for guiding the development of non-animal testing strategies, balancing information gains from testing with the expected social gains and costs from the adoption of regulatory decisions. Testing is assumed to have value if, and only if, the information revealed from testing triggers a welfare-improving decision on the use (or non-use) of a substance. As an illustration, our VOI model is applied to a set of five individual non-animal prediction methods used for skin sensitisation hazard assessment, seven battery combinations of these methods, and 236 sequential 2-test and 3-test strategies. Their expected values are quantified and compared to the expected value of the local lymph node assay (LLNA) as the animal method. We find that battery and sequential combinations of non-animal prediction methods reveal a significantly higher expected value than the LLNA. This holds for the entire range of prior beliefs. Furthermore, our results illustrate that the testing strategy with the highest expected value does not necessarily have to follow the order of key events in the sensitisation adverse outcome pathway (AOP). 2016 FRAME.
Zhang, Lei; Zhao, Haiyu; Liu, Yang; Dong, Honghuan; Lv, Beiran; Fang, Min; Zhao, Huihui
2016-06-01
This study was conducted to establish the multicomponent sequential metabolism (MSM) method based on comparative analysis along the digestive system following oral administration of licorice (Glycyrrhiza uralensis Fisch., Leguminosae), a traditional Chinese medicine widely used for harmonizing other ingredients in a formula. The licorice water extract (LWE) dissolved in Krebs-Ringer buffer solution (1 g/mL) was used to carry out the experiments, and the comparative analysis was performed using HPLC and LC-MS/MS methods. In vitro incubation, in situ closed-loop and in vivo blood sampling were used to measure the LWE metabolic profile along the digestive system. The incubation experiment showed that the LWE was essentially stable in digestive juice. The comparative analysis then presented the metabolic profile of each prototype compound and its corresponding metabolites. The liver was the major metabolic organ for LWE, and metabolism by the intestinal flora and the gut wall was also an important part of the process. The MSM method was practical and could be a potential method to describe the metabolic routes of multiple components before absorption into the systemic blood stream. Copyright © 2015 John Wiley & Sons, Ltd.
Muik, Barbara; Edelmann, Andrea; Lendl, Bernhard; Ayora-Cañada, María José
2002-09-01
An automated method for measuring the primary amino acid concentration in wine fermentations by sequential injection analysis with spectrophotometric detection was developed. Isoindole derivatives of the primary amino acids were formed by reaction with o-phthaldialdehyde and N-acetyl-L-cysteine and measured at 334 nm with respect to a baseline point at 700 nm to compensate for the observed Schlieren effect. As the reaction kinetics were strongly matrix dependent, the analytical readout was evaluated at the final reaction equilibrium. Therefore, four parallel reaction coils were included in the flow system so that four samples could be processed simultaneously. Using isoleucine as the representative primary amino acid in wine fermentations, a linear calibration curve from 2 to 10 mM isoleucine, corresponding to 28 to 140 mg nitrogen/L (N/L), was obtained. The coefficient of variation of the method was 1.5% at a throughput of 12 samples per hour. The developed method was successfully used to monitor two wine fermentations during alcoholic fermentation. The results were in agreement with an external reference method based on high-performance liquid chromatography. A mean t-test showed no significant differences between the two methods at a confidence level of 95%.
Modeling sustainability in renewable energy supply chain systems
NASA Astrophysics Data System (ADS)
Xie, Fei
This dissertation aims at modeling the sustainability of renewable fuel supply chain systems against emerging challenges. In particular, the dissertation focuses on biofuel supply chain system design and develops advanced modeling frameworks and corresponding solution methods to tackle challenges in sustaining biofuel supply chain systems. These challenges include: (1) integrating "environmental thinking" into long-term biofuel supply chain planning; (2) adopting multimodal transportation to mitigate seasonality in biofuel supply chain operations; (3) providing strategies for hedging against uncertainty in conversion technology; and (4) developing methodologies for long-term sequential planning of the biofuel supply chain under uncertainties. All models are mixed integer programs, which also involve multi-objective programming and two-stage/multistage stochastic programming methods. In particular, for the long-term sequential planning under uncertainties, to reduce the computational challenges due to the exponential expansion of the scenario tree, I also developed an efficient ND-Max method that is more efficient than CPLEX and the Nested Decomposition method. Through result analyses of four independent studies, it is found that the proposed modeling frameworks can effectively improve economic performance, enhance environmental benefits and reduce risks due to system uncertainties for biofuel supply chain systems.
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to these codes. These concepts are then used to demonstrate actual sequential decoding by example, using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be used to sequentially find the minimum-weight error sequence.
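As a hedged illustration of stack-algorithm sequential decoding (not the syndrome-based formulation of the paper), the sketch below decodes a toy rate-1/2 convolutional code (generators 7 and 5 octal) over a binary symmetric channel using a Fano-style per-bit metric; the crossover probability and the code itself are assumptions made for the example.

```python
# Minimal sketch of stack-algorithm sequential decoding for a toy rate-1/2
# convolutional code (generators 7, 5 octal) over a binary symmetric channel,
# using a Fano-style per-bit metric. Parameters are illustrative only.
import heapq
from math import log2

P = 0.05                       # assumed channel crossover probability
R = 0.5                        # code rate
GOOD = log2(2 * (1 - P)) - R   # metric increment for a matching received bit
BAD = log2(2 * P) - R          # metric increment for a mismatching bit

def encode_bit(u, state):
    s1, s2 = state
    return (u ^ s1 ^ s2, u ^ s2), (u, s1)   # outputs for generators 7 and 5

def encode(bits):
    state, out = (0, 0), []
    for u in bits:
        o, state = encode_bit(u, state)
        out.extend(o)
    return out

def stack_decode(received, n_bits):
    # stack entries: (-metric, decoded_prefix, encoder_state); best path popped first
    stack = [(0.0, (), (0, 0))]
    while stack:
        neg_m, path, state = heapq.heappop(stack)
        if len(path) == n_bits:
            return list(path)
        for u in (0, 1):
            out, nxt = encode_bit(u, state)
            rx = received[2 * len(path): 2 * len(path) + 2]
            m = -neg_m + sum(GOOD if o == r else BAD for o, r in zip(out, rx))
            heapq.heappush(stack, (-m, path + (u,), nxt))

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = encode(message)
received[3] ^= 1                                          # flip one channel bit
print(stack_decode(received, len(message)) == message)    # expect True
```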
Robustness of the sequential lineup advantage.
Gronlund, Scott D; Carlson, Curt A; Dailey, Sarah B; Goodsell, Charles A
2009-06-01
A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup advantages and 3 significant simultaneous advantages. Both sequential advantages occurred when the good photograph of the guilty suspect or either innocent suspect was in the fifth position in the sequential lineup; all 3 simultaneous advantages occurred when the poorer quality photograph of the guilty suspect or either innocent suspect was in the second position. Adjusting the statistical criterion to control for the multiple tests (.05/24) revealed no significant sequential advantages. Moreover, despite finding more conservative overall choosing for the sequential lineup, no support was found for the proposal that a sequential advantage was due to that conservative criterion shift. Unless lineups with particular characteristics predominate in the real world, there appears to be no strong preference for conducting lineups in either a sequential or a simultaneous manner. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Nonvolatile reconfigurable sequential logic in a HfO2 resistive random access memory array.
Zhou, Ya-Xiong; Li, Yi; Su, Yu-Ting; Wang, Zhuo-Rui; Shih, Ling-Yi; Chang, Ting-Chang; Chang, Kuan-Chang; Long, Shi-Bing; Sze, Simon M; Miao, Xiang-Shui
2017-05-25
Resistive random access memory (RRAM) based reconfigurable logic provides a temporal programmable dimension to realize Boolean logic functions and is regarded as a promising route to build non-von Neumann computing architectures. In this work, a reconfigurable operation method is proposed to perform nonvolatile sequential logic in a HfO2-based RRAM array. Eight kinds of Boolean logic functions can be implemented within the same hardware fabrics. During the logic computing processes, the RRAM devices in an array are flexibly configured in a bipolar or complementary structure. The validity was demonstrated by experimentally implemented NAND and XOR logic functions and a theoretically designed 1-bit full adder. With the trade-off between temporal and spatial computing complexity, our method makes better use of limited computing resources and thus provides an attractive scheme for the construction of logic-in-memory systems.
Crock, J.G.; Lichte, F.E.; Riddle, G.O.; Beech, C.L.
1986-01-01
The abundance of rare-earth elements (REE) and yttrium in geological materials is generally low, and most samples contain elements that interfere in the determination of the REE and Y, so a separation and/or preconcentration step is often necessary. This is often achieved by ion-exchange chromatography with either nitric or hydrochloric acid. It is advantageous, however, to use both acids sequentially. The final solution thus obtained contains only the REE and Y, with minor amounts of Al, Ba, Ca, Sc, Sr and Ti. Elements that potentially interfere, such as Be, Co, Cr, Fe, Mn, Th, U, V and Zr, are virtually eliminated. Inductively-coupled argon plasma atomic-emission spectroscopy can then be used for a final precise and accurate measurement. The method can also be used with other instrumental methods of analysis. © 1986.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chrisochoides, N.; Sukup, F.
In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, because it obviates the need for global mesh refinement. Its implementation on distributed-memory multicomputers using the traditional data-parallel model has proven very inefficient due to the excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate the synchronization costs inherent to data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimum overhead compared to the "best" sequential implementation of the BW algorithm.
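For readers unfamiliar with the kernel being parallelized, the following is a minimal sequential sketch of the Bowyer-Watson insertion step in 2D: remove the triangles whose circumcircles contain the new point and re-triangulate the resulting cavity boundary. The super-triangle bounds and the use of inexact floating-point predicates are simplifying assumptions.

```python
# Minimal sketch of sequential Bowyer-Watson incremental Delaunay insertion in 2D.
# Robustness issues (exact predicates, degenerate points) are ignored.
import random

def orient(a, b, c):
    # twice the signed area; > 0 when (a, b, c) is counter-clockwise
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_circumcircle(a, b, c, p):
    if orient(a, b, c) < 0:
        b, c = c, b                       # enforce counter-clockwise order
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

def bowyer_watson(points):
    st = [(-10.0, -10.0), (10.0, -10.0), (0.0, 10.0)]   # bounds unit-square data
    triangles = [tuple(st)]
    for p in points:
        bad = [t for t in triangles if in_circumcircle(*t, p)]   # cavity triangles
        edges = {}
        for t in bad:                                    # cavity boundary edges
            for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                key = frozenset(e)                       # seen once -> boundary,
                edges[key] = None if key in edges else e # seen twice -> interior
        triangles = [t for t in triangles if t not in bad]
        triangles += [(e[0], e[1], p) for e in edges.values() if e is not None]
    return [t for t in triangles if not any(v in st for v in t)]

pts = [(random.random(), random.random()) for _ in range(30)]
print(len(bowyer_watson(pts)), "Delaunay triangles")
```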
A sequential coalescent algorithm for chromosomal inversions
Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M
2013-01-01
Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894
Composite SAR imaging using sequential joint sparsity
NASA Astrophysics Data System (ADS)
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.
2017-06-01
This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, practical and efficient implementation for real-time imaging remains a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are as efficient as, or more efficient than, traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging which naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher-order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally, we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.
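A hedged sketch of the basic building block behind such methods is given below: ISTA for generic ℓ1-regularized least squares. The joint-sparsity and total-variation variants described in the abstract modify the regularizer and the thresholding step; the random operator and sparse signal here are stand-ins, not a SAR forward model.

```python
# Minimal sketch of ISTA (iterative shrinkage-thresholding) for the generic
# l1-regularized least-squares problem  min_x 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

rng = np.random.default_rng(1)
m, n = 80, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)          # stand-in measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)   # sparse scene
b = A @ x_true + 0.01 * rng.standard_normal(m)        # noisy measurements

lam = 0.02
L = np.linalg.norm(A, 2) ** 2                          # Lipschitz constant of gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)                           # gradient of the data term
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold step

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```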
NASA Technical Reports Server (NTRS)
Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.
2011-01-01
Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
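For illustration, the sketch below implements Wald's classical sequential probability ratio test for two simple Gaussian hypotheses about a scalar measurement stream; in the approach described above the two hypothesis likelihoods would instead come from the bank of constrained epoch-state Kalman filters. All numerical values are assumed.

```python
# Minimal sketch of Wald's sequential probability ratio test for two simple
# Gaussian hypotheses about a scalar "miss distance" measurement stream.
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 0.01, 0.01                 # false alarm / missed detection risks
A_thr = np.log((1 - beta) / alpha)       # accept H1 (inside hard-body radius)
B_thr = np.log(beta / (1 - alpha))       # accept H0 (safely outside)

mu0, mu1, sigma = 5.0, 1.0, 1.5          # km, illustrative only
true_mu = 1.0                            # simulate the close-approach case

llr = 0.0
for k in range(1, 1000):
    y = rng.normal(true_mu, sigma)
    # log-likelihood ratio increment log[p1(y)/p0(y)] for Gaussians of equal variance
    llr += (-(y - mu1) ** 2 + (y - mu0) ** 2) / (2 * sigma ** 2)
    if llr >= A_thr:
        print(f"decide H1 (maneuver) after {k} measurements"); break
    if llr <= B_thr:
        print(f"decide H0 (no maneuver) after {k} measurements"); break
```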
Spatial-dependence recurrence sample entropy
NASA Astrophysics Data System (ADS)
Pham, Tuan D.; Yan, Hong
2018-03-01
Measuring complexity in terms of the predictability of time series is a major area of research in science and engineering, and its applications are spreading throughout many scientific disciplines, where the analysis of physiological signals is perhaps the most widely reported in the literature. Sample entropy is a popular measure for quantifying signal irregularity. However, the sample entropy does not take sequential information, which is inherently useful, into its calculation of sample similarity. Here, we develop a method that is based on the mathematical principle of the sample entropy and enables the capture of sequential information of a time series in the context of spatial dependence provided by the binary-level co-occurrence matrix of a recurrence plot. Experimental results on time-series data of the Lorenz system, physiological signals of gait maturation in healthy children, and gait dynamics in Huntington's disease show the potential of the proposed method.
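The following sketch computes standard sample entropy SampEn(m, r), the quantity the proposed spatial-dependence recurrence method builds on; the recurrence-plot co-occurrence extension itself is not reproduced here. Parameters m = 2 and r = 0.2·SD are the usual defaults, assumed for illustration.

```python
# Minimal sketch of standard sample entropy SampEn(m, r): the negative log of
# the conditional probability that sequences matching for m points (within
# tolerance r, Chebyshev distance, self-matches excluded) also match for m+1.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(d <= r) - 1                           # exclude self-match
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(3)
print("white noise :", sample_entropy(rng.standard_normal(1000)))   # irregular -> high
print("sine wave   :", sample_entropy(np.sin(np.arange(1000) / 5)))  # regular -> low
```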
Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2013-01-01
A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
ERIC Educational Resources Information Center
Suizzo, Marie-Anne; Jackson, Karen Moran; Pahlke, Erin; McClain, Shannon; Marroquin, Yesenia; Blondeau, Lauren A.; Hong, KyongJoo
2016-01-01
In this mixed-methods study, we used an explanatory sequential design to investigate the processes through which parental involvement influences adolescents' achievement motivation. One hundred twenty low-income urban parents and their sixth-grade adolescents completed questionnaires, and a subsample of 11 mothers and 11 adolescents were…
A Mixed Methods Examination of the Influence of Dimensions of Support on Training Transfer
ERIC Educational Resources Information Center
Schindler, Laura A.; Burkholder, Gary J.
2016-01-01
The purpose of this mixed methods sequential explanatory study was to explore how specific dimensions of supervisor support (mentoring, coaching, social support, and task support) influence the transfer of learned knowledge and skills to the job. Quantitative data were collected from employees (N = 48) who develop curriculum at an educational…
ERIC Educational Resources Information Center
Thomson, Margareta Maria
2013-01-01
This study explored the U.S. prospective teachers' motivations for teaching, teaching goal development, and views of their commitment to teaching. A sequential explanatory mixed-methods design was employed. Participants (N = 61) completed a survey in which they rated the importance of various factors in their teaching career choice. Furthermore,…
ERIC Educational Resources Information Center
Howard, Keith E.; Curwen, Margie Sauceda; Howard, Nicol R.; Colón-Muñiz, Anaida
2015-01-01
The researchers examined the online social networking attitudes of underperforming Latino high school students in an alternative education program that uses technology as the prime venue for learning. A sequential explanatory mixed methods study was used to cross-check multiple sources of data explaining students' levels of comfort with utilizing…
ERIC Educational Resources Information Center
Power, Anne L.
2013-01-01
The purpose of this explanatory sequential mixed methods study is to explore faculty and administrator perspectives of faculty merit pay compensation systems in private, higher education institutions. The study focuses on 10 small, private, four-year institutions which are religiously affiliated. All institutions are located in Nebraska, Iowa, and…
Methods for Selecting Phage Display Antibody Libraries.
Jara-Acevedo, Ricardo; Diez, Paula; Gonzalez-Gonzalez, Maria; Degano, Rosa Maria; Ibarrola, Nieves; Gongora, Rafael; Orfao, Alberto; Fuentes, Manuel
2016-01-01
The selection process aims at the sequential enrichment of a phage display antibody library in clones that recognize the target of interest (antigen) as the library undergoes successive rounds of selection. In this review, the selection methods most commonly used for phage display antibody libraries are comprehensively described. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
ERIC Educational Resources Information Center
Jeong, Allan
2005-01-01
This paper proposes a set of methods and a framework for evaluating, modeling, and predicting group interactions in computer-mediated communication. The method of sequential analysis is described along with specific software tools and techniques to facilitate the analysis of message-response sequences. In addition, the Dialogic Theory and its…
ERIC Educational Resources Information Center
Alsuwaileh, Bader Ghannam; Russ-Eft, Darlene F.; Alshurai, Saad R.
2016-01-01
The research herein used a sequential mixed methods design to investigate why academic dishonesty is widespread among students at the College of Basic Education in Kuwait. Qualitative interviews were conducted to generate research hypotheses. Then, using a questionnaire survey, the research hypotheses were quantitatively tested. The findings…
ERIC Educational Resources Information Center
Greenfield, Renée A.
2016-01-01
This study followed a mixed-methods sequential explanatory design. Phase I involved the collection of quantitative data to examine inservice teachers' (N = 69) attitudes about language and linguistic diversity as well as their teacher education coursework. All participants were graduates from the same teacher education program. Phase II included…
ERIC Educational Resources Information Center
Quarmby, T.; Dagkas, S.; Bridge, M.
2011-01-01
This mixed method paper explored the effect of family structure on children's physical activities and sedentary pursuits. It furthers the limited understanding of how family structure impacts on children's time in, and reasons behind engaging in, certain physical activities. Children from three inner city comprehensive schools in the Midlands,…
ERIC Educational Resources Information Center
Barak, Miri
2017-01-01
Changes in our global world have shifted the skill demands from acquisition of structured knowledge to mastery of skills, often referred to as twenty-first century competencies. Given these changes, a sequential explanatory mixed methods study was undertaken to (a) examine predominant instructional methods and technologies used by teacher…
ERIC Educational Resources Information Center
Griffin, Kimberly A.; Bennett, Jessica C.; Harris, Jessica
2011-01-01
In this article, the authors demonstrate how researchers can integrate qualitative and quantitative methods to gain a deeper understanding of the prevalence and nature of cultural taxation among black professors. In doing so, they show how the impact of cultural taxation on the experiences of black faculty in the academy is best captured using…
The Effect of English Language Learning on Creative Thinking Skills: A Mixed Methods Case Study
ERIC Educational Resources Information Center
Sehic, Sandro
2017-01-01
The purpose of this sequential explanatory mixed-methods case study was to investigate the effects of English language learning on creative thinking skills in the domains of fluency, flexibility, originality, and elaboration as measured with the Alternate Uses Test. Unlike the previous research studies that investigated the links between English…
ERIC Educational Resources Information Center
Azano, Amy; Missett, Tracy C.; Callahan, Carolyn M.; Oh, Sarah; Brunner, Marguerite; Foster, Lisa H.; Moon, Tonya R.
2011-01-01
This study used sequential mixed-methods analyses to investigate the effectiveness of a research-based language arts curriculum for gifted third graders. Using analytic induction, researchers found that teachers' beliefs and expectations (time, sense of autonomy, expectations for students, professional expertise) influenced the degree to which…
Teachers' Adoptation Level of Student Centered Education Approach
ERIC Educational Resources Information Center
Arseven, Zeynep; Sahin, Seyma; Kiliç, Abdurrahman
2016-01-01
The aim of this study is to identify how far the student-centered education approach is applied in the primary, middle and high schools in Düzce. An explanatory design, which is one type of mixed research method, and "sequential mixed methods sampling" were used in the study. 685 teachers constitute the research sample of the quantitative…
ERIC Educational Resources Information Center
Smart, Julie B.
2014-01-01
This mixed-methods study examined the relationship between middle level science students' perceptions of teacher-student interactions and students' science motivation, particularly their efficacy, value, and goal orientation for learning science. In this sequential explanatory design, quantitative and qualitative data were collected in two phases,…
Resistance of various shiga toxin-producing Escherichia coli to electrolyzed oxidizing water
USDA-ARS?s Scientific Manuscript database
The resistance of thirty-two strains of Escherichia coli O157:H7 and six major serotypes of non-O157 Shiga toxin-producing E. coli (STEC), plus E. coli O104, was tested against electrolyzed oxidizing (EO) water using two different methods: the modified AOAC 955.16 sequential inoculation method and minim...
Schulz, Daniela N; Schneider, Francine; de Vries, Hein; van Osch, Liesbeth A D M; van Nierop, Peter W M; Kremers, Stef P J
2012-03-08
Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice about one or more lifestyle behaviors was provided. Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more non-completers in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P < .001; CI = 1.02-1.05; simultaneous condition: OR = 1.04; P < .001; CI = 1.02-1.05) and an unhealthy lifestyle (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P < .001; CI = 0.42-0.58). In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout. When respondents failed to adhere to at least 2 of the guidelines, those receiving the simultaneous intervention were more inclined to drop out than were those receiving the sequential intervention. Possible reasons for the higher dropout rate in our simultaneous intervention may be the amount of time required and information overload. Strategies to optimize program completion as well as continued use of computer-tailored interventions should be studied. Dutch Trial Register NTR2168.
Lichtenhan, J T; Hartsock, J; Dornhoffer, J R; Donovan, K M; Salt, A N
2016-11-01
Administering pharmaceuticals to the scala tympani of the inner ear is a common approach to study cochlear physiology and mechanics. We present here a novel method for in vivo drug delivery in a controlled manner to sealed ears. Injections of ototoxic solutions were applied from a pipette sealed into a fenestra in the cochlear apex, progressively driving solutions along the length of the scala tympani toward the cochlear aqueduct at the base. Drugs can be delivered rapidly or slowly. In this report we focus on slow delivery, in which the injection rate is automatically adjusted to account for the varying cross-sectional area of the scala tympani, thereby driving the solution front at a uniform rate. Objective measurements originating from finely spaced, low- to high-characteristic-frequency cochlear places were sequentially affected. Comparison with existing method(s): Controlled administration of pharmaceuticals into the cochlear apex overcomes a number of serious limitations of previously established methods, such as cochlear perfusion with an injection pipette in the cochlear base: the drug concentration achieved is more precisely controlled, drug concentrations remain in the scala tympani and are not rapidly washed out by cerebrospinal fluid flow, and the entire length of the cochlear spiral can be treated quickly or slowly with time. Controlled administration of solutions into the cochlear apex can be a powerful approach for sequentially affecting objective measurements originating from finely spaced cochlear regions and allows, for the first time, the spatial origin of CAPs to be objectively defined. Copyright © 2016 Elsevier B.V. All rights reserved.
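The rate-adjustment idea lends itself to a short worked sketch: to advance a solution front at a uniform velocity v along a duct whose cross-sectional area A(x) varies, the pump rate must follow Q(t) = A(x(t))·v with x(t) = v·t. The exponential taper below is a hypothetical stand-in for the real scala tympani area profile, and all numbers are illustrative.

```python
# Minimal sketch of adjusting the injection rate to drive a solution front at a
# uniform velocity along a duct of varying cross-sectional area.
import numpy as np

v = 0.5e-3                 # desired front velocity, mm/s (illustrative)
length = 8.0               # duct length from apex to base, mm (illustrative)

def area(x_mm):
    # hypothetical taper: ~0.2 mm^2 near the apex, widening toward the base
    return 0.2 * np.exp(0.15 * x_mm)   # mm^2

t = np.arange(0.0, length / v, 60.0)   # one pump set-point per minute, in seconds
x_front = v * t                        # front position measured from the apex, mm
q = area(x_front) * v * 60.0           # required pump rate, uL/min (1 mm^3 = 1 uL)

for ti, qi in zip(t[:5], q[:5]):
    print(f"t={ti/60:4.0f} min  front={v*ti:4.2f} mm  Q={qi:6.3f} uL/min")
# rectangle-rule total: each 1-min set-point delivers q uL
print("total injected volume ~", round(float(q.sum()), 1), "uL")
```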