Today’s Pre-control and Statistical Process Control (SPC)

Pre-control and Statistical Process Control (SPC) are key tools for monitoring and controlling the processes that go into making a product.

SPC is a key ingredient of Quality. It is an important step in reducing nonconformities and defects in any manufacturing process. SPC charts help to detect assignable causes of a process change in a timely fashion. SPC helps to identify root causes and take corrective actions before the product has reached a stage at which carrying out changes is no longer practicable or useful.

Examples of process changes that SPC helps to detect include trends, shifts and variation. Three items are needed for SPC to meet its goal:

o  A measurement system that works in real time

o  Tolerances that are practical and are tied to customer requirements and satisfaction

o  An indicator that comes with a defined, anticipated response.

All these help SPC to determine whether a process is stable and requires no adjustment, is deviating but still capable, or is incapable of performing its function altogether.
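As an illustration of the kind of signal an SPC chart looks for, here is a minimal sketch of computing control limits for an individuals chart in Python; the measurement values are hypothetical and not taken from the webinar.

```python
# A minimal sketch: control limits for an individuals (X) chart.
# The measurement values below are hypothetical.
import numpy as np

measurements = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 9.7, 10.0, 10.1])

center = measurements.mean()
moving_ranges = np.abs(np.diff(measurements))
mr_bar = moving_ranges.mean()

# 2.66 = 3 / d2 (d2 = 1.128 for a moving range of two observations)
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

out_of_control = (measurements > ucl) | (measurements < lcl)
print(f"CL={center:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
print("Points signalling a possible assignable cause:", measurements[out_of_control])
```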

And now, pre-control

Pre-control, on the other hand, inspects individual units and adjusts the process and the succeeding sampling procedure based on where the measurements fall in relation to the specification limits. The focus of pre-control is the individual measurement.

It uses a set of probabilities, based on an assumed distribution and the location of the process, to estimate when a process adjustment is justified. Because pre-control decisions are based simply on the zone in which a measurement falls, it obviates the need for charting and is responsive to process signals right from the start.
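As a rough illustration of how pre-control acts on where individual measurements fall relative to the specification limits, here is a minimal sketch of the classic two-piece pre-control decision rules; the specification limits and sample values are hypothetical.

```python
# A minimal sketch of classic pre-control decision rules.
# LSL/USL and the sampled values are hypothetical.
LSL, USL = 9.0, 11.0
tolerance = USL - LSL
pc_low = LSL + tolerance / 4    # lower pre-control line
pc_high = USL - tolerance / 4   # upper pre-control line

def zone(x):
    """Classify a measurement into the green, yellow or red zone."""
    if x < LSL or x > USL:
        return "red"
    if pc_low <= x <= pc_high:
        return "green"
    return "yellow-low" if x < pc_low else "yellow-high"

def decision(first, second):
    """Decide what to do after sampling two consecutive pieces."""
    z1, z2 = zone(first), zone(second)
    if "red" in (z1, z2):
        return "stop and adjust"
    if z1 == "green" or z2 == "green":
        return "continue running"
    if z1 == z2:                        # two yellows on the same side
        return "adjust the process"
    return "stop: variation too large"  # two yellows on opposite sides

print(decision(10.1, 10.6))   # -> continue running
print(decision(9.3, 9.4))     # -> adjust the process
```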

SPC or pre-control?

There are arguments for and against the use of SPC and pre-control as an effective means of ensuring that the process is right and that it results in the desired quality for the product.

At a webinar that is being organized by Compliance4All, a leading provider of professional trainings for all the areas of regulatory compliance, the speaker, Jd Marhevko, who has been involved in Operations and Quality/Lean/Six Sigma efforts across a variety of industries for more than 25 years, will explain all the aspects of pre-control. To hear her perspective of pre-control, please register for this webinar by visiting http://www.compliance4all.com/control/w_product/~product_id=501074?Linkedin-SEO

What makes this webinar special is that it has consistently ranked in the top 1-5% at previous conferences at more than five venues. It was featured in ASQ QMD’s special edition of the Quality Management Forum’s 2015 Spring edition (ASQ-QM.org). A webinar of this topic was provided in 2015 via the ASQ QMD Linkage Technical Committee to over 1300 respondents through the IMA and ASQ QMD.

Tools needed for pre-control

At this hour-long session, Jd will explain all the elements of pre-control in Quality. She will show participants how to draft and create a pre-control chart. She will then run through a process to model the next steps and decisions.

The aim of this session is to equip participants with the knowledge needed for reducing the complexity of the system and bringing about an improvement in the effectiveness and efficiency of their Quality Management Systems.

A session packed with interaction and practical application of principles

A major component of this webinar on SPC and pre-control is that Jd will share the results of case studies. The knowledge gained at this webinar can be applied immediately at participants’ work in respect of the following:

–       Measurement System Analysis (MSA): Jd will give a high-level overview of MSA. This will help participants grasp the need for putting an effective measurement system in place ahead of implementing pre-control

–       Cpk Overview: To help participants establish baseline capability in advance of implementing pre-control, a high-level Cpk overview will be given

–       Normal Distribution: The way in which the cumulative distribution function of the normal distribution is used to estimate and establish the zones on a pre-control chart (see the sketch after this list)

–       Pre-Control Chart: Jd will show participants how to apply the concepts listed above with the use of a mock pre-control chart where the process will be demonstrated based on the “go/no go” zones that are established.
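To connect the Cpk, normal-distribution and pre-control-chart items above, here is a minimal sketch that computes a baseline Cpk and the probability of a part landing in each pre-control zone, assuming a normally distributed process; the specification limits, mean and standard deviation are hypothetical.

```python
# A minimal sketch: baseline Cpk and zone probabilities for a pre-control chart,
# assuming a normal process with hypothetical parameters.
from scipy.stats import norm

LSL, USL = 9.0, 11.0
mean, sigma = 10.05, 0.25          # assumed process location and spread

# Baseline capability before starting pre-control
cpk = min(USL - mean, mean - LSL) / (3 * sigma)

# Pre-control lines bound the green zone: the middle half of the tolerance
pc_low = LSL + (USL - LSL) / 4
pc_high = USL - (USL - LSL) / 4

# Probability of a single part landing in each zone, from the normal CDF
p_green = norm.cdf(pc_high, mean, sigma) - norm.cdf(pc_low, mean, sigma)
p_red = norm.cdf(LSL, mean, sigma) + (1 - norm.cdf(USL, mean, sigma))
p_yellow = 1 - p_green - p_red

print(f"Cpk = {cpk:.2f}")
print(f"P(green) = {p_green:.3f}, P(yellow) = {p_yellow:.3f}, P(red) = {p_red:.4f}")
```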

At this webinar, Jd will cover the following areas of pre-control:

o  Reduce process complexity and minimize risk

o  Increase the effectiveness of a Core Tool

o  Increase personnel compliance in proactive process management.

https://www.isixsigma.com/tools-templates/control-charts/using-control-charts-or-pre-control-charts/

http://www.symphonytech.com/articles/pdfs/precontrol.pdf

http://www.winspc.com/what-is-spc/ask-the-expert/400-pre-control-no-substitute-for-statistical-process-control

http://www.qualitymag.com/articles/86794-pre-control-may-be-the-solution

Understanding and handling payment issues

A financial organization, or an organization involved in any business for that matter, faces the prospect of receiving duplicate, fraudulent or late payments. These are the typical payment issues an organization is likely to face at some point in its business.

Payment issues are something almost no organization is likely to be free from. Duplicate invoice payments, just one of the payment issues an organization is likely to face, account for losses of something like $100 million over a three-year period just for medium-sized organizations. The amount is likely to be several times higher for large companies and those in the public sector, which most likely deal with billions of dollars in transactions.

Payment issues have their implications

The consequences of payment issues, be they duplicate, fraudulent or late payments, are enormous. They are likely to lead to business losses, because late payments, for instance, hinder investment into other productive activities by businesses. Although there are a number of points at which payment issues can arise, it usually takes an organization quite a while to detect a payment issue, be it a duplicate, fraudulent or late payment. It also takes herculean effort at times to get to the bottom of payment issues.

Payment issues can happen due to a number of reasons

There are any number of reasons for which payment issues can arise. Manual data entry and processing, characters overlooked while entering Accounts Payable (AP) data or by Automated Clearing House (ACH) processors, oversights in manual checks, and overlapping or duplication of payments made from varied sources are just some of the reasons payment issues occur in businesses.
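As an illustration of how some of these duplicates could be caught programmatically, here is a minimal sketch that flags candidate duplicate invoice payments; the column names and records are hypothetical and not drawn from the webinar.

```python
# A minimal sketch: flag potential duplicate invoice payments.
# The DataFrame fields (vendor, invoice_no, amount, pay_date) are hypothetical.
import pandas as pd

payments = pd.DataFrame({
    "vendor":     ["Acme", "Acme", "Globex", "Acme"],
    "invoice_no": ["INV-100", "INV-100", "INV-201", "INV-100 "],  # note trailing space
    "amount":     [1200.00, 1200.00, 560.50, 1200.00],
    "pay_date":   pd.to_datetime(["2023-03-01", "2023-03-15", "2023-03-02", "2023-03-20"]),
})

# Normalise keys that manual entry commonly garbles before matching
payments["invoice_key"] = payments["invoice_no"].str.strip().str.upper()

# Any vendor/invoice/amount combination seen more than once is a candidate duplicate
dupes = payments[payments.duplicated(subset=["vendor", "invoice_key", "amount"], keep=False)]
print(dupes.sort_values(["vendor", "invoice_key", "pay_date"]))
```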

Although the Sarbanes-Oxley (SOX) Act has put a number of checks and balances into the payment aspect of corporations, there are still a good number of loopholes that need to be plugged if payment issues are to be addressed. How do organizations, especially those in finance, mitigate payment issues? What steps do they need to take to understand the regulations set out by the SOX Act, or to take their own measures to prevent payment issues arising from duplicate, fraudulent or late payments?

Learn the aspects of payment issues at a learning session

All these will be addressed at a very valuable learning session on this topic. The webinar, being organized by Compliance4All, a leading provider of professional trainings for all the areas of regulatory compliance, will have Ray Graber as speaker.

Ray is a senior BFSI professional who brings a deep and thorough understanding of banking, technology, and finance. To hear from him on how to understand and address payment issues such as duplicate, fraudulent or late payments, just register for this webinar by logging on to http://www.compliance4all.com/control/w_product/~product_id=501138?Wordpress-SEO

Insights for understanding payment issues

At this webinar, Ray will help participants understand how to foresee payments issues and strategize solutions. He will offer suggestions about how to put risk management plans in place to do this. The suggestions Ray will offer at this webinar will help participants from banks and corporations to get a clearer understanding of each other’s concerns and constraints, and ways of addressing them.

This session will arm them with the tools necessary for accurately auditing their existing processes and limiting the potential for fraud. He will teach them how to understand the settlement process, which is part of the banking business. In other words, attending this session will equip participants with the insight needed for understanding payment issues and tackling them in relation to duplicate, fraudulent or late payments.

At this session, Ray will cover the following areas:

o  Payment System Risk Policy

o  FFIEC Action Summary for Retail Payments

o  Areas of Risk

o  Risk Assessment Activities

o  People, Processes, and Products

o  Is there an optimal organizational structure for managing payments strategy?

o  Are there best practices that apply to my institution?

o  What are the hurdles in establishing an organization focused on the payments business?

o  Are there common pitfalls?

http://www.infor.com/content/whitepapers/detecting-prev-dup-invoice.pdf/

Understanding normality tests and normality transformations

Many statistical tests and methods require that the input data be “normally distributed”. Typical examples of methods that include such calculations are Student’s t-tests, ANOVA tables, F-tests, Normal Tolerance limits, and Process Capability Indices.

A core criterion for ensuring correct results is that the raw data used in such calculations be “normally distributed”. It is in view of this fact that assessing whether or not data is “normally distributed” is a critical component of assuring the FDA that a company’s “valid statistical techniques” are “suitable for their intended use”.

Now, which types of data are normally distributed? Dimensional data, such as length, height and width, are typically considered normally distributed. Which kinds of data are usually non-normal? Typical examples of non-normal data are burst pressure, tensile strength, and time or cycles to failure. Some non-normal data can be transformed or converted into normality, so that statistical calculations run on the transformed data remain valid.
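As a minimal sketch of that idea, the following example tests a hypothetical right-skewed data set for normality and then applies a Box-Cox transformation (one common choice) before re-testing; the data are simulated, and the specific test and transform are assumptions rather than the methods the webinar will prescribe.

```python
# A minimal sketch: test for normality, transform, and re-test.
# The simulated "time to failure" values are hypothetical raw data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
time_to_failure = rng.weibull(1.5, size=60) * 1000   # right-skewed, non-normal

# Shapiro-Wilk: a small p-value suggests the data are not normally distributed
stat_raw, p_raw = stats.shapiro(time_to_failure)

# Box-Cox finds a power transform (lambda) that makes the data more normal
transformed, lam = stats.boxcox(time_to_failure)
stat_tr, p_tr = stats.shapiro(transformed)

print(f"raw data:      W={stat_raw:.3f}, p={p_raw:.4f}")
print(f"after Box-Cox: W={stat_tr:.3f}, p={p_tr:.4f} (lambda={lam:.2f})")
```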

 

Get an understanding of normally distributed data and normality transformations

Statistics professionals can get a thorough understanding of what it means for data to be normally distributed, and of how to transform non-normal data into normality, at a webinar that is being organized by Compliance4All, a leading provider of professional trainings for the areas of regulatory compliance. At this webinar, John N. Zorich, a senior consultant for the medical device manufacturing industry, will be the Speaker. To register, please visit

http://www.compliance4all.com/control/w_product/~product_id=501103?Wordpress-SEO

John will take the participants of this webinar through all the aspects of “normally distributed” data and how to transform non-normal data. He will explain the various elements of data, such as:

o  What it means to be “normally distributed”

o  How to assess normality

o  How to test for normality

o  How to transform non-normal data into normal data

o  How to justify the transformations to internal and external quality system auditors.

Normality tests and normality transformations are a combination of graphical and numerical methods that have been in use for many years. These methods are essential whenever one has to apply a statistical test or method whose fundamental assumption is that the input data are normally distributed. John will offer an explanation of these methods.

What goes into normality testing and normality transformation?

Normality “testing” involves creating a “normal probability plot” and calculating simple statistics for comparison to critical values in published tables. A normality “transformation”, by contrast, involves making simple changes to each of the raw-data values so that the resulting values are more normally distributed than the original raw data.
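Here is a minimal sketch of the normal probability plot part of that testing, using simulated data; SciPy’s probplot is just one common way to produce such a plot and is an assumption here, not necessarily the tool John will use.

```python
# A minimal sketch: a normal probability plot of hypothetical measurement data.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=50.0, scale=2.0, size=40)   # hypothetical dimensional data

# probplot orders the data and plots it against theoretical normal quantiles;
# roughly straight points (r close to 1) support the normality assumption
(osm, osr), (slope, intercept, r) = stats.probplot(data, dist="norm", plot=plt)
print(f"correlation of the points with the fitted line: r = {r:.4f}")
plt.title("Normal probability plot")
plt.show()
```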

Professionals whose work involves normality testing, such as QA/QC Supervisors, Process Engineers, Manufacturing Engineers, QA/QC Technicians, Manufacturing Technicians, and R&D Engineers, will find this session very helpful. John will offer guidance on both the objective and the somewhat subjective decision-making that goes into evaluating the results of “tests” and “transformations”.

John will cover the following areas at this webinar:

o  Regulatory requirements

o  Binomial distribution

o  Historical origin of the Normal distribution

o  Normal distribution formula, histogram, and curve

o  Validity of Normality transformations

o  Necessity for transformation to Normality

o  How to use Normality transformations

o  Normal Probability Plot

o  How to evaluate Normality of raw data and transformed data

o  Significance tests for Normality

o  Evaluating the results of a Normality test

o  Recommendations for implementation

o  Recommended reference textbooks.

International Financial Reporting Standards (IFRS) 6

The International Financial Reporting Standards (IFRS) are a set of standards pertaining to different industries and their activities and practices. IFRS 6 provides guidance on the accounting practices of the extractive industries, such as oil, mining and gas. The IFRS 6 accounting standard states the requirements, as well as the disclosures, that apply to accounting for the exploration and evaluation expenditures a company incurs.

Until the enactment of IFRS 6, regulations on the accounting practices of the extractive industries were fragmented and piecemeal. The major change IFRS 6 brought about is that it consolidated these practices. Also, with the passage of IFRS 6, entities that were using accounting practices for exploration and evaluation assets that predated the standard could integrate these earlier practices with the provisions of IFRS 6.

Core accounting requirements

One of its core requirements is the issuance of IFRS-compliant financial statements by companies that hold assets used for the exploration and evaluation of mineral resources.

So, it is imperative for accounting professionals to have full knowledge of IFRS 6. Working successfully in the oil, mining or gas areas of companies that hold assets used for the exploration and evaluation of mineral resources entails complying with the requirements set out by IFRS 6.

A proper understanding of the IFRS 6

A learning session that is being organized by Compliance4All, a leading provider of professional trainings for the areas of regulatory compliance, will offer the learning needed for getting trained on how to comply with the requirements set out in IFRS 6.

At this webinar, Mike Morley, a Certified Public Accountant and business author who organizes various training programs, such as IFRS, SOX, and Financial Statement Analysis, that focus on providing continuing education opportunities for finance and accounting professionals, will be the speaker.

Professionals who work in the oil, mining or gas areas in companies that have assets used for the exploration and evaluation of mineral resources can gain insights into what IFRS 6 means for them by enrolling for this webinar. To register, please visit

http://www.compliance4all.com/control/w_product/~product_id=501194?Wordpress-SEO

Familiarization with all the aspects of the IFRS 6

The aim of this presentation is to help oil, mining and gas professionals become knowledgeable about the latest information on IFRS 6. The speaker will familiarize participants with the unique accounting and reporting issues, particularly with regard to the evaluation of assets, revenues and expenditures, that professionals in the extractive industries face in the search for mineral resources, including oil, gas, minerals, and similar exhaustible resources.

Accounting professionals who work in these industries, and who need in-depth understanding of the way the IFRS 6 is structured, and the ways in which they need to apply the standards in the right manner, such as Auditors, Accountants, Financial Managers, Financial Controllers, Company Executives, and anyone involved in the SOX compliance process, will benefit immensely from this webinar on the accounting practices set out by IFRS 6.

At this session on the IFRS 6, the speaker will cover the following areas:

o  Why the accounting for this sector is different

o  How resource assets are evaluated

o  Special rules for measuring revenues and expenditures

o  How revaluation rules apply to the Oil, Gas, and Mining industries

o  Other specific requirements of IFRS 6

o  Required disclosures.

http://www.accaglobal.com/in/en/student/exam-support-resources/dipifr-study-resources/technical-articles/ifrs6.html

http://www.icaew.com/en/library/subject-gateways/accounting-standards/ifrs/ifrs-6

https://www.iasplus.com/en/standards/ifrs/ifrs6

Analyzing financial statements provides indispensable insights for managers

Financial statements are the ultimate indicator of a company’s financial health. Number crunching is a very important exercise that executives at all levels of an organization need to be familiar with. Yet, given the heavy jargon that goes into financial statements and the complexity most of them have, many managers feel put off and don’t generally like to pore over financial statements.

The company’s financial statements are intended to provide insights into the most important aspect of the business – the financial one – to managers and executives at all levels and in all disciplines. Marketing, finance, HR, customer service, and sales all need financial statements to get a grasp of, and gain perspective on, the financial health of the organization.

Financial statements are critical for helping understand the business

Despite financial statements being the surest indicator of the most important aspect of any business organization – finance – most managers lack the perceptiveness needed to understand and analyze the meaning of the numbers. It is often the case that they devote so much time to running their business that the priority that needs to be accorded to understanding financial statements gets buried and takes a backseat.

A perceptive analysis of financial statements is the foundation to getting the business in order. Wading through the numbers helps the organization to dig into the market trends, understand where they are getting it right or wrong, and then use financial statements to draw proper conclusions and take appropriate action. It is important to understand financial statements for another critical reason: The competition should not understand our financial statements faster and better than we do!

Trend and ratio analysis of financial statements

But how does one make sense of heaps and heaps of seemingly unintelligible numbers? Numbers in themselves, without the necessary nous to decipher them, make little sense to any executive. A few techniques do exist to help understand the meaning of numbers. An effective model for assessing the financial condition and results of operations of any business is that of using trend and ratio analysis. Getting a grasp of this model will empower financial and other executive teams to derive the maximum benefit that accrues from a crystal clear understanding of financial statements.
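As a minimal sketch of trend and ratio analysis, the following example computes a liquidity ratio, a leverage ratio and a simple horizontal (year-over-year) analysis; the line items and figures are hypothetical, not taken from any real annual report or from the webinar’s model.

```python
# A minimal sketch: key ratios and horizontal analysis on hypothetical figures.
import pandas as pd

financials = pd.DataFrame(
    {"2022": [500_000, 220_000, 180_000, 950_000, 400_000],
     "2023": [560_000, 240_000, 210_000, 1_010_000, 430_000]},
    index=["current_assets", "current_liabilities", "total_debt",
           "total_assets", "equity"],
)

# Key ratios for liquidity and leverage
current_ratio = financials.loc["current_assets"] / financials.loc["current_liabilities"]
debt_to_equity = financials.loc["total_debt"] / financials.loc["equity"]

# Horizontal analysis: year-over-year percentage change of each line item
horizontal = (financials["2023"] - financials["2022"]) / financials["2022"] * 100

print("Current ratio by year:\n", current_ratio.round(2))
print("Debt-to-equity by year:\n", debt_to_equity.round(2))
print("Horizontal analysis (% change):\n", horizontal.round(1))
```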

Imparting this understanding is the intent of a webinar that is being organized by Compliance4All, a leading provider of professional trainings for all areas of regulatory compliance. Miles Hutchinson, an experienced CGMA and business adviser, will be the speaker at this session.

In easily comprehensible terms, he will explain how participants can imbibe the sagacity needed to quickly and thoroughly analyze the financial condition and results of operations of any publicly traded company. All that is needed to gain this highly useful understanding of financial statements is to register for this webinar by logging on to

http://www.compliance4all.com/control/w_product/~product_id=501197LIVE?wordpress-SEO

Attending this highly useful session on financial statements gives Financial Executives, HR Managers, Accounting Managers, Department Managers, and Business Unit Managers the ability to discern the numbers and understand where those numbers are leading the organization.

These are the areas this webinar on financial statements will cover:

o  Review the components of the annual report of a prominent publicly traded company and learn how to use this wealth of information

o  Use the annual report to perform a fundamental financial analysis

o  Learn the various types of financial analysis and their purpose

o  Learn the key ratios to evaluate a company’s liquidity, leverage and operating performance

o  Identify the key benchmarks to help determine whether a company’s ratios are in line with competitors

o  Understand horizontal and vertical analysis and how they can be used to identify key trends

o  Bonus: receive our advanced Excel-hosted financial model complete with all ratios, horizontal and vertical analysis

o  Use our model to perform financial analysis on other company financial statements, including yours

o  Receive benchmark information to use in determining the quality of your analyses

o  Learn about resources available to perform comparative studies between companies in the same economic sector – even private companies.

The Attribute Agreement Analysis

Humans can be calibrated, although most people like to think otherwise. The commonly used method, Attribute Agreement Analysis, or what is called AAA, is a handy tool for helping to do this. At its barest, Attribute Agreement Analysis is a method in which the level of agreement or conformance between the appraisal made by the appraiser(s) and the standard is assessed. Then, the elements used for the appraisal that have the highest levels of disagreement with the standard are identified.

The two methods of the Attribute Agreement Analysis

The Attribute Agreement Analysis uses two primary methods of assessing the agreement of the attribute with the standard:

–       The percentage or extent to which the appraisals agree with the standard

–       Kappa statistics, which adjust the observed agreement between the appraisals and the standard for the percentage of agreement that would be expected to happen by chance (a minimal sketch of both measures follows this list)
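A minimal sketch of both measures, computed on hypothetical pass/fail appraisals against a standard, might look like the following; the ratings and categories are invented for illustration.

```python
# A minimal sketch: percent agreement with the standard and Cohen's kappa.
# The appraisal data below are hypothetical.
from collections import Counter

standard  = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"]
appraiser = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]

n = len(standard)
observed = sum(s == a for s, a in zip(standard, appraiser)) / n   # percent agreement

# Chance agreement: probability that both pick the same category independently
std_counts = Counter(standard)
app_counts = Counter(appraiser)
chance = sum((std_counts[c] / n) * (app_counts[c] / n)
             for c in set(standard) | set(appraiser))

kappa = (observed - chance) / (1 - chance)
print(f"Percent agreement = {observed:.2%}, Cohen's kappa = {kappa:.2f}")
```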

The three aspects of Attribute Agreement Analysis

Attribute Agreement Analysis has three aspects: agreement with oneself, agreement with a peer, and agreement with the standard. When calibrating humans, the use of Attribute Agreement Analysis calls for control plans to be put in place for “MSA” analysis on key processes. An AAA may be described as a “Measurement Systems Analysis” (MSA) for attributes.

The Attribute Agreement Analysis method is useful to auditing professionals, to whom it makes sense to understand the effectiveness of these methods when these are used by their clientele and/or in their own organization.

Gain learning of Attribute Agreement Analysis

The ways in which Attribute Agreement Analysis can be understood and used effectively will be the learning offered at a webinar being organized by Compliance4All, a provider of cost-effective regulatory compliance trainings for a wide range of regulated industries.

The speaker at this webinar is Jd Marhevko, Vice President of Quality and Lean for Accuride Corporation, who has been involved in Operations and Quality/Lean/Six Sigma efforts across a variety of industries for more than 25 years. To gain insights into the inner aspects of Attribute Agreement Analysis, please register for this webinar by logging on to

http://www.compliance4all.com/control/w_product/~product_id=501073?Wordpress-SEO

The “Statistical AAA” and the Kappa value

At this webinar, Jd will review both the “Statistical AAA” and the Kappa value, as well as the confidence levels for the result bands and incorporation of AAA into the Control Plan and frequency of calibration.

She will assess the pros and cons while discussing the general benefits: reductions in arguments (over what is good or not), internal/external rework, returns, premium freight, etc.

A number of uses from the Attribute Agreement Analysis method

This explanation will help participants understand ways by which they can apply this tool while learning how to bring down business costs. Jd will evaluate the benefits of human calibration by reviewing the three basic types of agreements.

Most importantly, this session will enable participants to learn the ways of developing, creating, executing and interpreting an Attribute Agreement Analysis so that accurate and repeatable dispositions can be made and rework and returns can be effectively reduced.

At this webinar on Attribute Agreement Analysis, which will be highly useful to professionals such as Quality and Engineering system practitioners, Directors, Engineers, Analysts and Managers, Jd will cover the following areas:

o  To help people understand how AAA can be effectively utilized for mitigating business loss

o  Increased understanding of how to actually perform the analysis

o  Build confidence in the ability to calibrate a human operator.

http://support.minitab.com/en-us/minitab/17/Assistant_Attribute_Agreement_Analysis.pdf

http://support.minitab.com/en-us/minitab/17/topic-library/quality-tools/measurement-system-analysis/attribute-agreement-analysis/what-is-an-attribute-agreement-analysis-also-called-attribute-gage-r-r-study/

Demonstrating Product Reliability

Product Reliability is among the most important attributes of a product, no matter what kind of product one is considering. Product Reliability can be defined as the likelihood or probability of the product performing its stated purpose, under the conditions to which it is going to be put, for a defined period of time.

The parameters used for quantification of Product Reliability are:

o  MTBF or Mean Time Between Failures for products that can be repaired and used, and

o  MTTF, or Mean Time To Failure, for products that cannot be repaired.

If these are the parameters used for quantifying Product Reliability, how does one predict it? Quality professionals use a relatable analogy to explain how Product Reliability behaves over a product’s life. This is known as the bathtub curve analogy, where the start or early life of the product is placed at the start of the bathtub. As the product starts getting used, it moves on to the phase of its useful life, from where it moves on to its wear-out time.

The time spent in each of these phases is an indication of Product Reliability. The bathtub curve suggests that at the beginning of the product lifecycle the failure rate is relatively high and falls quickly as early failures are weeded out, stays roughly constant during the product’s useful life, and rises again as the product wears out. This is the root of the understanding of predicting Product Reliability. Mathematical formulae are used to describe these phases.
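As a minimal sketch of the kind of mathematical description involved, the following example fits a Weibull model (a common choice for life data) to hypothetical time-to-failure values and derives an MTTF and a reliability estimate; it is an assumption for illustration, not the specific method the webinar will use.

```python
# A minimal sketch: Weibull fit to hypothetical time-to-failure data (hours).
import numpy as np
from scipy import stats
from scipy.special import gamma

failures = np.array([410., 760., 905., 1100., 1320., 1500., 1740., 2100.])

# Fit a two-parameter Weibull (location fixed at zero);
# shape < 1 suggests early-life failures, shape near 1 the useful-life phase,
# and shape > 1 wear-out, mirroring the bathtub curve phases
shape, loc, scale = stats.weibull_min.fit(failures, floc=0)

# MTTF of a Weibull distribution: scale * Gamma(1 + 1/shape)
mttf = scale * gamma(1 + 1 / shape)

# Reliability at 1,000 hours: probability of surviving beyond that time
r_1000 = stats.weibull_min.sf(1000, shape, loc, scale)

print(f"shape (beta) = {shape:.2f}, scale (eta) = {scale:.0f} h")
print(f"estimated MTTF = {mttf:.0f} h, R(1000 h) = {r_1000:.2f}")
```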

How does one get Product Reliability right?

The ways of choosing the right reliability parameters for failure rates of respective products and estimating their lifecycle in the light of failure rates will be the topic of a webinar that is being organized by Compliance4All, a leading provider of professional trainings for all the areas of regulatory compliance.

At this webinar, Steven Wachs, Principal Statistician at Integral Concepts, Inc. where he assists manufacturers in the application of statistical methods to reduce variation and improve quality and productivity, will be the speaker. To benefit from the experience Steven brings into Quality, please register for this webinar by logging on to

 

http://www.compliance4all.com/control/w_product/~product_id=501164?wordpress-SEO

A description of the various approaches to Product Reliability

Steven will offer and explain several approaches that can be used to verify whether reliability targets or specifications have been achieved at the desired level of confidence. In particular, he will describe the approaches using time-to-failure data to estimate reliability metrics.

Also taken up at this session, which will be of immense use to anyone with a vested interest in product quality and reliability, such as Product Engineers, Reliability Engineers, Design Engineers, Quality Engineers, Quality Assurance Managers, Project/Program Managers, and Manufacturing Personnel, are demonstration tests, in which a minimum reliability may be demonstrated with zero or few failures.
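As a minimal sketch of such a zero-failure (success-run) demonstration test, the following example uses the standard relation n = ln(1 - C) / ln(R) with hypothetical reliability and confidence targets.

```python
# A minimal sketch: sample size for a zero-failure reliability demonstration.
# The reliability and confidence targets are hypothetical.
import math

reliability_target = 0.95   # R: minimum reliability to demonstrate
confidence = 0.90           # C: desired confidence level

# Number of units that must all survive the test with zero failures
n = math.ceil(math.log(1 - confidence) / math.log(reliability_target))
print(f"Test {n} units with zero failures to demonstrate "
      f"R >= {reliability_target:.0%} at {confidence:.0%} confidence")
```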

A description of the methods that increase the risk of failures

Steven will discuss the kinds of practices that, when used, increase the risk of field failures, due either to inadequate designs or to misunderstanding of product use conditions, and how these risks need to be managed. He will also offer options for verifying and demonstrating that customer reliability requirements have been achieved.

Steven will cover the following areas at this webinar:

o  Overview of Reliability

o  Reliability Metrics and Specifications

o  Estimating Reliability with Time-to-Failure Data

o  Confidence Intervals and Bounds

o  Demonstrating Reliability with zero or few failures

o  Tradeoffs between Testing Time and Sample Size

o  Impact of Assumptions on Test Plans

o  Improving Demonstration Test Power

http://ftp.automationdirect.com/pub/Product%20Reliability%20and%20MTBF.pdf