10 Useful AI & ML Slides

True to the motto “A picture is worth a thousand words,” some useful slides with short explanations are shown below.

1. Evolution of Analytics

AISOMA – Evolution of Analytics

Analytics is the discovery, interpretation, and communication of meaningful patterns in data, and the process of applying those patterns to effective decision making. In other words, analytics can be understood as the connective tissue between data and effective decision making within an organization. Especially valuable in areas rich with recorded information, analytics relies on the simultaneous application of statistics, computer programming, and operations research to quantify performance.

Organizations may apply analytics to business data to describe, predict, and improve business performance. Specific areas within analytics include predictive analytics, prescriptive analytics, enterprise decision management, descriptive analytics, cognitive analytics, big data analytics, retail analytics, supply chain analytics, store assortment and stock-keeping unit optimization, marketing optimization and marketing mix modeling, web analytics, call analytics, speech analytics, sales force sizing and optimization, price and promotion modeling, predictive science, credit risk analysis, and fraud analytics. Since analytics can require extensive computation (see big data), the algorithms and software used for analytics harness the most current methods in computer science, statistics, and mathematics.

2. Future of Data Science

AISOMA – Future of Data Science

Sebastian Raschka, a researcher in applied machine learning and deep learning at Michigan State University, thinks that the future of data science is not machines taking over from humans, but rather human data professionals embracing open-source technologies.

It is commonly understood that future data science projects, thanks to advanced tools, will scale to new heights, and that more human experts will be required to handle highly complex tasks efficiently. However, according to the McKinsey Global Institute (MGI), the next decade will see a shortage of around 250,000 data scientists in the U.S. alone. The open question is whether machines can ever enable seamless collaboration between technologies, tools, processes, and end users. Automated tools and assistants can help the human mind accomplish tasks more quickly and accurately, but machines cannot be expected to substitute for human thinking. The core of problem-solving is intellectual thinking, which no machine, however sophisticated, can replicate.

3. Machine Learning Workflow

AISOMA – Machine Learning Workflow
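Slides aside, the stages such a workflow typically covers (prepare data, split, train, evaluate) can be sketched in a few lines of plain Python. The toy nearest-centroid classifier and all numbers below are invented for illustration and are not taken from the slide:

```python
import random

# Toy dataset: (feature, label) pairs; class 1 tends to have larger values.
random.seed(0)
data = [(random.gauss(0, 1), 0) for _ in range(50)] + \
       [(random.gauss(3, 1), 1) for _ in range(50)]
random.shuffle(data)

# 1. Split: hold out 30% of the rows for evaluation.
split = int(len(data) * 0.7)
train, test = data[:split], data[split:]

# 2. Train: a nearest-centroid "model" (one mean feature value per class).
def fit(rows):
    centroids = {}
    for label in {y for _, y in rows}:
        xs = [x for x, y in rows if y == label]
        centroids[label] = sum(xs) / len(xs)
    return centroids

model = fit(train)

# 3. Predict: assign the class whose centroid is closest to the feature.
def predict(model, x):
    return min(model, key=lambda label: abs(x - model[label]))

# 4. Evaluate: accuracy on the held-out set.
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Real projects add feature engineering, model selection, and deployment stages around this core loop, but the split-train-evaluate skeleton stays the same.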

4. Deep Learning Workflow

AISOMA – Deep Learning Workflow


How to Find the Right “Dietary Supplements” Compliance for Your Specific Product (Service)


Compliance4All, a leading provider of professional training for all areas of regulatory compliance, is organizing a 90-minute webinar on the topic “Dietary Supplements CGMPs – 21 CFR 111 Compliance” on February 5. John E. Lincoln, a medical device and regulatory affairs consultant, will be the speaker at this webinar.

Please visit http://bit.ly/2DIk8lf to enroll for this webinar.


Finding the right compliance approach for your specific dietary supplement product (or service) can be quite a challenge if you are one of the players in the industry. The reason is this: manufacturers, labelers, and packagers of dietary supplements, and even those who merely hold them, have to comply with the requirements set out in 21 CFR Part 111.

21 CFR Part 111, the so-called “DS CGMP rule,” requires those who manufacture, package, label, or hold dietary supplements to ensure product quality by adhering strictly to the packaging and labeling requirements set out in this Part. These have to match what is specified in the master manufacturing record.

Until the FDA published the Dietary Supplements CGMPs as a “Final Rule,” bringing 21 CFR 111 into existence, quality management systems and controls for dietary supplements were loose and voluntary. About the only requirement was the one set out by the Dietary Supplement Health and Education Act of 1994 (DSHEA), by which Congress defined what is meant by a “dietary supplement” and required only that every supplement be labeled as such.

Complying with 21 CFR 111 is not optional

All that has changed with 21 CFR 111. To start with, the FDA now has a set of regulations for dietary supplements that is distinct from those for conventional foods and drug products. Moreover, players in the dietary supplements field that fail to comply with these requirements can have their products deemed “adulterated” or “misbranded” by the FDA.

Despite the introduction of this Part, considerable confusion persists in the industry as to just what manufacturing controls, recordkeeping, and labeling content the FDA requires, with the result that this Part continues to be a regulatory sore point for many new and established companies in the industry.

It is this confusion that this webinar seeks to clarify. John E. Lincoln will help participants of this session resolve these issues.

John will explain all aspects of FDA Part 111, including quality management systems, quality assurance and quality control, personnel, facilities, equipment, software controls, production and process controls, holding and distribution, complaints and returns, and records.

This 90-minute webinar is aimed at senior management in the dietary supplements industry; QA/RA, R&D, engineering, and marketing staff; consultants; those tasked with product, process, validation, and CGMP responsibilities; as well as interested consumer groups, medical and other healthcare professionals, office personnel, and start-ups. John will cover the following areas:

  • History of Dietary Supplement regulation in the U.S.
  • The Dietary Supplement Health and Education Act (DSHEA)
  • The key requirements of the Dietary Supplements CGMPs, 21 CFR 111
  • Required steps for CGMP compliance
  • Problem areas, common pitfalls
  • Implementation: systems, templates, and tools


About the speaker:

John E. Lincoln is a graduate of UCLA. He brings over 28 years of experience in the FDA-regulated medical products industry, during which he has worked with companies ranging from start-ups to Fortune 100 firms, including Abbott Laboratories, Hospira, and Tyco/Mallinckrodt. His experience includes managing pilot production, regulatory affairs, product development/design control, 510(k) submissions, risk management per ISO 14971, and project management.


Attribution is one of those services that seems to take place inside a black box, where all kinds of algorithms make all kinds of assumptions.

To help make the box less obscure, Andover, Massachusetts-based Semcasting is out today with The Attributor, which it describes as the first self-service deterministic attribution platform.

The platform, CEO/founder Ray Kingman told me, is transparent about its processes, and it makes clear when matches are made and when not. The company has been offering this process as a professional service for the last year, and now it’s available in a self-service platform with usage pricing.

How it works. The company specializes in audience targeting via IP address, and that’s a key element of The Attributor, which is designed to show if the viewers of your site or online ad made a purchase or took some other desired action.

Kingman suggested the following typical use case for The Attributor.

Let’s say AutoTrader.com wants to determine which car buyers visited its site, so it can show a possible causal connection to auto dealerships that advertise on AutoTrader. In addition to website visitors, each study can also have up to four other audiences as the “base” for a match, such as emails or profiles in a customer relationship management system.

In the case of site visitors, Semcasting converts AutoTrader.com’s web traffic logs into the street addresses of those visitors. It does this by matching visitors’ static IP addresses against its graph of user profiles, which contains frequently used static IPs — such as those for a home or office — and a street address for each person or household.

If the IP is a dynamic one, such as those used by mobile towers, the Semcasting platform collects time signals to determine the latitude/longitude of the towers; it can then match the device ID that passed those towers at that time to a street address.

The Attributor is integrated with a variety of sales-oriented databases, such as the Dealer Management System that compiles all car sales in real time. Through the Dealer Management System, the dealership in question provides a list of street addresses for all its car buyers in, say, the last month, and the match is made. Semcasting says it doesn’t keep any first-party personal data, such as the buyers’ names.
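A minimal sketch of that deterministic flow (resolve visitor IPs to addresses, then intersect with the buyer list) might look like the Python below. Every IP, address, and mapping here is made up for illustration; Semcasting's actual graph and matching logic are proprietary:

```python
# Hypothetical IP graph: static IP -> street address of the household.
ip_graph = {
    "203.0.113.7":  "12 Elm St, Springfield",
    "198.51.100.4": "89 Oak Ave, Shelbyville",
    "192.0.2.55":   "3 Pine Rd, Ogdenville",
}

# Visitor log from the publisher's site (IPs only, no names).
visitor_ips = ["203.0.113.7", "192.0.2.55", "198.18.0.9"]  # last IP has no match

# Step 1: resolve visitor IPs to street addresses where possible.
visitor_addresses = {ip_graph[ip] for ip in visitor_ips if ip in ip_graph}

# Step 2: the advertiser supplies buyer street addresses (no names kept).
buyer_addresses = {"12 Elm St, Springfield", "44 Birch Ln, Capital City"}

# Step 3: attribution = visitors who later appear in the buyer list.
attributed = visitor_addresses & buyer_addresses
print(attributed)  # {'12 Elm St, Springfield'}
```

The key property of the deterministic approach is visible in the sketch: a visitor either matches on the persistent identifier (the street address) or is dropped, rather than being scored probabilistically.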

Other audience segments, or “tactics,” could also be employed to match the IP addresses of website visitors or of those who were delivered ad impressions. These can include email addresses of prospects, purchases recorded in customer relationship management systems, or similar data. Once the data is onboarded, Kingman added, the platform does the rest.

He said the 85–90 percent match rate is about 70 to 80 percent accurate. If the IP address is for, say, an apartment building, the platform makes a probabilistic determination of the likely tenant, unless there is other data.
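One back-of-the-envelope way to read those two figures together: the share of all visitors who end up correctly attributed is roughly the match rate times the accuracy. This is an interpretation of the quoted numbers, not arithmetic Semcasting itself reports:

```python
# Quoted ranges from the interview, used as illustrative bounds.
match_rate = (0.85, 0.90)  # share of visitor IPs resolved to an address
accuracy = (0.70, 0.80)    # share of those resolutions that are correct

# Worst and best case for end-to-end correct attribution.
low = match_rate[0] * accuracy[0]
high = match_rate[1] * accuracy[1]
print(f"correctly attributed: {low:.3f} to {high:.3f} of all visitors")
```

So under this reading, somewhere around 60 to 72 percent of all visitors would be both matched and matched correctly.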

Why this matters.
Attribution helps to determine if your ad or other marketing spend has been effective for the desired outcome, such as purchases.

Probabilistic attribution makes a variety of assumptions and combines probabilities. By contrast, deterministic attribution is more definite and can be more accurate, since it matches on a persistent identifier like a street address. But deterministic attribution has usually been conducted as a professional service, so a self-serve, usage-priced platform can make the approach easier and less expensive.


Google’s change to Chrome’s login has ignited “debate” about whether the move was sneaky


Google’s surprise change to a privacy setting in its popular Chrome web browser is raising hackles from privacy advocates and some users of the product who say that the company has not been upfront enough.

The change, which was little noticed until a security researcher blogged about it on Sunday night, has left the internet company fighting a familiar criticism: that its appetite for data to fuel its online ad business trumps its concerns about its users.

Matthew Green, a security and cryptography researcher at Johns Hopkins University, blogged about the change Google quietly made as part of the browser’s latest update, Chrome 69. Green wrote that from now on, when people log in to YouTube, Gmail, or any of the company’s properties, they will automatically be logged in to Chrome at the same time.

Late on Sunday night, Google responded to the growing controversy by confirming the login change.


This is a dramatic change and a possible threat to users’ privacy, according to Green.

“Google believes they can make these changes without consequence,” said Marc Rotenberg, the president of consumer privacy advocacy group EPIC. “The privacy model is simply broken. Companies are constantly changing the rules of the game.”

What’s all the fuss about?

For years, Google allowed users of its Chrome browser to surf the web without logging in through a personal Google account. Chrome users didn’t have to worry that their web browsing history would be included with the other personal data Google maintains about registered users of its products. For that to happen, a user would have to sign in to Chrome and to consent to a “data sync” between Chrome and the other Google products they use.

Now that Google logs people in to Chrome automatically, it has removed one of those layers of protection, Green wrote. What’s more, he said, a new and “confusing” sync-consent page makes it easy for users to mistakenly give up their browsing data to Google.

Eric Lawrence, a former Google employee who worked on Chrome but is now employed by rival Microsoft, said he doesn’t see any reason to be alarmed.

“Yes, Chrome has streamlined the opt-in to the browser’s ‘Sync’ features, such that you no longer need to individually type your username and password when enabling Sync,” Lawrence wrote. “Whether you consider this ‘Great!’ or ‘Terrible!’ is a matter of perception and threat model.”

Lawrence points out that when someone clicks the consent button, they will then get a pop-up that informs them of the information they are agreeing to share with Google.

In that prompt, Google notifies users that the company will collect info from users’ “bookmarks, passwords, history and more on all your devices…Google may use content on sites you visit, plus browser activity and interactions to personalize Chrome and other Google services like Translate, Search and ads.”

‘My heart skips a beat’

Plenty of people, including former Googlers, wrote that they don’t see this as a benign change. Michał Zalewski is a computer security expert and former Google employee. He sided with Green, saying Google has made Chrome less safe.

“Don’t like to pile on,” Zalewski wrote on Twitter, “but I did rely on that as a visual confirmation that the browser is not doing something I didn’t want. Now, my heart skips a beat every time I see the profile-switch menu or chrome://settings – and it’d only take one mis-click to actually start syncing.”


SQL database aimed at real-time processing of Internet of Things data


Startup Crate.io’s strategy of advancing an open source scale-out SQL database as an alternative to complex NoSQL versions for handling fast-moving machine data appears to be paying dividends with the close of an early funding round and the release of an upgraded version of its platform.

Crate.io this week announced the close of a Series A funding round that garnered $11 million. The round was led by Zetta Venture Partners and Deutsche Invest Equity. Among the other investors is Solomon Hykes, founder of application container pioneer Docker. The funds will be used to accelerate development and adoption of commercial and open source versions of the CrateDB machine data platform, the company said Tuesday (June 19).

The San Francisco-based startup also released the third version of its open source database, emphasizing time-series storage and analytics for industrial and other users dealing with large volumes of machine-generated data. The upgrade also targets SQL developers who previously relied on NoSQL approaches to handle machine data applications.

The upgrade also targets users seeking to harness data generated by connected factory equipment, smart buildings, and vehicles. “The capability for real-time processing of machine data [was] a key constraint in many Industry 4.0 endeavors,” noted Torsten Kreindl, managing partner at Deutsche Invest Venture Capital. Industry 4.0 refers to factory automation efforts that incorporate data analytics into manufacturing technologies.


Crate.io said it is addressing the requirements with an upgraded platform that includes faster data ingestion and real-time analytics as well as data visualizations. Along with SQL, it loads JSON and other data points in a variety of structures, including nested objects and arrays.
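As a rough illustration of what mixing SQL with document-style payloads can look like, the record and CrateDB-style table definition below are invented for this sketch; the table, its columns, and the exact DDL syntax are assumptions (CrateDB models nested objects with OBJECT types and arrays with ARRAY types, but details vary by version):

```python
import json

# A machine-data record with a nested object and an array, the kind of
# structure the platform says it ingests alongside plain columns.
reading = {
    "sensor_id": "press-07",                          # plain column
    "meta": {"site": "plant-2", "firmware": "1.4"},   # nested object
    "alerts": ["overtemp", "vibration"],              # array
}

# Hypothetical CrateDB-style DDL for such records (illustrative only).
ddl = """
CREATE TABLE readings (
    sensor_id TEXT,
    meta      OBJECT(DYNAMIC),
    alerts    ARRAY(TEXT)
)
"""

# The record round-trips through JSON unchanged, which is what allows a
# SQL store to accept document-style payloads from devices directly.
assert json.loads(json.dumps(reading)) == reading
```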

Meanwhile, data platform administration is based on a cloud-native micro-services approach managed around the Kubernetes cluster orchestrator.