MIT's "DON" vision system gives robots a finer view of objects

It generates a “visual roadmap” — basically, collections of visual data points arranged as coordinates.


Robotic vision is already pretty good, assuming that it’s being used within the narrow bounds of the application for which it’s been designed. That’s fine for machines that perform a specific movement over and over, such as picking an object off an assembly line and placing it into a bin. However, for robots to become useful enough not just to pack boxes in warehouses but to actually help out around our own homes, they’ll have to stop being so myopic. And that’s where MIT’s “DON” system comes in.

DON, or “Dense Object Nets,” is a novel form of machine vision developed at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). It generates a “visual roadmap” — basically, collections of visual data points arranged as coordinates. The system will also stitch each of these individual coordinate sets together into a larger coordinate set, the same way your phone can mesh numerous photos together into a single panoramic image. This enables the system to better and more intuitively understand the object’s shape and how it works in the context of the environment around it.
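The core idea behind those "visual data points" can be sketched in a few lines: each pixel carries a descriptor vector, and the same physical point is found in a new view by nearest-neighbor search over descriptors. The function and toy data below are purely illustrative, not MIT's actual code:

```python
import numpy as np

def match_descriptor(query_desc, descriptor_image):
    """Find the pixel in a dense descriptor image whose descriptor
    is closest (Euclidean distance) to the query descriptor."""
    h, w, d = descriptor_image.shape
    flat = descriptor_image.reshape(-1, d)          # one row per pixel
    dists = np.linalg.norm(flat - query_desc, axis=1)
    idx = int(np.argmin(dists))
    return divmod(idx, w)                           # (row, col) of best match

# Toy example: a 4x4 "image" of random 3-D descriptors
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 4, 3))
target = img[2, 1]                # descriptor taken at pixel (2, 1)
print(match_descriptor(target, img))
```

Because the query descriptor here is copied straight from pixel (2, 1), the search recovers that pixel; in the real system the query would come from a different camera view of the same object point.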


“At its coarsest, highest level, what you’d get from your computer vision system is object detection,” PhD student Lucas Manuelli, author of the paper, told Engadget. “The next finest level would be to do pixel labeling. So that would say, okay, all these pixels are a part of a person or part of the road or the sidewalk. Those first two levels are pretty much a lot of what self-driving car systems would use.”

“But if you’re actually trying to interact with an object in a particular way like grab a shoe in a particular way or grab a mug,” he continued, “then just having a bounding box or just all these pixels correspond to the mug, isn’t enough. Our system is really about getting into the finer level of details within the object… that kind of information is necessary for doing more advanced manipulation tasks.”


That is, the DON system will allow a robot to look at a cup of coffee, properly orient itself to the handle, and realize that the bottom of the mug needs to remain pointing down when the robot picks up the cup to avoid spilling its contents. What’s more, the system will allow a robot to pick a specific object out of a pile of similar objects.

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” Manuelli wrote in the study. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”

The system relies on an RGB-D sensor, which combines a standard camera with a depth camera. Best of all, the system trains itself: there’s no need to feed DON hundreds or thousands of images of an object in order to teach it. If you want the system to recognize a brown boot, you simply put the robot in a room with a brown boot for a little while. The system will automatically circle the boot, taking reference photos it uses to generate the coordinate points, then trains itself on what it has seen. The entire process takes less than an hour.


Medtech firms get personal with digital twins

What started as an evolution is accelerating toward more of a revolution.

Using this “digital twin” that mimics the electrical and physical properties of the cells in patient 7497’s heart, Meder runs simulations to see if the pacemaker can keep the congestive heart failure sufferer alive – before making a single incision.

The digital heart twin developed by Siemens Healthineers (SHLG.DE) is one example of how medical device makers are using artificial intelligence (AI) to help doctors make more precise diagnoses as medicine enters an increasingly personalized age.

The challenge for Siemens Healthineers and rivals such as Philips (PHL.AS) and GE Healthcare is to keep an edge over tech giants from Alphabet’s (GOOGL.O) Google to Alibaba (BABA.N) that hope to use big data to grab a slice of healthcare spending.

With healthcare budgets under increasing pressure, AI tools such as the digital heart twin could save tens of thousands of dollars by predicting outcomes and avoiding unnecessary surgery.

A shortage of doctors in countries such as China is also spurring demand for new AI tools to analyze medical images and the race is on to commercialize products that could shake up healthcare systems around the world.


While AI has been used in medical technology for decades, the availability of vast amounts of data, lower computing costs and more sophisticated algorithms mean revenues from AI tools are expected to soar to $6.7 billion by 2021 from $811 million in 2015, according to a study by research firm Frost & Sullivan.

The size of the global medical imaging analytics software market is also expected to jump to $4.3 billion by 2025 from $2.4 billion in 2016, according to data portal Statista.

“What started as an evolution is accelerating toward more of a revolution,” said Thomas Rudolph who leads McKinsey & Company’s pharma and medical technology practice in Germany.


For Siemens Healthineers and its traditional rivals, making the transition from being mainly hardware companies to medical software pioneers is seen as crucial in a field becoming increasingly crowded with new entrants.

Google has developed a raft of AI tools, including algorithms that can analyze medical images to diagnose eye disease, or sift through digital records to predict the likelihood of death.

Alibaba, meanwhile, hopes to use its cloud and data systems to tackle a shortage of medical specialists in China. It is working on AI-assisted diagnosis tools to help analyze images such as CT scans and MRIs.

Siemens Healthineers, which was spun off from German parent Siemens (SIEGn.DE) in March, has outpaced the market in recent quarters with sales of medical imaging equipment thanks to a slew of new products.

But analysts say the German firm, Dutch company Philips and GE Healthcare, a subsidiary of General Electric (GE.N), will all come under pressure to prove they can save healthcare systems money as spending becomes more linked to patient outcomes and as hospitals rely on bulk purchasing to push for discounts.

Siemens Healthineers has a long history in the industry. It made the first industrially manufactured X-ray machines in 1896 and is now the world’s biggest maker of medical imaging equipment.

Now, Chief Executive Bernd Montag’s ambition is to transform it into the “GPS of healthcare” – a company that harnesses its data to sell intelligent services, as well as letting smaller tech firms develop apps feeding off its database.

As it adapts, Siemens Healthineers has invested heavily in IT. It employs some 2,900 software engineers and has over 600 patents and patent applications in machine learning.

It is not alone. Philips says about 60 percent of its research and development (R&D) staff and spending is focused on software and data science. The company said it employs thousands of software engineers, without being specific.



What I learned at Dyson's technology centre

No secret that good enough is usually good for customers in India. Most companies have learned that the hard way.

“We never go home, we’re usually here all night,” says Dominic Mason, head of product design and development (Environmental Control) at Dyson Technologies.

Technology companies don’t usually look and feel as geeky as you would expect. There’s a healthy dose of marketing and communications professionals, finance etc. It’s usually more “corporate” than “engineering”.

But not with Dyson. Walking around the company’s facilities in Singapore, you feel like you’re amongst a bunch of geeks, who would rather solve equations than sign cheques.

Of course, it would be ridiculous to think that Dyson runs only on engineers, but it feels like there’s a disproportionate dose of that. The company’s Singapore offices have an 80-20 ratio between engineers and other employees – finance, HR etc.

Walking in, I found myself in a soundproof acoustic lab, where Dyson tests its products for noise. Here, only the floor reflects sound, while the rest of the chamber is designed to avoid echo. The walls absorb any sound above 100 Hz.

Dyson invested £10 million into this acoustic facility alone, where a team of four engineers work, led by Nicklaus Yu, Senior Acoustics and Vibration Engineer for Dyson. As Nicklaus shows us his lab, the acoustic chamber, you can literally feel the glee that only a geek can exhibit when talking tech. When he’s done talking, he points to an air purifier at our feet, which was turned on all along, but no one in the room heard it.

It’s his ‘et voila!’ moment.

He’s delighted, in a way that only engineers can be when laymen marvel at their creations.

Lab after lab, engineers demonstrated different elements of Dyson’s technologies, waxing eloquent about them. And every few sentences, they would stop and remember that it’s laymen they’re talking to. We eventually found ourselves facing a blacked out room, where future products are being developed, and it’s out of bounds for most, including those sitting right in front of it.


But, how do you sell geek to Indians?

It’s no secret that good enough is usually good for customers in India. Most companies have learned that the hard way, accepted it and moved on. Refusing to budge usually means hurting your bottom line.

You could simply make “good enough” products and call them excellent. That’s marketing. But Mason admits that his purifiers are struggling against extreme pollution, like what we have in Delhi. Asked how he’s going to deal with that, he says he’s collecting data right now, which will help him answer that question in the future.


Tight security comes with an annoying compromise

Better security is great, but unfortunately, the T2 coprocessor isn’t without problems.

The launch of the 2018 MacBook Pro has been rife with controversy, with issues ranging from the performance to the keyboard. While we’re at it, let’s throw one more log on the fire, shall we?

The new MacBook Pros come with what Apple calls the T2 coprocessor — a chip first featured in the iMac Pro. Although its main reason for inclusion is Siri voice activation, it also has important implications for security and storage. Better security is great, but unfortunately, the T2 coprocessor isn’t without problems.

The return of the T2

The T2 coprocessor brings all sorts of security features to the MacBook Pros. In its press release, Apple says it has “support for secure boot” and “on-the-fly encrypted storage,” two features that first came when the T2 showed up in last year’s iMac Pro. These security features might not sound like a big deal, but they’ll have a much larger effect on users than activating Siri with your voice.

Apple’s never been all that forthcoming about the exact processes these chips control, but there are a few things we know the T2 does handle: boot-up, storage, and the Touch Bar/Touch ID. Not only does this relieve the Intel CPU and third-party controllers of those processes, it also keeps them protected within Apple’s closed system of safeguards.

A great example is the boot-up process, which is now partially handled by the T2. As detailed in initial reports about the coprocessor in the iMac Pro, the T2 verifies everything about the system before it’s allowed to move forward. As soon as the Apple logo appears, the T2 is in control, and acts as Apple’s “root of trust” to ensure that everything checks out.
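Stripped of Apple's specifics, a "root of trust" boils down to a chain of integrity checks: each boot stage is allowed to run only if its measured digest matches one held by tamper-resistant hardware. A minimal sketch (the function, images, and digests below are illustrative, not Apple's actual mechanism):

```python
import hashlib

def verify_boot_stage(blob: bytes, trusted_digest: str) -> bool:
    """Permit a boot stage only if its SHA-256 digest matches the
    value stored in the (tamper-resistant) root of trust."""
    return hashlib.sha256(blob).hexdigest() == trusted_digest

# The root of trust records a known-good digest at provisioning time...
bootloader = b"example bootloader image"
trusted = hashlib.sha256(bootloader).hexdigest()

# ...and at boot, only an unmodified image passes verification.
print(verify_boot_stage(bootloader, trusted))         # genuine image
print(verify_boot_stage(b"tampered image", trusted))  # modified image
```

In a real secure-boot chain each verified stage then verifies the next one before handing over control, so tampering anywhere in the chain halts the boot.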

Encrypted storage is equally important. Because the functions of the conventional disk controller have been replaced by the T2, the coprocessor now has direct control over the storage in your MacBook Pro.

That kind of access allows Apple to ensure every piece of data on the SSD is automatically protected and encrypted. It also lets Apple do things like secure your biometric data outside of the SSD. Right now, that’s just the Touch ID sensor, but in the future that could include something like Face ID.

However, bringing these new security features to the MacBook Pro came with some trade-offs.


Ecosystem Play to Generate $100T by 2028, Accenture Says

The survey found that about 50% of business leaders say they have already built or are currently building an ecosystem to respond to disruption, and another 10% are seeking to build one.

Digital disruption can be a scary thing. You either innovate by building a digital platform that leverages big data and emerging tech like AI, blockchain, and the Internet of Things, or you get disrupted by somebody who can. But the good news is you don’t have to go it alone. In fact, according to a new report from Accenture Strategy, companies that leverage their surrounding ecosystems to build disruptive products and services will generate a mind-boggling $100 trillion in value over the next 10 years.

That eyebrow-raising assessment was delivered recently in a new research paper from Accenture Strategy titled “Cornerstones of Future Growth: Ecosystems.” The report is based in part on a survey of 1,252 business leaders that Accenture Strategy conducted to ascertain how they’re evolving business models to handle the potential negative and positive aspects of digital disruption. The survey found that about 50% of business leaders say they have already built or are currently building an ecosystem to respond to disruption, and another 10% are seeking to build one.

Which raises the questions: Just what exactly is an “ecosystem,” how can a company build one, and in what way can it help?

An ecosystem, according to Accenture Strategy, is:

“…The network of cross-industry players who work together to define, build and execute market-creating customer and consumer solutions…The power of the ecosystem is that no single player need own or operate all components of the solution, and that the value the ecosystem generates is larger than the combined value each of the players could contribute individually.”

Brick-and-mortar retailers, for example, are leveraging an ecosystem when they expand their reach to customers by selling products through online portals such as Amazon or eBay. Hospitals can also tap into the ecosystem trend by using rideshare services, such as Uber or Lyft, to help move patients to and from appointments.


We’ve definitely seen our share of digital disruption over the past 10 to 15 years. Physical stores selling books, toys, and music are few and far between, and thanks to Amazon’s $1-billion acquisition of PillPack, the neighborhood drug store could be next. Uber, which doesn’t own any cars but does have a popular ride-sharing app, is worth an estimated $50 billion, the same amount as General Motors, which made 3 million cars last year.

But according to Accenture Strategy’s survey, the potential disruption is just getting started. The survey found that 76% of business leaders say current business models will be unrecognizable in five years, and that the rise of the ecosystem play will be the main culprit.

Big data and technological innovation will play central roles in the ecosystem play. Accenture Strategy cites the partnership between Microsoft and GE and the companies’ integration of the Azure and Predix platforms as a good example of an ecosystem at work. A budding ecosystem that includes Google and Wal-Mart, similarly, is designed to make it easier for customers to order products via the AI-powered Google Assistant, Accenture Strategy points out.

In some ways, the ecosystem concept bears some similarity to the “innovation chains” that Forrester analyst Brian Hopkins has explored in his research lately. Hopkins says that companies that can successfully link together disparate but related technologies (such as big data analytics, AI, distributed ledgers, IoT, cloud, and quantum computing) have a better chance at creating “breakthrough innovation” than those who lack the experience and expertise in those technologies.

While business executives seem to agree that leveraging ecosystems will be a key to future survival, many of them don’t know how to pull it off. According to the Accenture Strategy survey, only 40% of respondents said they have the capacity and experience to build, monitor, and manage an ecosystem at the moment.

Part of the problem with the ecosystem play is that companies are loath to give up control, which must happen for an ecosystem to be successful. Accenture Strategy says sharing data is “essential” to sustaining an ecosystem, but adds that 44% of executives are hesitant to share company assets or secrets. Investment in data governance capabilities was identified as a critical need for enabling safe data sharing.

Pursuing an ecosystem play can often mean working with one’s business adversary. Turning these competitors into “frenemies” is a good way to head off the risk of business disruption, said Accenture Strategy Managing Director Oliver Wright.

“Due to increasing market pressure, we’re likely to see more companies – particularly those that have traditionally been competitors – join forces as they look to create new growth and achieve competitive agility,” Wright stated in a press release. “‘Coopetition’ will continue to grow and exciting partnerships will form as a result, some of which have already remade markets and industries around the world.”

Michael Lyman, the senior managing director for Accenture Strategy, says companies can no longer create sustainable growth by going it alone. “They need the help of partners to form ecosystems to innovate and create new customer propositions, expand their customer base and enter new markets,” he said in a press release.

Ecosystem capabilities are not evenly distributed across industries. Telecommunication firms, banks, and utilities have the strongest ecosystem capabilities today, while companies in the insurance, healthcare, and travel industries are the weakest.


The introduction of the new system is causing some concern over privacy

Currently applicants have to fill in a form online, print it out and take it to the post office so their identification can be verified.

A controversial new system may soon see welfare recipients required to have their face scanned and analysed before they can access their payments.

The system, which will also affect people trying to access Medicare, childcare subsidies and the age pension, and those paying tax online, is part of a new biometric security program that is set to begin in October.

Under the new strategy, those trying to access these government services will be required to take a photo to create a myGov ID, which will then be checked against driver’s licences and passports to confirm their identity.

Human Services Minister Michael Keenan hopes the plan will see Australia become a world leader in “digital government” by 2025.


When fully rolled out, the digital identity solution will allow users access to almost any government agency through one single portal, with the trial allowing 100,000 people to apply for a Tax File Number online.

Currently applicants have to fill in a form online, print it out and take it to the post office so their identification can be verified.

But the introduction of the new system is causing some concern over the privacy of those taking part.

IT security expert Troy Hunt said a biometric system like the one proposed wasn’t without its faults.

“One of the problems is we want to be able to access things in a secure fashion but passwords aren’t really great for doing that because a lot of us tend to use the same one for everything,” he said.

“Biometrics can be better in this aspect but on the flip side it is information that can’t really be changed if there is a security breach.”

Mr Hunt said that once a database of this biometric data is built up, there is the possibility it could be used for reasons other than its intended purpose. For example, having a scan of people’s faces on file could make it easier to identify or track people through security cameras.

Technology Development

Improving Customer Experience with AI

Human beings don’t categorize content in the same way – and discrepancies and misunderstandings in categorization can make customer feedback useless.

A myriad of customer service channels exist today, such as social media, email, chat services, call centers, and voice mail. There are so many ways that a customer can interact with a business and it is important to take them all into account.

Customers or prospects who interact via chat may represent just one segment of the audience, while the people that engage via the call center represent another segment of the audience. The same might be said of social media channels like Twitter and Facebook.

Each channel may offer a unique perspective from customers – and may provide unique value for business leaders eager to improve their customer experience. Understanding and addressing all channels of unstructured text feedback is a major focus for natural language processing applications in business – and it’s a major focus for Luminoso.

Luminoso founder Catherine Havasi received her Master’s degree in natural language processing from MIT in 2004, and went on to graduate with a PhD in computer science from Brandeis before returning to MIT as a Research Scientist and Research Affiliate. She founded Luminoso in 2011.

In this article, we ask Catherine about the use cases of NLP for understanding customer voice – and the circumstances where this technology can be most valuable for companies.

Why Customer Voice Needs Artificial Intelligence

Making sense of the meaning in customer or user feedback (through phone calls, chat, email, social media, etc) is valuable for nearly any business. The challenge lies in finding this meaning at scale, and across so many different data formats.

Catherine tells us that, historically, businesses have managed these different customer interactions by putting them into appropriate “buckets,” or categories. For example, if there are 70,000 customer support email messages received in a particular month, the company might have a manual process of flagging each message as “refund request,” “billing inquiry,” “purchase request,” and so on.

However, manual categorization becomes nearly impossible at scale, for a number of reasons:

  • While all customer service emails and call center calls might be labelled manually by the customer support rep who handles them, other kinds of data (tweets, chat messages, comments on online forums) may never receive the same kind of labelling.
  • A company with pre-determined “buckets” (categories) for customer service inquiries is unlikely to be able to pick up on new, emerging trends in the particular words, issues, or phrases used in customer requests. This inability to adapt and find new patterns could limit the company from seeing new opportunities for improvement, or new emerging issues for important customer segments.
  • Human beings don’t categorize content in the same way – and discrepancies and misunderstandings in categorization can make customer feedback useless.

Many companies look to technology that can detect common patterns for these messages, create categories for each found pattern, and flag them appropriately for the attention of the business owners (which includes finding new patterns). This is a job for machine learning.
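The bucket-flagging step that reps perform by hand can be approximated automatically. The sketch below is invented for illustration (the seed phrases and labels are made up, and a production system would learn its categories from data rather than keyword seeds); it assigns each message to the closest bucket by bag-of-words cosine similarity:

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector: token -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# Hypothetical seed examples for each "bucket"
seeds = {
    "refund request": bow("please refund my order I want my money back"),
    "billing inquiry": bow("question about my bill invoice charge"),
}

def categorize(message):
    """Assign the message to the most similar bucket."""
    vec = bow(message)
    return max(seeds, key=lambda label: cosine(vec, seeds[label]))

print(categorize("I would like a refund for my order"))
```

A machine learning system replaces the hand-written seeds with patterns mined from the messages themselves, which is what lets it surface new categories rather than only the predefined ones.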

“Sentiment analysis” – the process of computationally identifying and categorizing opinions expressed in a piece of text – has become a somewhat familiar term. Catherine tells us that truly understanding customer voice involves much more than simply detecting emotions within text, and includes:

  • Finding new “entities” (products, brands, people) which are gaining or losing frequency in customer feedback
  • Determining customer sentiment – not just overall – but in relation to specific entities or types of customer issues
  • Showing changes and trends in customer feedback over time
  • Understanding the different patterns of feedback across unique channels (call center, chat messages, social media, etc)
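The trend-tracking item above reduces to something simple in principle: count mentions of an entity per time bucket and watch how the series moves. The data and entity below are invented for illustration:

```python
def entity_trend(messages_by_month, entity):
    """Count mentions of an entity in each month's messages; a rising
    series can flag an emerging topic in customer feedback."""
    return [sum(entity in m.lower() for m in month)
            for month in messages_by_month]

# Three months of hypothetical feedback messages
months = [
    ["love the app", "app is fine"],
    ["battery drains fast", "the app drains battery"],
    ["battery died again", "battery issue", "fix the battery"],
]
print(entity_trend(months, "battery"))  # mentions climb month over month
```

Real systems also have to discover *which* entities to track in the first place, which is where the NLP described in the article does the heavy lifting.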

The problem is that many text analytics techniques of the past require a significant amount of data and effort to build rules and ontologies up front, and may still be unable to provide a true picture of what customers are actually saying.
