BenchSci raises $22 million for AI-powered drug discovery tools

Assessing the health of pharmaceutical R&D by unearthing hidden patterns in procurement data is a task made simpler by AI. At least, that’s the pitch given by David Qixiang Chen, Elvis Wianda, Liran Belenzon, and Tom Leung, who cofounded BenchSci in 2015. The Toronto, Canada-based biotech company taps AI to accelerate drug discovery experiments, with the aim of increasing the speed and quality of medical research. This week, in a sign of confidence from new and existing backers, BenchSci raised $22 million in Series B funding, bringing its total raised to $45 million.

Coinciding with the infusion of capital, BenchSci announced the launch of its new AI-assisted reagent selection product and expanded a contract with Novartis in a market it estimates is worth more than $10.2 billion per year. CEO Belenzon says the funds will be used to further develop BenchSci’s product suite and expedite drug testing.

“The pharmaceutical industry is facing a productivity crisis. R&D costs per drug keep rising while revenue is stagnant. Without major change, this crisis will affect everyone. Low or negative returns will reduce investment in new drugs,” said Belenzon. “Artificial intelligence promises to reverse the trend. But most AI in drug discovery is unproven. BenchSci’s products, on the other hand, have immediate, quantifiable impact.”

BenchSci’s marquee offering — an antibody selection service — employs machine learning to select antibodies in as little as 30 seconds (versus the 12 weeks antibody selection typically takes). The company claims it reduces consumable costs by up to $3 million per year by cutting down on inappropriate antibodies, with features that support search by protein targets and filtering by technique and 16 other experimental variables (including organism, tissue, cell type, and disease).

Above: BenchSci search

Belenzon says it’s often difficult for vendors to predict how an antibody will behave in experiments: up to 50% of selected antibodies don’t work, and data on antibody use is buried in biomedical papers, vendor catalogs, and independent validation databases. According to a study published in Nature, researchers often spend $50,000 on unnecessary antibodies, which can take three to six months to develop; it then takes days to select antibodies and weeks to test and validate them.

BenchSci’s image recognition technology uses AI to extract antibody specifications from published experiments — not just vendor names, product names, or SKUs. The system applies bioinformatics and ontologies to link antibodies to use cases while providing access to catalog data on more than 7.7 million products from 231 vendors, as well as literature-wide trends in antibody usage across technique, species, and more.

BenchSci draws on real-world experiment data from 10 million scientific publications, including closed-access papers. The results, the company says, are independently validated by organizations like the Human Protein Atlas, Encode, and EuroMAbNet, as well as scientific publishers such as Springer Nature and Wiley.

According to Belenzon, BenchSci now powers reagent selection for more than 31,000 researchers at over 3,600 academic institutions and 15 of the top 20 pharmaceutical companies. He says Novartis will be among the first customers to deploy the aforementioned AI-assisted reagent selection tool, which covers antibodies and recombinant proteins.

F-Prime Capital led this latest funding round, with participation from Northleaf Capital Partners and existing investors, including Gradient Ventures, Inovia Capital, Golden Ventures, and Real Ventures. F-Prime senior vice president Shervin Ghaemmaghami will join BenchSci’s board of directors.

Source: BenchSci raises $22 million for AI-powered drug discovery tools

New developments for healthcare startups

Following Google’s partnership with Ascension late last year, Bright.md released its first Health Care Consumer Trust Survey. The survey revealed that while patients may say they want a health care experience that rivals the convenience and service they’ve become accustomed to with eCommerce, they aren’t yet putting their trust in large retailers or technology companies. At the same time, virtual care is growing in popularity. Other key findings include:

  • Consumers overwhelmingly prefer their own doctor or hospital to provide health care services; however, more than half of patients trust any provider or hospital to provide care.
  • Only one in four patients trust their insurance companies to provide health care, though nearly half do trust insurance companies to handle their personal health information.
  • Patients are still wary of technology companies’ and retailers’ abilities to protect their private data, and put greater trust in doctors, hospitals and insurance companies.
  • Virtual care is growing in popularity.

“Competition for attracting and retaining patients is at an all-time high. Patients want on-demand access to information and care that tech giants and retailers promise, but they aren’t yet ready to rely on those companies for their health care needs,” said Dr. Ray Costantini, M.D., Bright.md’s CEO and co-founder. “For now, patients clearly prefer to use their own or other providers, who they are most likely to entrust with their care and personal information. These patient sentiments underscore the tremendous opportunity for health care systems and providers to maintain their current competitive advantage by delivering the convenience and access consumers have learned to expect in other parts of their lives.”


This week, a partnership between NeuroFlow and athenahealth went live, creating an opportunity for more than 160,000 healthcare providers to integrate validated behavioral health data into their workflows. It’s a major step forward for “collaborative care,” a model that fuses physical and mental health care practices.


Vineti, a San Francisco-based company that develops software for cell and gene therapy firms, has closed a $35 million Series C funding round. Cardinal Health, one of the three largest pharmaceutical distributors in the country, led the round, with participation from a division of Swiss drugmaker Novartis and Gilead Sciences’ Kite Pharma.


We are excited to announce that we have extended the deadline for INVEST Pitch Perfect in Chicago to February 14. If you are a healthcare startup focused on diagnostics, medical devices, biopharma, health IT, or health IT services, and you meet the criteria, we’d love for you to apply. Check out the criteria and submit an application for the event, which takes place April 21-22 at the Ritz Carlton.


Check out our library of eBooks highlighting startups and contextualizing developments in healthcare innovation, from AI in healthcare to revenue cycle management.

Photo: akindo, Getty Images

Source: StartUPDATES: New developments for healthcare startups

AI, 5G, and IoT can help deliver the promise of precision medicine

When my son was a toddler, he went to his pediatrician for a routine CAT scan. Easy stuff. Just a little shot to sedate him for a few minutes. He’d be awake and finished in a jiffy.

Except my son didn’t wake up. He lay there on the clinic table, unresponsive, his vitals slowly falling. The clinic had no ability to diagnose his condition. Five minutes later, he was in the back of an ambulance. My wife and I were powerless to do anything but look on, frantic with worry for our boy’s life.

It turned out that he’d had a bad reaction to a common hydrochloride sedative. Once that was figured out, doctors quickly brought him back around, and he was fine.

What if…

But what if, through groundbreaking mixtures of compute, database, and AI technologies, a quick round of analyses on his blood and genome could have revealed his potential for such a reaction before it became a critical issue?

What if it were possible to devise a course of treatment specific to him and his body’s unique conditions, rather than accepting a cookie-cutter approach and dealing with the ill effects immediately after?

And what if that could be done with small, even portable medical devices equipped with high-bandwidth connectivity to larger resources?

In short, what if, through the power of superior computing and next-generation wireless connectivity, millions of people like my son could be quickly, accurately treated on-site rather than endure the cost and trauma of legacy medical methods?

Above: Pinpointing diagnosis and treatment that’s right for you.

These questions I asked about my son are at the heart of today’s efforts in precision medicine: the practice of crafting treatments tailored to individuals based on their unique characteristics. Precision medicine spans an increasing number of fields, including oncology, immunology, psychiatry, and respiratory disorders, and its back end relies heavily on big data analytics.

Key Points:

  • Precision medicine uses a patient’s individual characteristics, including genetics, to identify highly specific, optimized healthcare steps.
  • 5G, new generations of wireless networks, and new processors are needed to provide the required speed and accessibility.
  • Optimizing workloads for parallelized processing makes precision medicine more practical.
  • Visions like Intel’s “All in One Day” use AI, 5G, and medical IoT to take a patient from examination to precision treatment in 24 hours.

Data drives individual-centric care

Pairing drugs to gene characteristics only covers a fraction of the types of data that can be pooled to target specific patient care.

Consider the Montefiore Health System in the Bronx. It has deployed a semantic data lake, an architecture for collecting large, disparate volumes of data and collating them into usable forms with the help of AI. Besides the wide range of patient-specific data collected onsite (including from a host of medical sensors and devices), Montefiore healthcare professionals also collate data from outside sources as needed, including the PharmGKB databank (genetic variations and drug responses), the National Institutes of Health’s Unified Medical Language System (UMLS), and Online Mendelian Inheritance in Man (human genomic data).
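
To make the idea concrete, here is a minimal sketch of the kind of linked-data query a semantic data lake can answer. The endpoint URL, graph predicates, and concept ID below are hypothetical placeholders, not Montefiore’s actual schema; the pattern simply reflects how a SPARQL-capable store (Franz’s AllegroGraph, presumably the “Franz” in the Intel/Cloudera/Franz stack, works this way) can join patient records, ontology concepts, and drug-gene knowledge in one query.

    # Illustrative only: endpoint URL, predicates, and the concept ID are hypothetical.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://datalake.example.org/sparql")  # hypothetical endpoint
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
        PREFIX ex: <http://example.org/clinical#>
        SELECT ?patient ?drug ?riskScore WHERE {
            ?patient ex:hasCondition ?condition .      # condition coded to a UMLS concept
            ?condition ex:umlsConcept "C0000000" .     # placeholder UMLS concept ID
            ?patient ex:hasVariant ?variant .          # genomic variant from a lab feed
            ?variant ex:affectsResponseTo ?drug .      # PharmGKB-style drug-gene link
            ?patient ex:predictedRisk ?riskScore .     # score produced by the AI layer
        }
        ORDER BY DESC(?riskScore)
    """)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["patient"]["value"], row["drug"]["value"], row["riskScore"]["value"])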

Long story short, the Intel/Cloudera/Franz-based solution proved able to accurately create risk scores for patients, predict whether they would have a critical respiratory event, and advise doctors on what actions to take.

Above: The semantic data lake architecture implemented by Montefiore Health System pulls from multiple databases to address open-ended queries and provide a range of actionable healthcare results.

“We are using information for the most critically ill patients in the institution to try and identify those at risk of developing respiratory failure (so) we can change the trajectory,” noted Dr. Andrew Racine, Montefiore’s system SVP and chief medical officer.

Now that institutions like Montefiore can perform AI-driven analytics across many databases, the next step may be to integrate off-site communications via 5G networking. Doing so will enable physicians to contribute data from the field, from emergency sites to routine in-home visits, and receive real-time advice on how to proceed. Not only can this enable healthcare professionals to deliver faster, more accurate diagnoses, it may permit general physicians to offer specialized advice tailored to a specific patient’s individual needs. Enabling caregivers with guidance from afar like this is critical in a world that, researchers say, faces a shortage of 15 million healthcare workers by 2030.

What it will take (AI, for starters)

Enabling services like these is not trivial — in any way. Consider the millions of people who might need to be genetically sequenced in order to arrive at a broad enough sample population for such diagnostics. That’s only the beginning. Different databases must be combined, often over immense distances via the cloud, without sacrificing patients’ rights or privacy. Despite the clear need for this, according to the Wall Street Journal, only 4% of U.S. cancer patients in clinical trials have their genomic data made available for research, leaving most treatment outcomes unknown to the research and diagnostic communities. New methods of preserving patient anonymity and data security across systems and databases should go a long way toward remedying this.

One promising example: using the processing efficiencies of Intel Xeon platforms to handle transparent data encryption (TDE) of Epic EHR patient information with Oracle Database. Advocates say the more encryption and trusted execution technologies, such as SGX, can be integrated from medical edge devices to core data centers, the more willing the public will be to allow its data to be collected and used.

Beyond security, precision medicine demands exceptional compute power. Molecular modeling and simulations must be run to assess how a drug interacts with particular patient groups, and then perhaps run again to see how that drug performs the same actions in the presence of other drugs. Such testing is why it can take billions of dollars and over a decade to bring a single drug to market.

Fortunately, many groups are employing new technologies to radically accelerate this process. Artificial intelligence plays a key role in accelerating and improving the repetitive, rote operations involved in many healthcare and life sciences tasks.

Pharmaceuticals titan Novartis, for example, uses deep neural network (DNN) technology to accelerate high-content screening, which is the analysis of cellular-level images to determine how cells would react when exposed to varying genetic or chemical interactions. By updating the processing platform to the latest Xeon generation, parallelizing the workload, and using tools like the Intel Data Analytics Acceleration Library (DAAL) and Intel Caffe, Novartis realized nearly a 22x performance improvement over the prior configuration. These are the sorts of benefits healthcare organizations can expect from updating legacy processes with platforms optimized for AI acceleration and high levels of parallelization.
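
The specifics of Novartis’s pipeline aren’t described here, but the general pattern of splitting a large batch of image-scoring jobs across many cores is easy to illustrate. The sketch below uses Python’s standard multiprocessing module with made-up file names and a placeholder scoring function standing in for the DNN; it shows the parallelization idea only, not the DAAL- or Caffe-based implementation.

    # Sketch of data-parallel image scoring; paths and the scoring function are placeholders.
    from multiprocessing import Pool
    import numpy as np

    def score_image(path):
        """Stand-in for a DNN forward pass over one high-content screening image."""
        img = np.load(path)                  # hypothetical preprocessed image array
        return float(img.mean())             # placeholder "phenotype score"

    if __name__ == "__main__":
        image_paths = [f"plate1_well_{i:04d}.npy" for i in range(10_000)]  # hypothetical files
        with Pool(processes=16) as pool:      # fan the work out across CPU cores
            scores = pool.map(score_image, image_paths, chunksize=64)
        print(f"scored {len(scores)} images")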

Faster than trained radiologists

Interestingly, such order-of-magnitude leaps in capability, while essential for taming the torrents of data flowing into medical databases, can also be applied to medical IoT devices. Think about X-ray machines. They’re basically cameras that require human specialists (radiologists) to review images and look for patterns of health or malady before passing findings to doctors. According to GE Healthcare, hospitals now generate 50 petabytes of data annually. A “staggering” 90% of it comes from medical imaging, GE says, with more than 97% of that going unanalyzed or unused. Beyond working to use AI to reduce the massive volume of “reject” images, and thus cut costs on multiple fronts, GE Healthcare teamed with Intel to create an X-ray system able to capture images and detect a collapsed lung (pneumothorax) within seconds.

Simply being able to detect pneumothorax incidents with AI represents a huge leap. However, part of the project’s objective was to deliver accurate results more quickly and so help to automate part of the diagnostic workload jamming up so many radiology departments. Intel helped to integrate its OpenVINO toolkit, which enables development of applications that emulate human vision and visual pattern recognition. Those workloads can then be adapted for processing across CPUs, GPUs, AI-specific accelerators, and other processors.
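
For a sense of what that integration looks like in practice, here is a minimal OpenVINO inference sketch using the toolkit’s current Python API. The model file, input shape, and preprocessing are assumptions for illustration, not GE’s actual pneumothorax model (which, at the time, would likely have used the older Inference Engine API).

    # Minimal OpenVINO sketch; model path, input shape, and preprocessing are assumptions.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("pneumothorax_classifier.xml")      # hypothetical IR model
    compiled = core.compile_model(model, device_name="CPU")     # could target GPU or VPU instead

    image = np.random.rand(1, 1, 512, 512).astype(np.float32)   # stand-in for a chest X-ray tensor
    result = compiled([image])[compiled.output(0)]              # single synchronous inference
    print("pneumothorax probability:", float(result.ravel()[0]))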

With the optimization, the GE X-ray system performed inferences (image assessments) 3.3x faster than without. Completion time was less than one second per image — dramatically faster than highly trained radiologists. And GE’s Optima XR240amx X-ray system is portable, so this IoT device can deliver results from a wide range of places and send them directly to doctors’ devices in real time over fast connections, such as 5G. A future version could feed analyzed X-rays straight into patient records, where they become another factor in the multivariate pool that constitutes the patient’s dataset, which, in turn, enables personalized recommendations by doctors.

What we’re dealing with

By now, you see the problem/solution pattern:

  • Traditional medical practices are having trouble scaling across a growing, aging global population. Part of the problem stems from the medical industry generating far more data than its infrastructure can presently handle.
  • AI can help to automate many of the tasks performed by health specialists.
  • By applying AI to a range of medical data types and sources, care recommendations can be tailored to individual patients based on their specific characteristics for greater accuracy and efficacy rather than suggesting blanket practices more likely to yield unwanted outcomes.
  • AI can be accelerated through the use of hardware/software platforms designed specifically for those workloads.
  • AI-enabled platforms can be embedded within and connected to medical IoT devices, providing new levels of functionality and value.
  • IoT devices and their attached ecosystem can be equipped with connectivity such as 5G to extend their utility and value to those growing populations.

The U.S. provides a solid illustration of the impact of population on this progression. According to the U.S. Centers for Disease Control and Prevention (CDC), even though the rate of new cancer incidents has flattened in the last several years, the country’s rising population pushed the number of new cases diagnosed from 1.5 million in 2010 to 1.9 million in 2020, driven in part by rising rates of overweight, obesity, and infections.

The white paper “Accelerating Clinical Genomics to Transform Cancer Care” paints a stark picture of the durations involved in traditional approaches to handling new cancer cases, from initial patient visit to data-driven treatment.

At each step, delays plague the process — extending patient anxiety, increasing pain, even leading to unnecessary death.

All in one day

Intel created an initiative called “All in One Day” to create a goal for the medical industry: take a patient from initial scan(s) to precision medicine-based actions for remediation in only 24 hours. This includes genetic sequencing, analysis that yields insights into the cellular- and molecular-level pathways involved in the cancer, and identification of gene-targeted drugs able to deliver the safest, most effective remedy possible.

To make All in One Day possible, the industry will require secure, broadly trusted methods for regularly exchanging petabytes of data. (Intel notes that a typical genetic sequence creates roughly 1TB of data. Now, multiply that across the thousands of genome sequences involved in many genomic analysis operations.) The crunching of these giant data sets calls for AI and computational horsepower beyond what today’s massively parallel accelerators can do. But the performance is coming.
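
The scale of that data exchange is easy to underestimate. A rough back-of-the-envelope calculation, using the roughly 1TB-per-genome figure above with an assumed cohort size and assumed link speeds, shows why both the compute and the connectivity matter:

    # Back-of-the-envelope: cohort size and link speeds are illustrative assumptions.
    TB = 1e12  # bytes

    genomes = 1_000                        # hypothetical cohort for one analysis
    total_bytes = genomes * 1 * TB         # ~1 PB of raw data, per the ~1 TB/genome figure

    for label, gbps in [("1 Gbps link", 1), ("10 Gbps link", 10)]:
        hours = (total_bytes * 8) / (gbps * 1e9) / 3600
        print(f"{label}: ~{hours:,.0f} hours to move {total_bytes / 1e15:.1f} PB")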

As doctors service ever-larger patient populations, expect them to need data results and visualizations delivered wherever they may be, including in forms animated or rendered in virtual reality. This will require 5G-class wireless connectivity to provide sufficient data bandwidth to whatever medical IoT devices are being used.

If these efforts succeed, more people will get more personalized help and relief than ever before. The medical IoT and 5G dovetail with other trends now reshaping modern medicine and making these visions everyday reality. A 2018 Intel survey showed that 37% of healthcare industry respondents already used AI, and that number should rise to 54% by 2023. Promising new products and approaches appear daily.

As AI adoption continues and pairs with faster hardware, more diverse medical devices, and faster connectivity, perhaps we will soon reach a time when no parent ever has to watch an unresponsive child whisked away by ambulance because of adverse reactions that might have been avoided through precision medicine and next-gen technology.

This article is part of the Technology Insight series, made possible with funding from Intel.

Source: AI, 5G, and IoT can help deliver the promise of precision medicine

10 Disruptive Trends for 2020

Twenty years ago, when I started advising startups and Fortune 500 companies on their innovation strategies, a “2020 vision” served as a key staple in most business planning efforts. The future is finally here.

Emerging technologies catalyze disruption, but 2020 promises to be especially turbulent. Election-year dynamics, coupled with a rise in grassroots business activism and government action on environmental issues, will inject even more chaos into our everyday experiences.

Here’s my take on the biggest forces transforming business and society in 2020:

  1. Embedded Intelligence – The convergence of the internet of things, data analytics, and artificial intelligence creates new opportunities for “embedded intelligence,” making products, robots, business applications, and healthcare diagnostics “smarter” than ever before.
  2. Energy Everywhere – High-capacity batteries deliver greater amounts of power at dramatically lower costs. This will drive innovation across sectors, but will be especially useful for green energy applications like solar power.
  3. Circular Economy – From recycling to alternative bio-based replacements for plastic, the “circular economy” will gain steam, creating new opportunities for companies to help accelerate environmentally sustainable products and packaging.
  4. Transportation Revolution – Battery-powered cars, trucks, and motorcycles will continue to increase in popularity, and mainstream manufacturers will begin an accelerated march toward a future where self-driving vehicles rule the road.
  5. Entertainment Fragmentation – The proliferation of competing content providers and apps, like Netflix, Amazon Prime Video, Disney+, and Hulu, will continue to contribute to the rapid growth of niche viewing audiences and the accelerated demise of cable television.
  6. Data Cocooning – Growing concerns about Alexa, Siri, and other voice-based services “listening in” on our lives, combined with high-profile data breaches and election-year ballot-tampering fears, will continue to drive greater privacy laws and controls over personal data.
  7. Digital Money – The rise of digital wallets like Apple Pay and Google Pay, peer-to-peer payment platforms like Venmo, and Facebook’s planned 2020 launch of its digital currency, Libra, will dramatically accelerate the use of digital commerce between individuals and across businesses.
  8. Web Marketing Backlash – From Google’s misleading results format, which obscures the distinction between paid and organic content, to misleading social media advertising, society will grow increasingly weary of big tech’s role in shaping our worldview.
  9. Digital Organizations – More and more companies recognize that moving fast, being agile, and collaborating across internal boundaries and with outside partners is the only way to compete. Collaboration tools for remote working, turnkey “no code” software for building instantly customizable business processes, and digital infrastructure for all business functions will transform organizations.
  10. Predictable Wildcards – Unforeseen disruptions are now an inherent part of life, instilling the need for “responsive strategy”: the continuous process of trend scanning, scenario planning, and trial-and-error testing of business models based on the disruptive trends occurring in the external world.

Regardless of your industry or target markets, watch these disruptors closely. They’ll create new opportunities for entrepreneurs. And they’ll pose big threats for anyone with a business-as-usual approach, because 2020 will be anything but usual.

Published on: Feb 5, 2020

The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

Source: 10 Disruptive Trends for 2020

Automatic Artificial Intelligence System Learns to Diagnose Disease

Wearable health monitors, ubiquitous sensors, and the ability to collect and store huge amounts of data are creating challenges for researchers hoping to use artificial intelligence to identify diseases. While the gathered data can hold important clinical answers, finding those answers means that the data must be categorized and labeled.

Now, researchers at MIT have developed a system that can autonomously identify signs of a disease from data gathered from a relatively small group of people and without any initial training.

The research, recently presented at the Machine Learning for Healthcare conference in Ann Arbor, Michigan, focused on learning the audio biomarkers of vocal cord disorders. Using data gathered over a week from an accelerometer attached to the necks of 100 people, the system automatically identified which sound characteristics were important for identifying whether a patient has vocal cord nodules.

“It’s becoming increasingly easy to collect long time-series datasets. But you have physicians that need to apply their knowledge to labeling the dataset,” said lead author Jose Javier Gonzalez Ortiz, a PhD student at MIT. “We want to remove that manual part for the experts and offload all feature engineering to a machine-learning model.”

While the system was utilized for a specific sound-related task, it can be trained to analyze data from other diseases. The current study may help to create tools that prevent vocal nodules and help to study the onset of this condition.
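
The MIT team’s exact model isn’t described here, but the core idea, letting a model discover useful audio features from unlabeled wearable data instead of hand-engineering them, can be sketched generically. The example below, with a made-up sampling rate and random numbers standing in for accelerometer recordings, learns compact features from spectrogram windows with PCA; it illustrates the label-free feature-learning pattern, not the authors’ actual method.

    # Generic, label-free feature-learning sketch; sampling rate and data are placeholders.
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.decomposition import PCA

    fs = 1000                                        # hypothetical accelerometer sample rate (Hz)
    signal = np.random.randn(fs * 60 * 10)           # stand-in for 10 minutes of neck-sensor data

    _, _, spec = spectrogram(signal, fs=fs, nperseg=1024)   # time-frequency representation
    windows = spec.T                                 # one row per time window

    pca = PCA(n_components=8)                        # learn compact features with no labels at all
    features = pca.fit_transform(windows)            # features can then be compared across subjects
    print(features.shape)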

Via: MIT

Source: Automatic Artificial Intelligence System Learns to Diagnose Disease

Just how big a deal is Google’s new Meena chatbot model?

Technology behemoths like Google and Facebook have accustomed us to, and even fatigued us with, their never-ending string of impressive announcements of progress in the AI field. Nevertheless, when Google announced that it had built a “conversational agent that can chat about… anything,” even the most jaded among us had to pay attention.

Since I work in the field, helping organizations build conversational solutions, I was particularly intrigued. One of the biggest challenges for bots is handling the infinite possible phrases a user might say and responding appropriately. A bot that can chat about anything seems like just the thing to solve this challenge. So the question becomes: exactly what impact will Google’s new bot, called Meena, have on organizations looking to deploy conversational AI applications? Have we found the holy grail? Will our bots finally stop saying “I’m sorry, I didn’t quite understand that”? The short answer is no, we are not quite there yet. Nevertheless, Meena is incredibly impressive and represents a fascinating attempt to solve the problem. In the next few paragraphs, I will summarize what Google did and how this might impact conversational AI in the days, months, and years to come.

What is Meena?

Let’s start by analyzing what we are dealing with here. What did Google invent?

Well, Meena is a 2.6 billion parameter end-to-end trained neural conversational model. The best version of Meena, according to Google, was trained over 30 days using 2,048 tensor processing units (Google’s dedicated AI-specific chips) on a dataset of 40 billion words. Not just random words: Google mined public domain social media for “multi-turn conversations,” where a turn is a pair of phrases in a conversation. So Google went out and got our conversations, 40 billion words’ worth of them, and trained a neural net to reply by showing it seven turns of a conversation as the input. By any measure, Meena is vast. Even if Google released all its code, which it hasn’t, only a few organizations would be able to train a similar Meena-like model. That is the first thing to understand: Meena is very much still in the lab and is very, very complex to manage. You cannot incorporate it into a tool just yet, and it is unlikely Google will make it available as a service soon. So in the short-to-medium term, our bots will have to survive without Meena’s help, I am afraid.

What can Meena do?

Meena is not immediately available; as is typically the case with cutting-edge research, it takes time to make its way into actual products. But what does this research tell us about the direction our products might take in the medium-to-long term? Will we really be able to chat about anything, and is Meena really the best chatbot out there? Here is where things get interesting.

First, to claim that Meena outperforms other chatbots, we would need some evaluation criteria. Google introduced a new metric to help with this called the “Sensibleness and Specificity Average” or SSA. The innovation this score introduces is that it measures both whether a bot’s answer is sensible, i.e. what a human would reasonably rate as an appropriate response, and whether the response is specific. Typically, bots employ “tricks” to make you think they are keeping up with the conversation, when in practice they are just giving generic responses that are not necessarily specific to the context. For example, you could have a conversation such as:

Human: I really love spy films!

Bot: Amazing. Tell me more!

This is a sensible response, but it is not specific. The same answer works for any number of statements from a person. A more specific reply would be:

Human: I really love spy films!

Bot: Amazing. I like all the Mission Impossible spy films. Which is your favorite?

Google scored Meena and a group of other chatbots (XiaoIce, Mitsuku, Cleverbot, and DialoGPT) on sensibleness and specificity, and the average of the two gave the final SSA score. There is some nuance in the numbers as Google describes them, but roughly:

Meena scored 79% SSA, Mitsuku and Cleverbot 56%, DialoGPT 48%, and XiaoIce 31%. Given that this metric can be used to evaluate human conversations as well, Google measured the average human SSA at 86%, so Meena gets tantalizingly close to that.
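
As described, SSA is simply the average of two human-judged rates: the fraction of responses rated sensible and the fraction rated specific. A minimal sketch of that calculation, with made-up labels:

    # SSA as described above; the example labels are made up.
    responses = [
        {"sensible": True,  "specific": True},
        {"sensible": True,  "specific": False},   # a generic "Tell me more!"-style reply
        {"sensible": False, "specific": False},
    ]

    sensibleness = sum(r["sensible"] for r in responses) / len(responses)
    specificity = sum(r["specific"] for r in responses) / len(responses)
    ssa = (sensibleness + specificity) / 2
    print(f"SSA = {ssa:.0%}")   # 2/3 sensible, 1/3 specific -> SSA = 50%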

To summarize, based on Google’s own scoring approach that directly measures whether the responses of the bot are both sensible and specific in conversations of up to 7 turns, Meena scores higher than the other chatbots out there. To give some context, Mitsuku is the winner of the Loebner Prize Turing Test, and XiaoIce powers an immensely popular Microsoft service that converses with hundreds of millions of users. Even though one can easily find weaknesses with the scoring approach and argue about the objectivity of Google using a metric it came up with itself, what Meena did is impressive. Even more so when we consider that Meena is an end-to-end trained neural net model while Mitsuku and XiaoIce are hybrid systems with much more human intervention.

What is the impact?

Meena can chat, over a few turns of a conversation, believably. Meena, however, cannot reliably teach you anything. Meena is not trying to help you finish a task or learn something new specifically. It converses with no explicit goal or purpose. While we probably spend too much of our time chatting about not much of importance, we tend to be looking for something specific when interacting with a bot-powered digital service. We want to get a ticket booked or a customer support issue resolved. Or we want to get accurate information about a particular domain or emotional or psychological support for a challenge we are facing.

Conversational products have a purpose, and even if they fail at more open-ended questions, they are trying to work with you to complete a task. Meena places the human-likeness of the conversation above all. However, there is much for us to learn about what conversational approach is appropriate for different types of tasks. Research shows that more robot-like responses are preferable in certain situations (especially where sensitive personal information is involved) and that being human-like is not the be-all and end-all of bots. Where does Meena, with conversations learned from social media interactions, find a role? And if it is plugged into a conversational experience, how do we guarantee that inappropriate things are not said? Are millions of public domain social media conversations the right dataset for the best chatbot in the world?

The bottom line

Meena is a fantastic contribution to the chatbot space. It is hard to capture the magnitude of what Google has achieved here. But we need to be careful about how we communicate the results of that research. Descriptions such as “the bot that can chat about anything” or “the best chatbot” are not necessarily useful. They distract from what is really important about this research — defining human-like conversation and exploring what role or importance that type of conversation has in the chatbot world. As more and more conversational AI solutions enter our daily lives, we need to focus on what is most valuable for us as humans. Meena moves us closer to that goal but doesn’t quite get us there yet.

Ronald Ashri is cofounder and CTO of conversational AI consultancy GreenShoot Labs and author of “The AI-Powered Workplace.” 

Source: Just how big a deal is Google’s new Meena chatbot model?
