If you consider Google the global leader in artificial intelligence, DeepMind is its crown jewel.
When Google folded DeepMind Health, the healthcare unit, into the main company, breaking the pledge that ‘data will not be connected to Google accounts’, you knew Google was cutting corners.

Google’s AI Supremacy is an Existential Threat

Bigger than the Department of Justice going after Google for antitrust is the harm DeepMind could do to the future of artificial intelligence. DeepMind is arguably the leader in deep learning, and the choices it makes will shape the fate of humanity in an AI-centric world.
The next real interface after smartphones is the neural interface, and a Google-powered neural interface (beyond earbuds and voice AI) will power the next era of augmented humans.

AI will be a More Dangerous Monopoly than Advertising in the Future

Enhanced, biotech-engineered humans with powerful AI hookups will simply be the new kind of human in 20 to 50 years.
If in 2019 AI can pick the right stocks and build winning portfolios, what will it be able to do in 20 or 30 years? DeepMind has generated a lot of hype with chess, Go and now gaming experiments, and it can do a lot with healthcare and patient data via Google Brain.
Making machines intelligent is wonderful for Google, but it’s also dangerous for humanity, because software engineering and AI have no global regulations in place to protect us from their misuse. If machine learning algorithms have taught us anything, it’s that there are unforeseen dangers in unleashing this technology upon the world.

Google’s Leadership Has Been Sketchy at Best in the Last 5 Years

If you followed the fiasco of Google’s attempt at creating an AI ethics board, you can gauge its basic level of maturity and sophistication on the matter. Google is an internally, politically divided nightmare with weak leadership. It errs on the side of profit and monetization, not ethics and transparency. I’m not sure many of us want to live in a world filled to the brim with machine intelligence. Google has a monopoly on how this is implemented in healthcare and other industries, and that’s downright scary.

DeepMind Could Go Wrong in So Many Ways for Humanity

DeepMind is doing great things, and it’s exactly for that reason that it’s an existential threat. DeepMind’s health unit being handed over to Google’s full control is the kind of thing that makes me no longer trust DeepMind to do the right thing for humanity. There have been other red flags and warning signs, though. From Google helping to power killer drones to building a censorship product for China, it’s hard to trust this company with the emergence of AI for human rights. When your own employees protest, it’s usually not a good sign.
DeepMind’s recent work on mastering complex cooperative agency in games should spark concern that it has the potential to become dangerous and will be used in military contexts. Google is by all accounts one of the biggest profiteers from ad revenue and military contracts. In case you are wondering, this is not a world of clean money but of easy cash cows, and, in effect, a how-to guide for using machine learning dangerously.

DeepMind Is Already a Leader in AI

Google can boast about publications in Nature or Science, but I’d be really worried considering the recent track record of Google executives on ethics. As I write this and read it aloud, my Google Home is spinning in faithful surveillance. DeepMind impresses and worries me, and I’m not alone.
DeepMind might one day use its access to data to “drive monopolistic profits” for Google’s parent company, Alphabet. It’s a risk before it has even become a reality, because AI is a hyped-up industry and its consequences will fall on the entire species. Nothing occurs in isolation. DeepMind is a concentration of the best AI talent on the planet, yet Google sees a world of products and monetization, not a landscape where AI needs ethical limits and regulation. Tech companies won’t, and can’t, police themselves; it’s just not profitable or even practical to do so.
As a futurist, I’m increasingly writing about tech and AI regulation. DeepMind is the elephant in the room, likely to be Google’s biggest triumph and Alphabet’s biggest disaster. I say this not to stoke fear, but because I believe it to be true. Deep learning, while not as incredible as we suppose, is still a force that has been, and will continue to be, weaponized. It will take decades after 2019 for AI to be properly regulated. In that grey zone anything goes, and the race to implement AI in healthcare will make FAANG companies billions.

The AI Fallacy: That It Will Help People

The narrative around AI is that it will empower people, reduce costs and make the world a better place, not unlike the myth Facebook sold as social media’s raison d’être. AI hype is the single biggest fraud in the tech world today, but DeepMind is on another level from what most other companies call AI.
  • How do you teach a machine right from wrong?
  • When will AI regulation even come to pass, and what will happen unregulated before then?
  • When will software developers and the bro-culture therein get an independent ethics board?
There are so many related questions that have no clear answers in 2019.
We live in a world where tech executives have more power than politicians, more power than prime ministers or presidents. It’s about time we took that seriously. The decisions they make now can literally shape the future of humanity in ways even they do not understand.
DeepMind appears to be a pioneer, but what are the dangers around it? Nobody seems to ask the question, or if they do, it’s suppressed. Convenient, when Google controls how modern SEO and online visibility even work.
Contact:
Kingdavidoheb@gmail.com
+2348135802101