
UK data science and AI predictions for 2020

Feature | Dec 30, 2019 | 7 mins
Artificial Intelligence | Big Data | Data Center

IT industry insiders expect the data and artificial intelligence boom to continue through the new year

Credit: Art24h / Getty Images

Data science and AI had a big year in 2019. AI funding in the UK surpassed the total for the previous year within the first six months, according to research by Tech Nation, and 80 percent of respondents to an MHR Analytics survey said they planned to hire a data scientist or seek data consultancy before the end of the year.

The rise is set to continue in 2020 as the disciplines permeate more sectors and organisations, but the deployments will not be straightforward, and there are major dangers ahead for those who get them wrong.

The most prominent of these risks involves the General Data Protection Regulation (GDPR). The law has been in place since May 2018, but it was only last year that the UK’s data protection watchdog issued the first penalties for breaches of its rules, slapping British Airways and Marriott with respective fines of £183m and £99m on consecutive days in July.

Read next: The biggest ICO fines for data protection breaches and GDPR contraventions

The growing number of breaches and the eye-popping penalties that they can incur leads Blake Collins, research analyst at cyber security firm SiteLock, to envision a booming demand for “data breach hunters” who search for vulnerabilities before threat actors can actively exploit them.

“This phenomenon is a byproduct of a systemic problem: technology can be difficult for many to understand and if not leveraged properly for proactive protection, could cost an organisation money in terms of downtime, loss of reputation, and resources spent fixing a security-related problem,” he said.

“The drawback? Unless you’re hiring a cybersecurity professional to find the leak, someone less honorable most likely will. Or worse, if a company is lucky enough to have this type of data responsibly disclosed for free, it may not recognise the severity of the issue and may not have the resources or know-how to address the issue effectively.”

John Buyers, who leads the commercial practice at international law firm Osborne Clarke, believes the growth in AI use cases will create friction with the high standard for consent set in GDPR.

“We’re already seeing instances of companies refusing to implement machine learning because of an inability to sufficiently meet GDPR consent standards for processing personal data – whether that is systems which recognise faces, understand voices or provide customised online experiences,” he said.

“This is something which has been explicitly recognised by some European regulators – most recently the Irish Data Protection Commissioner Helen Dixon at a speech in Dublin in November 2019.”

Enterprises will need to respond by making governance an integral part of their AI systems, as Dataiku chief customer officer Kurt Muehmel explains.

“Enterprise AI platforms will comply by incorporating governance systems to ensure AI is controlled and calculated and based on models that people can easily explain and understand,” Muehmel said.

Ethical AI

Any legal developments will be playing catch-up to ethical concerns, which are set to become a business concern as well as a moral one as public awareness of privacy and bias continues to grow.

The risks of algorithmic decision-making will remain hard to address if the systems that deliver them stay hidden in black boxes. As a result, Matt Sanchez, chief technology officer at CognitiveScale, believes that understanding fairness, bias, explainability and robustness of AI models will become as important as understanding their performance and effectiveness in 2020.

“In certain industries, it will be difficult to justify the value of an AI system without scoring and selecting models based on these additional criteria because the risk will be deemed too high,” he said.

“Look for most leading data science platforms and machine learning toolkits to start including tools to help developers understand these aspects of machine learning. Also, look for regulators, legislators, and courts of law to start asking to understand these issues at a deeper level as more cases surface where consumer trust is breached due to data misuse or perceived algorithmic deficiencies.”
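To make the idea of scoring models on fairness as well as accuracy concrete, here is a minimal, illustrative sketch of one common criterion, the demographic parity difference (the gap in positive-prediction rates between two groups). It is not the tooling of any vendor mentioned above; the data, group labels and function names are assumptions for the example.

```python
def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rate between two groups.

    preds  -- list of 0/1 model decisions (1 = positive outcome)
    groups -- parallel list of group labels, one per prediction
    """
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(preds, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Toy decisions (1 = approved) for applicants in hypothetical groups "A" and "B"
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A platform of the kind Sanchez describes would report a score like this alongside accuracy, so that a model with a large gap could be flagged or rejected even if it performs well.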

The growing demand from both consumers and governments for greater scrutiny of AI and data-driven technologies leads Genpact chief digital officer Sanjay Srivastava to predict the rise of digital ethics officers.

“These officers will be responsible for implementing ethical frameworks to make appropriate decisions about the use of new technologies, considerations around security and bias, and preparing for the technology challenges still to come,” he said.

John Gikopoulos, global head of AI and automation at Infosys Consulting, envisions a similar role emerging with a more specific focus: the AI ethicist.

“In 2020, we will start to see enterprises employ people or even teams of people whose main role will be to formulate the ethics of our new AI-powered world,” he said. “These AI ethicists will need to liaise with the ecosystem of affiliated AI entities and gradually create, from the bottom up, the rules and conditions that will define the field of play.”

Adoption answers

Even if these ethical and legal barriers are overcome, developing AI solutions remains costly and resource-intensive. Sanjay Srivastava of Genpact predicts that businesses will respond by using a “transformation-as-a-service” adoption model.

“This model allows organisations to gain access to AI technologies that have already been trained on basic tasks and knowledge,” he said. “This, along with access to other data and cloud technologies, can significantly cut down on the resources required to keep up with shifting business strategies and customer demands.”

Gikopoulos of Infosys Consulting anticipates these issues triggering demand for a similar concept with matching nomenclature: AI-as-a-Service.

“What’s so exciting about AI-as-a-service is not just that the huge economies of scale will make the technology available to every organisation that wants to use it. It will also give us the much-missed ability to harness all the infrastructure, platforms and knowledge towards creating real and sustainable value,” he said.

“By packaging AI as part of a solution, we’ll make it much easier to identify valuable new use cases while providing a platform with end-to-end responsibility for delivering them.”

Srivastava also expects businesses to ease the path to adoption and democratise access to the technology by deploying AI accelerators that have been pretrained on the necessary domain expertise.

“By 2025, it’s estimated that organisations that are AI leaders will be 10 times more efficient and hold twice the market share of those who fail to embrace the technology,” he said. “Companies that fail to accelerate AI adoption will lose significant market share – making this a matter of survival for many organizations.”

Evolving skills

By 2025, around 463 exabytes of data will be created each day, according to research by special reports publisher Raconteur. Srivastava believes this growth will reduce the value of data and increase that of human judgment, as the vast opportunities hidden in the information need to be unlocked by someone who can make final decisions and drive action. He recommends that enterprises reskill their staff to find this value.

“Rather than sticking to traditional classroom settings, identify employee ‘experts’ who have knowledge others need and share that expertise, thus harnessing the collective intelligence within the organisation,” he said.

This will help deepen the pool of available AI talent, which will become harder to find as the demand for AI solutions grows.

John LaRocca, managing director at Fractal Analytics, expects businesses to adapt to this dilemma by enabling more applications to be developed by non-AI professionals.

“Non-AI practitioners, such as knowledge workers and analysts, who are not skilled AI practitioners (but have great domain expertise), will start to develop rudimentary applications aided by automated AI engines,” he said. “The onus will be on corporate training programs to retrain/upskill these new practitioners and on IT to enable them with automated AI environments that use AI itself (e.g., machine learning apps to help developers train models without having to write code).

“This is not unlike the historical lifecycle of analytics, and it will similarly benefit everyone in the ecosystem – businesses will expand their capacity to develop and benefit from AI apps, AI experts will be working on truly leading-edge applications, and newly upskilled non-AI practitioners will contribute more and have more marketable skills.”

tmacaulay
Senior Online Editor

Tom is a senior online editor across Computerworld, Techworld & CIO in the UK. Tom studied English Literature and History at Sussex University before gaining a Masters in Newspaper Journalism from City University. He's particularly interested in the public sector and the ethical implications of emerging technologies.
