Welcome to our latest Technology & Digital round-up of legal and non-legal tech-related news stories. This edition covers: the latest on AI governance; a new online safety law; a flurry of ICO guidance; and much more.
“Unsurprisingly, in this month’s edition of the Technology & Digital round-up a lot of the focus is on the future of AI governance as we approach the first global AI safety summit, which we now know takes place here in the UK on 1 and 2 November. We’re also about to see the much-discussed Online Safety Bill become law. Aimed in particular at protecting children, it also includes other provisions such as tackling online fraud. Tech companies are concerned about powers requiring them to scan encrypted messages, with Signal’s president already indicating it would leave if ‘forced to build a backdoor’.” – Luke Jackson
The legal part…
- The House of Commons Science, Innovation and Technology Committee published its interim report on AI governance – urging the government to bring in any new legislation in the next Parliament, or risk being left behind by other legislation like the EU AI Act that “could become the de facto standard and be hard to displace”.
- The report sets out 12 challenges that AI governance should meet. The Committee expects the government to say how it will address them when it responds by the end of October. The challenges should form the basis for discussion at the first global AI safety summit, which the government confirmed will be held at Bletchley Park on 1 and 2 November.
- A new pilot scheme is set to launch in 2024 to provide tailored advice to businesses on how to meet regulatory requirements for digital technology and AI. The service will be run by members of the Digital Regulation Cooperation Forum, comprising the Information Commissioner’s Office, Ofcom, the Competition and Markets Authority and the Financial Conduct Authority.
- The CMA published its initial report following a review of AI foundation models, described as large machine learning models trained on vast amounts of data. It proposes a set of principles to guide competitive AI markets and protect consumers.
- The much-debated Online Safety Bill is finally due to become law. Ofcom says it will consult very soon on the first set of standards it expects tech firms to meet in tackling illegal online harms.
- The TUC launched a new AI taskforce as it called for urgent legislation to safeguard workers’ rights and make sure AI benefits all. The taskforce aims to publish an AI and Employment Bill early in 2024 which it will lobby to have incorporated into UK law.
- The ICO is consulting until 20 October on the first phase of its draft biometric data guidance. AI is a priority area for the ICO.
- We’ve also seen detailed ICO guidance for employers to help them understand their data protection obligations when handling workers’ health information.
- And the ICO has been warning organisations to use alternatives to the blind carbon copy (BCC) email function when sending emails containing sensitive personal information, following “a catalogue of business blunders”. It has published new guidance on sending bulk communications by email.
- Over in Europe, the European Commission named 6 digital platforms as gatekeepers under the new Digital Markets Act. The big tech firms, including Microsoft and Google’s parent company Alphabet, will have 6 months to bring their designated core platform services into compliance.
- The new EU-US Data Privacy Framework, barely 2 months old, is already being challenged in court by a French MP due to concerns over US mass surveillance and the fact it was notified to EU countries in English only and not published in the Official Journal. As explained in this earlier edition of the Technology & Digital round-up, we’re waiting for the government’s own adequacy decision before organisations can rely on the UK extension to the Framework.
…and in other news
- Earlier this week we were delighted to host with BDO a lively session at Leeds Digital Festival on the future of the metaverse. Click here for our pick of must-attend events.
- The National Cyber Security Centre and National Crime Agency published a report on ransomware, extortion and the cyber crime ecosystem. See this blog post for details and the NCSC’s guide to ransomware.
- The NCSC urged organisations building services that use large language models to exercise caution, in the same way they would if using a product or code library that was in beta.
- The NCSC also explained why established cyber security principles are still important when developing or implementing machine learning models.
- Google DeepMind released a catalogue of genetic mutations to help pinpoint the cause of diseases, developed using a new AI tool.
- Bristol will host one of Europe’s most powerful supercomputers to drive pioneering AI research and innovation in the UK.
- The Institute for the Future of Work published a report on the adoption of AI in UK firms and the consequences for jobs. Its co-founder and director said that, with the AI summit fast approaching, the government must “act urgently to regulate, legislate and invest so that UK firms and workers can benefit from this fast-moving technology”.
- Over £50 million of government funding has been awarded to 30 cutting-edge manufacturing projects.
- Elon Musk’s Neuralink announced it has received approval to recruit for the first human clinical trial of its wireless brain-computer interface, which aims to enable people with paralysis to control external devices with their thoughts.
- And finally, at a conference reported on by the Law Society Gazette, Court of Appeal judge Lord Justice Birss said a ChatGPT-produced summary of an area of law, inserted into a judgment, was “jolly useful”. (The judge took personal responsibility for the contents, knew the answer, and could recognise ChatGPT’s output as acceptable.)